Data Science and Security: Proceedings of IDSCS 2022 (Lecture Notes in Networks and Systems, 462)
ISBN 9811922101, 9789811922107

This book presents the best selected papers presented at the International Conference on Data Science for Computational Security (IDSCS 2022).


Language: English · Pages: 527 [505] · Year: 2022


Table of contents:
Preface
Contents
Editors and Contributors
CoInMPro: Confidential Inference and Model Protection Using Secure Multi-Party Computation
1 Introduction
2 Background
3 Method and Process
3.1 Experimental Setup
3.2 Implementation
4 Result Analysis
4.1 Insights
5 Conclusion and Future Work
References
An Improved Face Mask-aware Recognition System Based on Deep Learning
1 Introduction
2 Related Works
3 Methodology
3.1 Data Management Module
3.2 Face Mask-Aware Detection Model Learning Module
3.3 Face Unmask-aware Recognition Model Learning Module
3.4 Face Mask-aware Recognition Model Learning Module
3.5 Model Prediction Module
4 Experimental Setting
4.1 Dataset
4.2 Implementation
5 Experimental Results
5.1 Face Mask Detection Performance (RQ1)
5.2 Face Mask Recognition Performance (RQ2, RQ3)
5.3 The Ear Model Performance Comparison (RQ4)
6 Conclusion
References
Semantically Driven Machine Learning-Infused Approach for Tracing Evolution on Software Requirements
1 Introduction
2 Related Works
3 Proposed Methodology
4 Implementation and Performance Evaluation
5 Conclusion
References
Detecting Dengue Disease Using Ensemble Classification Algorithms
1 Introduction
2 Literature Survey
3 Methodology
3.1 Data Collection
3.2 Data Preprocessing
3.3 Data Processing
3.4 Model Building
4 Results and Discussion
5 Conclusion
References
Masked Face Recognition and Liveness Detection Using Deep Learning Technique
1 Introduction
2 Related Works
3 Problem Definition, Challenges, and Data Description
3.1 Problem Definition
3.2 Challenges
3.3 Dataset Description
4 Methodology
4.1 Module Description
4.2 Proposed Model
5 Experimental Result
5.1 Liveness Detection
5.2 Mask Detection
5.3 Face Recognition with Mask
5.4 Face Recognition Without a Mask
6 Conclusion and Future Work
References
Method of Optimal Threshold Calculation in Case of Radio Equipment Maintenance
1 Introduction
1.1 Introduction to the Problem
1.2 Motivation
1.3 Contribution
1.4 The Organization of the Paper
2 Literature Review and Problem Statement
3 Deterioration Model Description
4 Method of Optimal Preventive Threshold Calculation
5 Results and Discussions
6 Conclusion
7 Future Scope
References
Swarm Intelligence-Based Smart City Applications: A Review for Transformative Technology with Artificial Intelligence
1 Introduction
2 Related Works
3 Swarm Intelligence Algorithms
3.1 Artificial Bee Colony Optimization
3.2 Ant Colony Optimization
3.3 Elephant Herd Optimization
3.4 Particle Swarm Optimization
4 Application Areas
4.1 Application Technologies of Intelligent Manufacturing
4.2 Automated Digitation
5 Comparative Study
6 Conclusion
References
Performance Evaluation of Machine Learning Classifiers for Prediction of Type 2 Diabetes Using Stress-Related Parameters
1 Introduction
1.1 Diabetes Mellitus a Global Problem
1.2 Stress and Diabetes
2 Related Work
3 Methodology
3.1 Data Description
3.2 Model Architecture
4 Result and Analysis
5 Conclusion
References
Parametrised Hesitant Fuzzy Soft Multiset for Decision Making
1 Introduction
2 Some New Operations in Hesitant Fuzzy Soft Multiset
2.1 A Hesitant Fuzzy Soft Multiset Approach to Decision-Making Problem
3 Parametrised Hesitant Fuzzy Soft Multiset
4 A Parametrised Hesitant Fuzzy Soft Multiset Approach to Decision Making
4.1 Application in a Decision-Making Problem
5 Discussion and Comparison
6 Conclusion
References
Some Variations of Domination in Order Sum Graphs
1 Introduction
2 Domination in Order Sum Graphs
3 Domination in Line Graphs and Complement of Order Sum Graphs
4 Conclusion and Scope for Future Work
References
Emotion Detection Using Natural Language Processing and ConvNets
1 Introduction
2 Literature Survey
3 Working Principle
4 Description
4.1 2D Convoluted Neural Network
4.2 Recurrent Neural Networks
4.3 Intent-Based ChatBot
4.4 Datasets
5 Implementation and Result
5.1 Results
6 Conclusion
References
Analysis and Forecasting of Crude Oil Price Based on Univariate and Multivariate Time Series Approaches
1 Introduction
2 Data Description
3 Methodology
4 Analysis
4.1 Multivariate Approach
4.2 Univariate Approach
5 Results and Discussion
6 Conclusion
References
Deep Learning-based Gender Recognition Using Fusion of Texture Features from Gait Silhouettes
1 Introduction
2 Related Works
3 Proposed Methodology
4 Experiments and Results
5 Conclusion
References
Forest Protection by Fire Detection, Alarming, Messaging Through IoT, Blockchain, and Digital Technologies in Thailand Chiang Mai Forest Range
1 Introduction
1.1 Background of the Study
1.2 Research Objective
1.3 Relevant Review of Forest Fire and Technology Framework
1.4 State of Forest Fire in Thailand
1.5 Forest Fire-Related Policy and Acts
1.6 Prevention Activities for Forest Fire
2 Preventing Fire with Technology
2.1 The System Architecture and the Flowchart
2.2 The Proposed Method in Chiang Mai Case Study
3 Conclusion
References
On Circulant Completion of Graphs
1 Introduction
2 Circulant Completion of Graphs
3 Conclusion
References
Analysis of Challenges Experienced by Students with Online Classes During the COVID-19 Pandemic
1 Introduction
2 Literature Review
3 Problem Definition, Research Challenges, and Dataset Description
3.1 Problem Definition
3.2 Challenges
3.3 Dataset Description
4 Methodology
5 Results and Discussion
6 Conclusion and Future Work
References
Conceptualization, Modeling, Visualization, and Evaluation of Specialized Domain Ontologies for Nano-energy as a Domain
1 Introduction
2 Related Work
3 Ontology Modeling and Knowledge Representation for Nano-energy
4 Visualization
5 Ontology Evaluation
6 Conclusion
References
An AI-Based Forensic Model for Online Social Networks
1 Introduction
1.1 Importance of Crime Data Analysis
1.2 Social Media Crimes—Types
1.3 Evidence Acquisition and Provenance Management
2 Literature Review
3 Proposed Model
3.1 Model Description
3.2 Formal Description of the Model
4 Evaluation of Model Using ISO Standards and IT Specifications
5 Comparison with Other Models
6 Conclusion
References
Protection Against SIM Swap Attacks on OTP System
1 Introduction
2 Related Works
3 Proposed Model
3.1 Data Extraction
3.2 Risk Engine
3.3 Decision Block
4 Results and Discussions
5 Conclusion
References
One Time Password-Based Two Channel Authentication Mechanism Using Blockchain
1 Introduction
1.1 Motivation
1.2 Contribution
1.3 Organization
2 Literature Review
3 Proposed Method
3.1 Node Registration Phase
3.2 Authentication Phase
3.3 Extracting Metadata
3.4 Hashing
4 Implementation and Analysis
5 Conclusion
References
IEESWPR: An Integrative Entity Enrichment Scheme for Socially Aware Web Page Recommendation
1 Introduction
1.1 Motivation
1.2 Contribution
1.3 Organization
2 Related Works
3 Proposed Architecture
4 Implementation
5 Performance Evaluation and Result
6 Conclusion
References
Interval-Valued Fuzzy Trees and Cycles
1 Introduction
2 Interval-Valued Fuzzy Tree
3 Interval-Valued Fuzzy Cycle
4 Applications of Interval-Valued Fuzzy Trees and Cycles
5 Conclusion
References
OntoQC: An Ontology-Infused Machine Learning Scheme for Question Classification
1 Introduction
2 Related Work
3 Proposed Architecture
4 Implementation
5 Conclusion
References
A Study of Preprocessing Techniques on Digital Microscopic Blood Smear Images to Detect Leukemia
1 Introduction
2 Literature Review
3 Material and Methods
3.1 Wiener Filter
3.2 Bilateral Filter
3.3 Gaussian Filter
3.4 Median Filter
3.5 Mean Filter
4 Results and Discussion
5 Conclusion
References
Emotion Recognition of Speech by Audio Analysis using Machine Learning and Deep Learning Techniques
1 Introduction
2 Literature Survey
3 Speech Recognition and Its Various Factors
4 Methodology for Speech Emotion Recognition
5 Steps and Results
6 Conclusion
References
Decision Support System Based on the ELECTRE Method
1 Introduction
2 Formulation of the Multi-criteria Choice Problem
3 The Review of the Existing Methods
4 The Decision-Making Models Development
4.1 A Model Based on the Outranking Relation Graph Construction and Processing
4.2 Model Based on the Construction of a Generalized Objective Function
5 Experimental Research
6 Conclusion and Future Work
References
An Improved and Efficient YOLOv4 Method for Object Detection in Video Streaming
1 Introduction
2 Related Work
3 Methodology
3.1 Data Preprocessing
3.2 YOLO (You Only Look Once)
3.3 Training and Implementation
4 Result and Discussion
5 Conclusion
References
A Survey on Adaptive Authentication Using Machine Learning Techniques
1 Introduction
2 Adaptive Authentication Techniques
2.1 Authentication Technique Using Behavioral Dynamics
2.2 Authentication Using Device or Browser-Based Fingerprinting Technique
3 Fingerprint Authentication Using Advances Technologies
4 Categories of Attacks During Authentication
5 Discussion and Challenges
5.1 Challenges
6 Conclusion
References
An Overview on Security Challenges in Cloud, Fog, and Edge Computing
1 Introduction
2 Comparison Between Cloud, Fog, and Edge Computing
2.1 Cloud Computing
2.2 Fog Computing
2.3 Edge Computing
3 Security Challenges in Cloud Computing
3.1 Data Breaches
3.2 Compromised Credentials and Broken Authentication
3.3 Hacked Interface and APIs
3.4 Data Locations
3.5 The APT Parasite
3.6 Cloud Service Abuses
4 Security Challenges in Fog Computing
4.1 Authentication
4.2 Malicious Attack
4.3 Data Protection
4.4 Privacy
4.5 Confidentiality
5 Security Challenges in Edge Computing
5.1 Privacy and Security of Data
5.2 Access Control
5.3 Attack Mitigation
5.4 Detection for Anomalies
6 Conclusion
References
An Efficient Deep Learning-Based Hybrid Architecture for Hate Speech Detection in Social Media
1 Introduction
2 Literature Review
3 Methodology
3.1 Hybrid Model
4 Experimental Results and Analysis
5 Conclusion
References
Application of Machine Learning Algorithms to Real-Time Indian Railways Data for Delay Prediction
1 Introduction
1.1 Objectives
2 Literature Survey
3 Technical Details
4 Experiment
5 Results and Discussion
6 Conclusion
References
Computer Assisted Unsupervised Extraction and Validation Technique for Brain Images from MRI
1 Introduction
2 Methods
2.1 Extraction Evaluation Metrics
3 Results and Discussion
4 Conclusion
References
A Hybrid Feature Selection for Improving Prediction Performance with a Brain Stroke Case Study
1 Introduction
1.1 Problem Definition
1.2 Motivation
1.3 Contribution
1.4 Organization of the Paper
2 Related Work
3 Proposed Framework
4 Dataset and Experimental Setup
5 Results and Discussion
6 Conclusion and Future Work
References
Analysis of Fine Needle Aspiration Images by Using Hybrid Feature Selection and Various Machine Learning Classifiers
1 Introduction
2 Related Works
3 Acquisition of Data
4 Approaches
4.1 Methods for Selecting Features
4.2 Classification
5 Discussions and Recommendations
5.1 Analysis of Various Machine Learning Techniques
6 Conclusion
References
Exploring Transfer Learning Techniques for Flower Recognition Using CNN
1 Introduction
2 Literature Review
3 Method
3.1 Data Visualization and Augmentation
3.2 Training from Scratch
4 Experiments and Results
4.1 Training Custom Model from Scratch
4.2 RegNetX
4.3 RegNetY
4.4 Other Models
5 Conclusion and Future Scope
References
Novel Approach for Automatic Cataract Detection Using Image Processing
1 Introduction
2 Related Work
3 Work Strategy and Implementation
3.1 Preprocessing
3.2 Thresholding
3.3 Morphological Operations
3.4 Iris Contour Separation
3.5 Image Inversion
4 Experimental Results
5 Conclusion and Future Scope
References
Sentimental Analysis on Online Education Using Machine Learning Models
1 Introduction
2 Related Works
3 Methodology
3.1 Data Collection and Preprocessing
3.2 EDA
3.3 Algorithms Used for the Proposed Study
4 Proposed Architecture of the System
5 Results and Performance Evaluation
6 Conclusion
References
A Study on Crude Oil Price Forecasting Using RNN Model
1 Introduction
2 Literature Review
3 Methodology
3.1 Data Description
3.2 Recurrent Neural Networks (RNNs)
3.3 Activation Functions
3.4 Long Short-Term Memory (LSTM)
3.5 Gated Recurrent Network (GRU)
3.6 Mean Absolute Error
3.7 Mean Squared Error
3.8 Mean Absolute Percentage Error (MAPE/M)
4 Model Validation Strategies
5 Experiment and Results
6 Conclusion
References
Applying Ensemble Techniques for the Prediction of Alcoholic Liver Cirrhosis
1 Introduction
2 Related Work
3 Proposed Methodology—Stacked Ensemble Classifier
3.1 Dimensionality Reduction
4 Experimental Analysis
4.1 Effectiveness of Dimensionality Reduction
4.2 Performance Analysis of Ensemble Classifiers
5 Conclusion
References
Fake News Detection using Machine Learning and Deep Learning Hybrid Algorithms
1 Introduction
1.1 Problem Statement
1.2 Motivation
1.3 Contribution
1.4 Organization of the Paper
2 Related Works
3 Methodology
3.1 Dataset Description
3.2 Preprocessing
3.3 Proposed Models
4 Results and Discussion
5 Conclusion and Future Scope
References
The Pendant Number of Line Graphs and Total Graphs
1 Introduction
2 Pendant Number of Line Graphs and Total Graphs
3 Some Open Problems
4 Conclusion
References
Impact of Prolonged Screen Time on the Mental Health of Students During COVID-19
1 Introduction
2 Literature Review
3 Methodology
3.1 Participants and Survey
3.2 Questionnaire
3.3 Data Analysis
4 Results
4.1 General Information
4.2 Average Screen Time Per Day
4.3 Stress on Eyes
4.4 Stress Due to Workload
4.5 Concentration Level
4.6 Productivity
4.7 Preferred Average Screen Time
5 Findings and Suggestions
6 Limitation of the Study
7 Conclusion and Future Scope
References
A Systematic Review of Challenges, Tools, and Myths of Big Data Ingestion
1 Introduction
2 Data Ingestion
2.1 Data Lake
2.2 Master Data Assets
3 Myths of Data Ingestion
4 Data Ingestion Challenges
5 Popular Data Ingestion Frameworks
5.1 Kafka
5.2 NiFi
5.3 Flume
6 Comparison of Popular Ingestion Tools
7 Conclusion
References
Automated Fetal Brain Localization, Segmentation, and Abnormalities Detection Through Random Sample Consensus
1 Introduction
2 Related Works
3 Challenges
4 Proposed Method
5 Dataset
6 Result and Discussion
7 Conclusion and Future Scope
References
Author Index

Lecture Notes in Networks and Systems 462

Samiksha Shukla · Xiao-Zhi Gao · Joseph Varghese Kureethara · Durgesh Mishra, Editors

Data Science and Security
Proceedings of IDSCS 2022

Lecture Notes in Networks and Systems
Volume 462

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems, and others. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output.

The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them.

Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

For proposals from Asia please contact Aninda Bose ([email protected]).

More information about this series at https://link.springer.com/bookseries/15179

Samiksha Shukla · Xiao-Zhi Gao · Joseph Varghese Kureethara · Durgesh Mishra Editors

Data Science and Security
Proceedings of IDSCS 2022

Editors

Samiksha Shukla
Christ University
Bengaluru, Karnataka, India

Xiao-Zhi Gao
School of Computing
University of Eastern Finland
Kuopio, Finland

Joseph Varghese Kureethara
Christ University
Bengaluru, Karnataka, India

Durgesh Mishra
Department of Computer Science and Engineering
Sri Aurobindo Institute of Technology
Indore, Madhya Pradesh, India

ISSN 2367-3370 ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-981-19-2210-7 ISBN 978-981-19-2211-4 (eBook)
https://doi.org/10.1007/978-981-19-2211-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This volume contains the papers presented at the International Conference on Data Science, Computation, and Security (IDSCS 2022), held on February 11–12, 2022, and organized by CHRIST (Deemed to be University), Pune Lavasa Campus, India. IDSCS 2022 received 160 research submissions from six different countries, viz. Taiwan, Iraq, United Arab Emirates, Thailand, Malaysia, and Ukraine.

Technology is the driving force in this era of globalization for any country's socio-economic growth and sustained development. Data and security strongly influence the process of globalization, particularly in the productivity, commercial, and financial spheres. This phase of revolution has significant implications for the current and future societal and economic situation of all the countries in the world. Data and security play a fundamental role in enabling people to rely on digital media for a better lifestyle, and data science and security are significant contributors to the success of the current Digital India initiative.

The international conference deliberated on topics specified within its scope. It focused on exploring the role of technology, data science, computational security, and related applications in enhancing and securing business processes. The conference offered a platform for bringing forward substantial research and literature across the arena of data science and security, and it provided an overview of upcoming technologies. IDSCS 2022 gave leading experts, academicians, fellow students, researchers, and practitioners a platform to share their perceptions, provide supervision, and address participants' questions and concerns.

After a rigorous peer review with the help of program committee members consisting of several reviewers across the globe, 44 papers were selected for the conference. The conference was inaugurated by Dr. K. K. Aggarwal, Chairman, National Board of Accreditation, India, and other eminent dignitaries, including Dr. Fr Abraham VM, Dr. A. K. Nayak, Dr. Fr Jose CC, Dr. Fr Joseph Varghese, Dr. Fr Jossy P. George, Dr. Fr Arun Antony, Mrs. Bhanumathi S, and Dr. D. K. Mishra.


The conference witnessed keynote addresses from eminent speakers, namely Dr. L. M. Patanaik, Eminent Professor, IISc, Bengaluru, Dr. Aninda Bose, Senior Publishing Editor, Springer Nature, Dr. Arceloni Neusa Volpato, International Affairs Coordinator, Transcultural Practices Master and Coordinator, Centro Universitário Facvest—UNIFACVEST, Dr. Marta Zurek-Mortkan, Lukasiewicz Research Network, Institute for Sustainable Technologies, Department of Control Systems, Radom, Poland, Dr. Sudeendra Koushik, Innovation Director at Volvo Group, Head of CampX, Bangalore, Dr. Chih-Yang Lin, Chief of the Global Affairs Office and Professor, Department of Electrical Engineering, Yuan-Ze University, Taoyuan, Taiwan, Dr. Fr Joseph Varghese, Director and Professor, Centre for Research, Christ (Deemed to be University), Bengaluru, India, Dr. Silvio Cesar Viegas, Assistant Professor II—Higher Level FAQI (Porto Alegre and Gravataí), Brazil, Dr. Rejane Dutra Bergamaschi, Professor of Psychology, Centro Universitário Facvest (Unifacvest), Brazil, Dr. Doris Hernandez Dukova, Director, Inter-institutional and International Relations, Technological School of the Central Technical Institute of Bogotá, Colombia, and Dr. Xavier Chelladurai, Director and Professor, Human Resource Development Centre, Christ (Deemed to be University), Bengaluru, India.

The organizers wish to thank Dr. Aninda Bose, Senior Editor, Springer Nature, New Delhi, India, for his support and guidance, and Mr. Suresh Dharmalingam, Springer Nature. The organizing committee also wishes to thank the EasyChair Conference Management System, a wonderful tool for the easy organization and compilation of conference documents.

Samiksha Shukla, Pune, India
Xiao-Zhi Gao, Kuopio, Finland
Joseph Varghese Kureethara, Bengaluru, India
Durgesh Mishra, Indore, India

Contents

CoInMPro: Confidential Inference and Model Protection Using Secure Multi-Party Computation . . . 1
Kapil Tiwari, Kritica Bisht, and Jossy P. George

An Improved Face Mask-aware Recognition System Based on Deep Learning . . . 15
Chih-Yang Lin, Amornthep Rojanasarit, Tipajin Thaipisutikul, Chi-Wen Lung, and Fityanul Akhyar

Semantically Driven Machine Learning-Infused Approach for Tracing Evolution on Software Requirements . . . 31
Rashi Anubhi Srivastava and Gerard Deepak

Detecting Dengue Disease Using Ensemble Classification Algorithms . . . 43
S. Ruban, Naresha, and Sanjeev Rai

Masked Face Recognition and Liveness Detection Using Deep Learning Technique . . . 53
Mukul Mishra, Lija Jacob, and Samiksha Shukla

Method of Optimal Threshold Calculation in Case of Radio Equipment Maintenance . . . 69
Oleksandr Solomentsev, Maksym Zaliskyi, Yuliya Averyanova, Ivan Ostroumov, Nataliia Kuzmenko, Olha Sushchenko, Borys Kuznetsov, Tatyana Nikitina, Eduard Tserne, Vladimir Pavlikov, Simeon Zhyla, Kostiantyn Dergachov, Olena Havrylenko, Anatoliy Popov, Valerii Volosyuk, Nikolay Ruzhentsev, and Oleksandr Shmatko

Swarm Intelligence-Based Smart City Applications: A Review for Transformative Technology with Artificial Intelligence . . . 81
Anusruti Mitra, Dipannita Basu, and Ahona Ghosh


Performance Evaluation of Machine Learning Classifiers for Prediction of Type 2 Diabetes Using Stress-Related Parameters . . . 93
Rohini Patil and Kamal Shah

Parametrised Hesitant Fuzzy Soft Multiset for Decision Making . . . 103
Sreelekshmi C. Warrier, Terry Jacob Mathew, and Vijayakumar Varadarajan

Some Variations of Domination in Order Sum Graphs . . . 117
Javeria Amreen and Sudev Naduvath

Emotion Detection Using Natural Language Processing and ConvNets . . . 127
Akash Das, Kartik Nair, and Yukti Bandi

Analysis and Forecasting of Crude Oil Price Based on Univariate and Multivariate Time Series Approaches . . . 137
Anna Thomas and Nimitha John

Deep Learning-based Gender Recognition Using Fusion of Texture Features from Gait Silhouettes . . . 153
K. T. Thomas and K. P. Pushpalatha

Forest Protection by Fire Detection, Alarming, Messaging Through IoT, Blockchain, and Digital Technologies in Thailand Chiang Mai Forest Range . . . 167
Siva Shankar Ramasamy, Naret Suyaroj, and Nopasit Chakpitak

On Circulant Completion of Graphs . . . 181
Toby B Antony and Sudev Naduvath

Analysis of Challenges Experienced by Students with Online Classes During the COVID-19 Pandemic . . . 189
D. Elsheba, Nirmalya Sarkar, S. Sandeep Jabez, Arun Antony Chully, and Samiksha Shukla

Conceptualization, Modeling, Visualization, and Evaluation of Specialized Domain Ontologies for Nano-energy as a Domain . . . 199
Palvannan and Gerard Deepak

An AI-Based Forensic Model for Online Social Networks . . . 209
Varsha Pawar and Deepa V. Jose

Protection Against SIM Swap Attacks on OTP System . . . 219
Ebin Varghese and R. M. Pramila

One Time Password-Based Two Channel Authentication Mechanism Using Blockchain . . . 229
H. P. Asha and I. Diana Jeba Jingle


IEESWPR: An Integrative Entity Enrichment Scheme for Socially Aware Web Page Recommendation . . . 239
Gurunameh Singh Chhatwal and Gerard Deepak

Interval-Valued Fuzzy Trees and Cycles . . . 251
Ann Mary Philip, Sunny Joseph Kalayathankal, and Joseph Varghese Kureethara

OntoQC: An Ontology-Infused Machine Learning Scheme for Question Classification . . . 265
D. Naga Yethindra, Gerard Deepak, and A. Santhanavijayan

A Study of Preprocessing Techniques on Digital Microscopic Blood Smear Images to Detect Leukemia . . . 275
Ashwini P. Patil, Manjunatha Hiremath, and K. Kavipriya

Emotion Recognition of Speech by Audio Analysis using Machine Learning and Deep Learning Techniques . . . 283
Ati Jain, Hare Ram Sah, and Abhay Kothari

Decision Support System Based on the ELECTRE Method . . . 295
Olena Havrylenko, Kostiantyn Dergachov, Vladimir Pavlikov, Simeon Zhyla, Oleksandr Shmatko, Nikolay Ruzhentsev, Anatoliy Popov, Valerii Volosyuk, Eduard Tserne, Maksym Zaliskyi, Oleksandr Solomentsev, Ivan Ostroumov, Olha Sushchenko, Yuliya Averyanova, Nataliia Kuzmenko, Tatyana Nikitina, and Borys Kuznetsov

An Improved and Efficient YOLOv4 Method for Object Detection in Video Streaming . . . 305
Javid Hussain, Boppuru Rudra Prathap, and Arpit Sharma

A Survey on Adaptive Authentication Using Machine Learning Techniques . . . 317
R. M. Pramila, Mohammed Misbahuddin, and Samiksha Shukla

An Overview on Security Challenges in Cloud, Fog, and Edge Computing . . . 337
Deep Rahul Shah, Dev Ajay Dhawan, and Vijayetha Thoday

An Efficient Deep Learning-Based Hybrid Architecture for Hate Speech Detection in Social Media . . . 347
Nilanjan Nath, Jossy P. George, Athishay Kesan, and Andrea Rodrigues

Application of Machine Learning Algorithms to Real-Time Indian Railways Data for Delay Prediction . . . 357
V. Asha and Heena Gupta


Computer Assisted Unsupervised Extraction and Validation Technique for Brain Images from MRI . . . 365
S. Vijayalakshmi, T. Genish, and S. P. Gayathri

A Hybrid Feature Selection for Improving Prediction Performance with a Brain Stroke Case Study . . . 373
D. Ushasree, A. V. Praveen Krishna, Ch. Mallikarjuna Rao, and D. V. Lalita Parameswari

Analysis of Fine Needle Aspiration Images by Using Hybrid Feature Selection and Various Machine Learning Classifiers . . . 383
N. Preethi and W. Jaisingh

Exploring Transfer Learning Techniques for Flower Recognition Using CNN . . . 393
Surya Pandey, Bangari Sindhuja, C. S. Nagamanjularani, and Sasikala Nagarajan

Novel Approach for Automatic Cataract Detection Using Image Processing . . . 403
Satish Chaurasiya, Neelu Nihalani, and Durgesh Mishra

Sentimental Analysis on Online Education Using Machine Learning Models . . . 413
Sharon T. Mathew and Lija Jacob

A Study on Crude Oil Price Forecasting Using RNN Model . . . 423
Joseph Saj Pulimoottil and Jitendra Kaushik

Applying Ensemble Techniques for the Prediction of Alcoholic Liver Cirrhosis . . . 433
M. R. Vinutha, J. Chandrika, Balachandran Krishnan, and Sujatha Arun Kokatnoor

Fake News Detection using Machine Learning and Deep Learning Hybrid Algorithms . . . 447
Aditya Saha and K. T. Thomas

The Pendant Number of Line Graphs and Total Graphs . . . 457
Jomon Kottarathil, Sudev Naduvath, and Joseph Varghese Kureethara

Impact of Prolonged Screen Time on the Mental Health of Students During COVID-19 . . . 469
Aaron Mathew Shaji, K. S. Vivekanand, Sagaya Aurelia, and Deepthi Das

A Systematic Review of Challenges, Tools, and Myths of Big Data Ingestion . . . 481
Mohammad Irfan and Jossy P. George


Automated Fetal Brain Localization, Segmentation, and Abnormalities Detection Through Random Sample Consensus . . . 495
S. Vijayalakshmi, P. Durgadevi, S. P. Gayathri, and A. S. Mohammed Shariff

Author Index . . . 505

Editors and Contributors

About the Editors

Dr. Samiksha Shukla is currently employed as Associate Professor and Head, Data Science Department, CHRIST (Deemed to be University), Pune Lavasa Campus. Her research interests include computational security, machine learning, data science, and big data. She has presented and published several research papers in reputed journals and conferences. She has 15 years of academic and research experience and serves as a reviewer for Inderscience journals, Springer Nature's International Journal of Systems Assurance Engineering and Management (IJSA), and IEEE and ACM conferences. An experienced and focused teacher, she is committed to promoting the education and well-being of students, is passionate about innovation and good practices in teaching, and is engaged in continuous learning to broaden her knowledge and experience. She has core expertise in computational security, artificial intelligence, and healthcare-related projects, and is skilled in adopting a pragmatic approach to improvising solutions and resolving complex research problems. She possesses an integrated set of competencies encompassing teaching, mentoring, strategic management, and establishing centres of excellence via industry tie-ups, and has a track record of driving research and development projects with international collaboration, having been instrumental in organizing various national and international events.

Dr. Xiao-Zhi Gao received his B.Sc. and M.Sc. degrees from the Harbin Institute of Technology, China, in 1993 and 1996, respectively. He obtained his D.Sc. (Tech.) degree from the Helsinki University of Technology (now Aalto University), Finland, in 1999. He has been working as a professor at the University of Eastern Finland since 2018. Professor Gao has published more than 450 technical papers in refereed journals and international conferences, and his current Google Scholar h-index is 34. His research interests are nature-inspired computing methods with applications in optimization, data mining, machine learning, control, signal processing, and industrial electronics.


Dr. Joseph Varghese Kureethara heads the Centre for Research at Christ University. He has over 16 years of experience in teaching and research at CHRIST (Deemed to be University), Bengaluru, and has published over 100 articles in the fields of graph theory, number theory, history, religious studies, and sports, in both English and Malayalam. Dr. Joseph has co-edited three books and authored one, and his blog articles, comments, facts, and poems have earned about 1.5 lakh total pageviews. He has delivered invited talks at over thirty conferences and workshops. He is the Mathematics section editor of Mapana Journal of Sciences and a member of the editorial board and a reviewer for several journals. He has served as a member of the Board of Studies, Board of Examiners, and management committees of several institutions. He has supervised five Ph.D.s and 12 M.Phil.s and is currently supervising eight Ph.D.s. Current profession: Dr. Joseph Varghese is a Professor of Mathematics at CHRIST (Deemed to be University), Bengaluru.

Dr. Durgesh Mishra received his M.Tech. degree in Computer Science from DAVV, Indore, in 1994 and his Ph.D. in Computer Engineering in 2008. Presently he is working as Professor (CSE) and Director, Microsoft Innovation Centre, at Sri Aurobindo Institute of Technology, Indore, MP, India. He has around 24 years of teaching experience and more than 6 years of research experience. His research topics are secure multi-party computation, image processing, and cryptography. He has published more than 80 papers in refereed international/national journals and conferences, including IEEE and ACM venues. He is a senior member of IEEE, the Computer Society of India, and ACM, and has played an important role in professional societies as Chairman. He has been a consultant to industries and government organizations, such as the sales tax and labour departments of the Government of Madhya Pradesh, India.

Contributors

Fityanul Akhyar Department of Electrical Engineering College of Electrical and Communication Engineering, Yuan Ze University, Taoyuan, Taiwan; School of Electrical Engineering, Telkom University, Jawa Barat, Indonesia
Javeria Amreen Department of Mathematics, CHRIST (Deemed to be University), Bangalore, India
Toby B Antony Department of Mathematics, CHRIST (Deemed to be University), Bangalore, India
H. P. Asha Department of Computer Science and Engineering, School of Engineering and Technology, Christ (Deemed to be) University, Bangalore, India
V. Asha New Horizon College of Engineering, Bengaluru, India
Sagaya Aurelia CHRIST (Deemed to be University), Bangalore, India
Yuliya Averyanova National Aviation University, Kyiv, Ukraine


Yukti Bandi Electronics and Telecommunication Department, D. J. Sanghvi College of Engineering, Vile-Parle (W), Mumbai, India
Dipannita Basu Department of Information Technology, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India
Kritica Bisht CHRIST University, Bangalore, India
Nopasit Chakpitak International College of Digital Innovation-Chiang Mai University, Chiang Mai, Thailand
J. Chandrika Department of ISE, Malnad College of Engineering, Hassan, India
Satish Chaurasiya University Institute of Technology, RGPV, Bhopal, M.P, India
Gurunameh Singh Chhatwal Department of Electronics and Communication Engineering, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
Arun Antony Chully Christ University, Bangalore, India
Akash Das Electronics and Telecommunication Department, D. J. Sanghvi College of Engineering, Vile-Parle (W), Mumbai, India
Deepthi Das CHRIST (Deemed to be University), Bangalore, India
Gerard Deepak Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
Kostiantyn Dergachov National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
Dev Ajay Dhawan NMIMS Mukesh Patel School of Technology Management and Engineering, Mumbai, India
P. Durgadevi Galgotias College of Engineering and Technology, Greater Noida, India
D. Elsheba Christ University, Bangalore, India
S. P. Gayathri Department of Computer Science and Applications, The Gandhigram Rural Institute–Deemed to be University, Gandhigram, India
T. Genish School of Computing Science, KPR College of Arts Science and Research, Coimbatore, India
Jossy P. George CHRIST (Deemed to be University), Bengaluru, India
Ahona Ghosh Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India
Heena Gupta Visvesvaraya Technological University, Belgaum, India
Olena Havrylenko National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine


Manjunatha Hiremath Department of Computer Science, CHRIST (Deemed to be University), Bengaluru, India
Javid Hussain Department of Computer Science and Engineering, Christ (Deemed to be University), Bangalore, India
Mohammad Irfan CHRIST (Deemed to be University), Bangalore, India
S. Sandeep Jabez Christ University, Bangalore, India
Lija Jacob Department of Data Science, Christ University, Bengaluru, India
Ati Jain Institute of Advance Computing, SAGE University, Indore, India
W. Jaisingh School of Computing Science and Engineering, VIT Bhopal University, Bhopal, Madhya Pradesh, India
I. Diana Jeba Jingle Department of Computer Science and Engineering, School of Engineering and Technology, Christ (Deemed to be) University, Bangalore, India
Nimitha John CHRIST (Deemed to be University), Bengaluru, India
Deepa V. Jose Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India
Sunny Joseph Kalayathankal Jyothi Engineering College, Thrissur, India
Jitendra Kaushik Christ University, Bangalore, India
K. Kavipriya Department of Computer Science, CHRIST (Deemed to be University), Bengaluru, India
Athishay Kesan CHRIST (Deemed to be University), Bengaluru, India
Sujatha Arun Kokatnoor Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India
Abhay Kothari Sagar Institute of Research & Technology, SAGE University, Indore, India
Jomon Kottarathil St. Joseph’s College, Moolamattom, Kerala, India
A. V. Praveen Krishna Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Balachandran Krishnan Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India
Joseph Varghese Kureethara Christ University, Bangalore, India
Nataliia Kuzmenko National Aviation University, Kyiv, Ukraine
Borys Kuznetsov State Institution “Institute of Technical Problems of Magnetism of the National Academy of Sciences of Ukraine”, Kharkiv, Ukraine


Chih-Yang Lin Department of Electrical Engineering College of Electrical and Communication Engineering, Yuan Ze University, Taoyuan, Taiwan
Chi-Wen Lung Department of Creative Product Design, Asia University, Taichung City, Taiwan
Sharon T. Mathew Department of Data Science, Christ University, Bengaluru, India
Terry Jacob Mathew School of Computer Sciences, Mahatma Gandhi University, Kottayam, India; MACFAST, Thiruvalla, India
Mohammed Misbahuddin Centre for Development and Advanced Computing (CDAC), Bangalore, India
Durgesh Mishra Sri Aurobindo Institute of Technology, RGPV, Indore, M.P, India
Mukul Mishra Christ University, Bangalore, India
Anusruti Mitra Department of Information Technology, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India
Sudev Naduvath Department of Mathematics, CHRIST (Deemed to be University), Bangalore, India
D. Naga Yethindra Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, India
C. S. Nagamanjularani Department of Computer Science and Engineering, New Horizon College of Engineering, Bengaluru, India
Sasikala Nagarajan Department of Computer Science and Engineering, New Horizon College of Engineering, Bengaluru, India
Kartik Nair Electronics and Telecommunication Department, D. J. Sanghvi College of Engineering, Vile-Parle (W), Mumbai, India
Naresha St Aloysius College (Autonomous), Mangalore, India
Nilanjan Nath CHRIST (Deemed to be University), Bengaluru, India
Neelu Nihalani University Institute of Technology, RGPV, Bhopal, M.P, India
Tatyana Nikitina Kharkiv National Automobile and Highway University, Kharkiv, Ukraine
Ivan Ostroumov National Aviation University, Kyiv, Ukraine
Palvannan Department of Metallurgical and Materials Engineering, National Institute of Technology, Tiruchirappalli, India
Surya Pandey Department of Computer Science and Engineering, New Horizon College of Engineering, Bengaluru, India


D. V. Lalita Parameswari Department of CSE, GNITS, Hyderabad, India
Ashwini P. Patil Department of Computer Science, CHRIST (Deemed to be University), Bengaluru, India; Department of Computer Application, CMR Institute of Technology, Bengaluru, India
Rohini Patil Thakur College of Engineering and Technology, Mumbai, India; Terna Engineering College, Navi Mumbai, India
Vladimir Pavlikov National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
Varsha Pawar Department of Computer Applications, CMR Institute of Technology, Bengaluru, India; Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India
Ann Mary Philip Assumption College, Changanacherry, India
Anatoliy Popov National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
R. M. Pramila Christ University, Bangalore, India
Boppuru Rudra Prathap Department of Computer Science and Engineering, Christ (Deemed to be University), Bangalore, India
N. Preethi Christ University, Pune, Lavasa, India
Joseph Saj Pulimoottil Christ University, Bangalore, India
K. P. Pushpalatha School of Computer Sciences, Mahatma Gandhi University, Kottayam, India
Sanjeev Rai Father Muller Medical College, Mangalore, India
Siva Shankar Ramasamy International College of Digital Innovation-Chiang Mai University, Chiang Mai, Thailand
Ch. Mallikarjuna Rao Department of CSE, GRIET, Hyderabad, India
Andrea Rodrigues CHRIST (Deemed to be University), Bengaluru, India
Amornthep Rojanasarit Department of Electrical Engineering College of Electrical and Communication Engineering, Yuan Ze University, Taoyuan, Taiwan
S. Ruban St Aloysius College (Autonomous), Mangalore, India
Nikolay Ruzhentsev National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
Hare Ram Sah Sagar Institute of Research & Technology, SAGE University, Indore, India


Aditya Saha Department of Data Science, Christ (Deemed to be University), Bangalore, India
A. Santhanavijayan Department of Computer Science and Engineering, National Institute of Technology, Tiruchirapalli, India
Nirmalya Sarkar Christ University, Bangalore, India
Deep Rahul Shah NMIMS Mukesh Patel School of Technology Management and Engineering, Mumbai, India
Kamal Shah Thakur College of Engineering and Technology, Mumbai, India
Aaron Mathew Shaji CHRIST (Deemed to be University), Bangalore, India
A. S. Mohammed Shariff Galgotias College of Engineering and Technology, Greater Noida, India
Arpit Sharma Department of Computer Science and Engineering, Christ (Deemed to be University), Bangalore, India
Oleksandr Shmatko National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
Samiksha Shukla Christ University, Bangalore, India
Bangari Sindhuja Department of Computer Science and Engineering, New Horizon College of Engineering, Bengaluru, India
Oleksandr Solomentsev National Aviation University, Kyiv, Ukraine
Rashi Anubhi Srivastava Department of Electrical Engineering, Central University of Karnataka, Kalaburagi, India
Olha Sushchenko National Aviation University, Kyiv, Ukraine
Naret Suyaroj International College of Digital Innovation-Chiang Mai University, Chiang Mai, Thailand
Tipajin Thaipisutikul Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, Thailand
Vijayetha Thoday NMIMS Mukesh Patel School of Technology Management and Engineering, Mumbai, India
Anna Thomas CHRIST (Deemed to be University), Bengaluru, India
K. T. Thomas School of Computer Sciences, Mahatma Gandhi University, Kottayam, India; Christ University, Pune, India; Department of Data Science, Christ (Deemed to be University), Bangalore, India
Kapil Tiwari CHRIST University, Bangalore, India


Eduard Tserne National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
D. Ushasree Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Vijayakumar Varadarajan School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia
Ebin Varghese Christ University, Bangalore, India
S. Vijayalakshmi Christ University, Bangalore, India; Department of Data Science, Christ (Deemed to be University), Pune, India
M. R. Vinutha Department of ISE, Malnad College of Engineering, Hassan, India
K. S. Vivekanand CHRIST (Deemed to be University), Bangalore, India
Valerii Volosyuk National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine
Sreelekshmi C. Warrier Department of Mathematics, Sree Ayyappa College, Chengannur, Kerala, India
Maksym Zaliskyi National Aviation University, Kyiv, Ukraine
Simeon Zhyla National Aerospace University “Kharkiv Aviation Institute”, Kharkiv, Ukraine

CoInMPro: Confidential Inference and Model Protection Using Secure Multi-Party Computation

Kapil Tiwari, Kritica Bisht, and Jossy P. George

Abstract In the twenty-first century, machine learning has revolutionized insight generation from historical data across domains like health care, finance, and pharma. The effectiveness of machine learning solutions depends largely on collaboration between data owners, model owners, and ML clients, free of privacy concerns. Existing privacy-preserving solutions lack efficient and confidential ML inference. This paper addresses this inefficiency by presenting Confidential Inference and Model Protection, also known as CoInMPro, to solve the privacy issues faced by model owners and ML clients. The CoInMPro technique aims to boost the privacy of model parameters and client input during ML inference, without affecting accuracy and at only a marginal performance cost. Secure multi-party computation (SMPC) techniques are used to compute inference results confidentially after the client input and the model parameters from different model owners are shared privately. The technique was implemented in Python using the open-source SyMPC library for the SMPC functionality. The Boston Housing dataset was used, and the experiments were run on an Azure data science VM running Ubuntu. The results suggest CoInMPro's effectiveness in addressing the privacy concerns of model owners and inference clients, with no sizable impact on accuracy. A linear impact on performance was noted as the number of secure nodes in the SMPC cluster increased.

Keywords Privacy-preserving machine learning (PPML) · Secure multi-party computation (SMPC) · Privacy · Machine learning (ML) · Confidential inference and model protection

K. Tiwari (B) · K. Bisht · J. P. George
CHRIST University, Bengaluru, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_1


1 Introduction

Machine learning (ML) is transforming the world with innovations in varied fields like health care, finance, and national defense. It is the crux of advanced techniques like recommendation and prediction systems, disease detection mechanisms, and face and speech recognition algorithms. In an era of data explosion, prediction algorithms solve many contemporary and complex problems by extracting value from data. Yet sharing and working on data remain a major challenge in every domain due to security and privacy concerns.

Machine learning involves three parties: data owners, model owners, and clients. Data owners are the individuals and institutions who partake in the process by voluntarily providing the necessary data. Model owners are the engineers behind the ML algorithms, whose primary concern is to secure the parameters that the model generates after learning from the data. Clients are the end consumers who eventually avail themselves of the services provided by the ML model. In machine learning, the privacy of data owners, model owners, and client inputs must all be protected.

The stakes are higher in evolving domains like FinTech, which came into existence with the amalgamation of finance and technology. FinTech currently employs machine learning models in day-to-day activities like stock price prediction, credit scoring, risk assessment, fraud detection, algorithmic trading, and even predicting crowdfunding success. Firms like JPMorgan and Morgan Stanley are developing automated investment advisors, robo-advisors, powered by machine learning technology. Studies have shown that the introduction of machine learning in FinTech can increase customer satisfaction and employee benefit, but the accuracy of such models depends heavily on the diversity of data. The rule of thumb in machine learning is that more data yields better accuracy, but with increased innovation, privacy has become a significant concern. As the industry thrives on consumer satisfaction and most consumers value their privacy, training data must be kept private and secure from both implicit and explicit attacks. Alongside this, the model and its parameters face a privacy threat: if they become public knowledge, the competitive value of the algorithm is lost. Lastly, clients will trust the model only if their input data is kept private and protected from attackers.

In machine learning, privacy needs to be safeguarded at three main stages: data aggregation, model training, and inference. There is a high demand for privacy-preserving mechanisms to tackle these threats, and techniques commonly used in machine learning to safeguard privacy include homomorphic encryption, differential privacy, and secure multi-party computation [1]. Homomorphic encryption encrypts the input data to protect it from privacy breaches; the model trains on encrypted data and produces inferences from it. However, homomorphic encryption is a complex process with a high computational cost and intricate functions. Differential privacy increases privacy by adding noise to the dataset, making systems more secure at the expense of accuracy.
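As an illustration of the kind of encrypted inference homomorphic encryption enables, the following minimal sketch evaluates a toy linear model on Paillier ciphertexts using the open-source phe (python-paillier) package. The library, the weights, and the feature values are assumptions made purely for illustration; this is not part of CoInMPro, which sidesteps HE's computational cost by using SMPC instead.

```python
# pip install phe  -- python-paillier, an additively homomorphic scheme.
# Illustrative only: this library is not used in the paper.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The client encrypts its input features and sends only ciphertexts.
features = [2.5, 1.0, 3.2]
enc_features = [public_key.encrypt(v) for v in features]

# The model owner evaluates a linear model directly on ciphertexts.
# Paillier supports ciphertext + ciphertext and ciphertext * plaintext,
# which is all a linear model needs.
weights, bias = [0.4, -1.1, 0.7], 0.3
enc_score = enc_features[0] * weights[0]
for w, x in zip(weights[1:], enc_features[1:]):
    enc_score = enc_score + x * w
enc_score = enc_score + bias

# Only the client holds the private key and can decrypt the prediction.
print(private_key.decrypt(enc_score))  # 0.4*2.5 - 1.1*1.0 + 0.7*3.2 + 0.3
```

The model owner never sees the raw features and the client never sees the weights applied server-side, but every homomorphic operation is orders of magnitude slower than its plaintext counterpart, which is the cost the paper's SMPC-based approach avoids.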


Secure multi-party computation (SMPC) uses collaborative computing among multiple parties to address privacy concerns. Compared with differential privacy, data is more secure because each party is oblivious to the data, and even the presence, of the other parties; a minimal sketch of the secret-sharing idea underlying SMPC is given at the end of this section. The communication cost of performing SMPC, however, increases with the complexity of the computed function.

Several works in the research community have developed privacy-preserving mechanisms, but they either lack efficiency and accuracy or carry a high computational or communication overhead. In particular, there is a lack of inference-focused privacy-preserving machine learning techniques that support multiple model owners and enable confidential inference without paying an enormous performance or accuracy penalty.

This paper presents CoInMPro, a privacy-preserving machine learning mechanism that applies secure multi-party computation (SMPC) for model owners and inference clients. The proposal enables secure and private inference in an SMPC setting under a semi-honest security model, keeping the model parameters and inference input private. Its implementation showed a negligible impact on the accuracy and performance of private inference compared with plain inference. The study conducted various experiments with different parameter settings, such as the number of model owners and the number of secure nodes, to gain insights on accuracy and performance in this setting.

The rest of this paper is structured as follows. Section 2 presents an overview of the focus area and surveys literature on private inference and model protection. Section 3 proposes the architecture and explains the technique, followed by laboratory setup details, code snippets, and time complexity. Section 4 discusses the experimental findings and insights. Finally, Sect. 5 presents the conclusion and future work.
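To ground the SMPC idea referred to above, below is a minimal, self-contained Python sketch of additive secret sharing with a dealer-generated Beaver triple for multiplication, the kind of primitive that SMPC frameworks such as SyMPC build on. It is illustrative only: the prime modulus, the trusted dealer, and the plain integer values are simplifying assumptions, and the paper's actual implementation uses the SyMPC library rather than this code.

```python
import secrets

Q = 2**61 - 1  # field modulus; all shares are integers mod Q

def share(x, n):
    """Split x into n additive shares that sum to x mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset smaller than n reveals nothing."""
    return sum(shares) % Q

def add(xs, ys):
    """Addition of shared values is local: each node adds its own shares."""
    return [(a + b) % Q for a, b in zip(xs, ys)]

def beaver_triple(n):
    """Dealer-generated triple (a, b, c) with c = a*b, handed out as shares."""
    a, b = secrets.randbelow(Q), secrets.randbelow(Q)
    return share(a, n), share(b, n), share((a * b) % Q, n)

def mul(xs, ys, n):
    """Multiply two shared values using one Beaver triple."""
    a_sh, b_sh, c_sh = beaver_triple(n)
    # The nodes jointly open d = x - a and e = y - b; because a and b are
    # uniformly random, d and e leak nothing about x and y.
    d = reconstruct([(x - a) % Q for x, a in zip(xs, a_sh)])
    e = reconstruct([(y - b) % Q for y, b in zip(ys, b_sh)])
    # Each node computes z_i = c_i + d*b_i + e*a_i; one node adds the
    # public term d*e, so that sum(z_i) = x*y mod Q.
    zs = [(c + d * b + e * a) % Q for a, b, c in zip(a_sh, b_sh, c_sh)]
    zs[0] = (zs[0] + d * e) % Q
    return zs

if __name__ == "__main__":
    n = 3                      # number of secure compute nodes
    w_sh = share(7, n)         # a model parameter, shared by its model owner
    x_sh = share(12, n)        # the client's input feature, shared privately
    y_sh = mul(w_sh, x_sh, n)  # confidential w * x across the cluster
    print(reconstruct(y_sh))   # -> 84; no single node ever saw 7 or 12
```

In a CoInMPro-style flow, each model owner would share its trained parameters and the client its input features in exactly this way, so the secure nodes evaluate the model on shares and only the client learns the reconstructed prediction.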

2 Background Data is fragmented globally, and although technological advancement has made data processes easy, several privacy concerns still exist. Privacy breaches are not unheard of with various multinational corporations facing the same problem. In April 2021, a privacy breach occurred on the Facebook site when a hacker published the personal information of over 533 million Facebook users. Privacy, a threefold concept in machine learning, has a prominent significance in the future of any ML model. For data owners, it addresses the fear of competitive advantages others will have if the shared data is leaked. As a model developer, there is a potential threat of model theft and added responsibility of adhering to privacy regulations. As clients of an ML model, there is a possibility of a branch in their input and data sovereignty. Privacy in machine learning can be attained in any of the three stages a model goes through. The first stage is data aggregation, during which data from various sources is collected and compiled. In this stage, privacy is threatened by data owners, i.e., individuals and organizations that contribute to the innovation by providing data. Data aggregation is followed by model training, during which a data scientist calculates


the parameters and trains the model with a training dataset. This phase aims to make the model accurate. There is a risk of leakage of the model parameters and hyperparameters, alongside the threat of theft of the model itself. The final stage in ML is the inference stage, which involves using the model to arrive at inferences in the form of recommendations (for a recommendation model), recognition (for speech and image recognition models), or predictions (for predictive models). This stage, too, carries the threat of training data and model stealing. Anonymization is a privacy-preserving mechanism through which PII, i.e., personally identifiable information, is removed from the shared dataset [1]. Although the mechanism is relatively simple and cost-effective, it is applicable only during the data aggregation stage. Homomorphic encryption encrypts the data and then forwards it to the model for training. Although theoretically it can be used in all three stages of machine learning, in practice its usage in the model training and inference stages is currently far-fetched because of the extensive number-crunching it requires, which increases the computational cost. Differential privacy is a widely used privacy-preserving model to enhance privacy in the data aggregation and training phases. At the data aggregation stage, one can minimize the odds of identifying records by adding noise through dummy records [2]. The model is trained on the noise-induced dataset; thus, efficiency is traded off for privacy at this stage. In the training phase, noise can be added to the loss function, gradient update, parameters, or labels. Differential privacy can be applied at the local as well as the global level. Local differential privacy is based on the idea that the data contributors are not trusted sources: the contributor adds noise to the data at the outset. In global differential privacy, noise is added by a centralized DP server on the compiled dataset. Secure multi-party computation is the privacy-preserving mechanism that is widely used for the training and inference stages. It works on the idea that nothing should be learned beyond what is necessary. It uses collaborative computing in which connected yet distinct devices carry out the model's training [3]. These additional nodes or parties have access to only one part of the encrypted data; the total data is kept secret from each node, which is responsible only for its assigned share. Privacy can be preserved not only during the three phases, but also through the execution environment. Like secure multi-party computation, federated learning also works on collaborative computing [1], but instead of the client's data going to different nodes, the model reaches the various clients and is trained locally; the model is perfected collaboratively after multiple rounds of such local training. Split learning divides the learning process between clients and servers. It is commonly used during the training of neural networks and involves separating the neural network's layers and giving them to each party, which performs its function oblivious to the others' data. Secure enclaves are cloud-based platforms equipped for the training and inference stages of ML; they provide confidentiality and integrity during execution. Privacy-preserving machine learning is thus a well-researched area with various mechanisms in the market.
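As a concrete illustration of the local versus global differential privacy distinction discussed above, the following minimal sketch applies the Laplace mechanism in both modes. The dataset, the privacy budget epsilon, and the sensitivity bound are illustrative assumptions for the sketch, not values from the paper.

```python
# Toy Laplace-mechanism illustration of local vs. global differential privacy.
# Epsilon, the sensitivity bound, and the records are assumed for the sketch.
import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_release(value, sensitivity, epsilon=0.5):
    # Noise scale b = sensitivity / epsilon trades privacy against accuracy.
    return value + rng.laplace(scale=sensitivity / epsilon)

incomes = np.array([40_000.0, 52_000.0, 61_000.0, 48_000.0])  # assumed records
BOUND = 100_000.0  # assumed public upper bound on any single record

# Local DP: each untrusted contributor perturbs their own record first.
local_view = np.array([laplace_release(x, sensitivity=BOUND) for x in incomes])

# Global DP: a trusted curator aggregates first, then perturbs the statistic.
global_mean = laplace_release(incomes.mean(), sensitivity=BOUND / len(incomes))

print(local_view.mean(), global_mean)  # two noisy estimates of the true mean
```

The local estimate carries far more noise than the global one, which mirrors the accuracy-for-privacy trade-off noted above.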
All these mechanisms can be compared on multiple parameters, including efficiency, computational cost, communication cost, accuracy, and the


number of parties that can partake. Any ML algorithm's usage is based on efficiency, especially in finance, where a delay of a minute can incur huge losses. Computational and communication costs are significant factors that determine the usability of the model and its price during commercialization. Accuracy, a make-or-break factor, can singularly impact the demand for the model. Various privacy-preserving models are present in the market. Distributed DP performs deep collaborative learning with multiple parties but has substantial communication overhead, which results from sharing model parameters between the global parameter server and the clients [4]. SecureML [5] and ABY3 [4] train DNNs yet give lower accuracy than a classic DNN model; both have high computational costs, are low in efficiency, and are less effective for CNN training. MiniONN [6], a high-efficiency and high-accuracy model, lacks scalability, as it supports only two parties, and its security model remains passive; it uses additive homomorphic encryption and two-party SMPC to preserve privacy, and although these choices enhance efficiency, both the computational and the communication costs increase. GAZELLE [7] combined HE with SMPC but used a garbled-circuit approach, which limits the number of secure nodes to two, and it also carries high computational and communication overhead. Due to privacy and performance concerns, differential privacy has been avoided during the inference phase of the model development process [1]. EPIC [8] is one of the latest and well-accepted secure classification algorithms; it uses a support vector machine and SMPC to perform non-NN image classification. Although it has high efficiency and accuracy alongside low cost, the model is restricted to two parties and is complex. The study found a lack of efficient privacy-preserving machine learning mechanisms at the inference level that can work in a secure multi-party setting beyond two secure nodes.

3 Method and Process Privacy-preserving machine learning is a solution for preserving the privacy of participants such as data owners, data scientists, and clients (the inference use case). In this study, m data owners (DO1, DO2, …, DOm) hold their private data. The data owners individually train on their data using a machine learning algorithm, such as linear regression, and arrive at models; the m data owners thus come up with m private models (M1, M2, …, Mm). The data and model are private to the respective data owners. To increase the accuracy and efficiency of model prediction, all the models should be combined into a model ensemble. However, if this is done centrally, the model owners need to share their models, putting their model privacy in jeopardy.


During the inference phase, a client arrives with a new data point and wants a prediction/recommendation/classification result, but maintaining the privacy of the client data is also vital. In a nutshell, privacy needs to be maintained for both the models and the client's data. As shown in Fig. 1, CoInMPro architecture, the study proposes the application of secure multi-party computation to obtain privacy-preserving inference for the models and for the client's inference data. To preserve the privacy of the models, each model (M1, M2, …, Mm) is secret shared to N secure nodes (A, B, C, …, N) in a secure multi-party setting. For instance, M1 is secret shared across the N secure nodes as (M1a, M1b, …, M1n). No secure node knows another's model parameters; the models are now private. Similarly, the inference client's data X is secret shared to the N secure nodes (A, B, C, …, N), such that they hold the shares (Xa, Xb, Xc, …, Xn) in a secure multi-party setting, which ensures that no single secure node knows another node's secret share. None of the secure nodes knows the complete data transmitted by the client; the client data is now private. Each secure node calculates the model result by applying its encrypted client-data share to the encrypted model shares, preparing an encrypted result: Xa is applied to the M1a share on secure node A and produces the result Ya. Similarly, the N secure nodes create n results (ya, yb, yc, …, yn). The encrypted result is then reconstructed in the SMPC setting by aggregating the result shares, ya + yb + yc + … + yn = Y, at the result assembler, a trusted single machine. The result assembler neither knows the participating models nor the client's input, although it computes the final inference output.

Fig. 1 CoInMPro architecture

Table 1 Experiment setup details

Compute environment: Ubuntu machine learning VM from Azure cloud
Configuration: 2 vCPU, 8 GiB RAM
Library dependencies: PyTorch, PySyft (from OpenMined), SyMPC (from OpenMined)
Language: Python 3.9
Development environment: Jupyter Notebook
Model owners: 1–20
Secure nodes: 2–10
Result assembler: 1
SMPC protocol: SPDZ
SMPC security setting: Semi-honest parties
Dataset: Boston housing dataset

The result Y is as good as applying ensemble learning over the m models without preserving privacy. The output result is encrypted with the client's public key to arrive at Yenc, which can be decrypted at the client end using their private key.
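To make the flow above concrete, the following is a minimal, self-contained sketch of additive secret sharing with Beaver-triple multiplication, the style of share arithmetic underlying the SPDZ protocol used later in the paper. It is a toy in a single process, not the authors' implementation: a trusted dealer stands in for the offline phase, the model is an integer-valued linear regressor, and fixed-point encoding, networking, and malicious security are omitted.

```python
# Toy CoInMPro-style flow: model owner and client secret share their values
# across n secure nodes; nodes compute on shares; a result assembler sums.
import random

Q = 2**31 - 1  # all share arithmetic is done modulo a public prime

def share(secret, n):
    """Split an integer into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def beaver_triple(n):
    """Trusted dealer: shares of random a, b and of c = a*b (offline phase)."""
    a, b = random.randrange(Q), random.randrange(Q)
    return share(a, n), share(b, n), share((a * b) % Q, n)

def mul_shared(x_sh, y_sh, n):
    """Multiply two shared values; only the masked x-a and y-b are opened."""
    a_sh, b_sh, c_sh = beaver_triple(n)
    e = reconstruct([(x - a) % Q for x, a in zip(x_sh, a_sh)])
    f = reconstruct([(y - b) % Q for y, b in zip(y_sh, b_sh)])
    z_sh = [(c + e * b + f * a) % Q for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + e * f) % Q  # one node adds the public e*f term
    return z_sh

n = 4                    # number of secure nodes (illustrative choice)
w, bias = [3, 5, 2], 7   # a model owner's toy integer linear model
x = [10, 20, 30]         # the inference client's private input

w_shares = [share(wi, n) for wi in w]  # model owner secret shares weights
x_shares = [share(xi, n) for xi in x]  # client secret shares the input

y_sh = share(bias, n)    # running output shares, one per secure node
for wi_sh, xi_sh in zip(w_shares, x_shares):
    prod_sh = mul_shared(wi_sh, xi_sh, n)
    y_sh = [(yi + pi) % Q for yi, pi in zip(y_sh, prod_sh)]

# The result assembler aggregates the n output shares into the final Y.
print(reconstruct(y_sh))  # 3*10 + 5*20 + 2*30 + 7 = 197
```

No single node ever holds more than one share of any weight or input value, which is the property the architecture above relies on.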

3.1 Experimental Setup An Ubuntu machine learning VM from the Azure cloud with 2 vCPUs and 8 GiB RAM has been used for the experimentation, along with the open-source libraries listed in Table 1, which shows the full experiment setup details.

3.2 Implementation The proposal used the SyMPC library [https://github.com/OpenMined/SyMPC], which builds on PySyft to provide SMPC support functions. The implementation created multiple linear regression models on the open-source Boston Housing Dataset from the UCI machine learning repository and simulated a multiple-model-owner setting. It also used virtual machines derived from the SyMPC library to create the secure multi-party nodes (secure nodes) and shared all the models from the different model owners among these secure nodes. Additionally, the implementation


has used the SPDZ protocol for the SMPC implementation, with semi-honest as the security setting for the SPDZ protocol. The secure inference implementation has O(n) time complexity for a given x input, with 'sn' secure nodes and 'mo' model owners. The implementation runs inference against multiple 'x' (input) values on the different models, and finally a mean of the 'y' (output) values is calculated. The model inference latency increases linearly with the number of secure nodes (sn); hence, if t is the time complexity of the inference function: t ∝ sn and t ∝ mo. Figure 2, CoInMPro inference function, shows the pseudocode for the inference function, where the number of secure nodes, the various models, and the inference data are the inputs, and the encrypted final result is the output. The function first instantiates the SMPC environment by setting up the SMPC protocol and the secure node cluster; later, the model and data are secretly shared among the SMPC secure nodes. Finally, the inference result is calculated with the help of SMPC and encrypted using asymmetric encryption. The study ran private inference by sharing the x data values across the same number of secure nodes and used the SMPC-encrypted model tensors to generate the encrypted inference of the shared data. The proposal used a trusted result aggregator, which aggregates the secure inferences from the different models and produces the final y value, or output. The output is encrypted to enhance data security during transfer before being shared with the inference client, who decrypts it at the end.

Fig. 2 CoInMPro inference function
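As a companion to Fig. 2, the sketch below shows roughly how such an inference function looks on the OpenMined stack named above. It is a hedged rendering, not the authors' code: the calls (VirtualMachine, get_root_client, Session, SessionManager.setup_mpc, MPCTensor) follow SyMPC's published examples of that period and may differ across library versions, and the toy model and input are assumptions.

```python
# Hedged sketch of a CoInMPro-style private inference on SyMPC/PySyft.
# API names follow SyMPC's published examples and may vary between releases.
import torch
import syft as sy
from sympc.session import Session, SessionManager
from sympc.tensor import MPCTensor

def private_inference(weights, bias, x, n_nodes=3):
    # Virtual machines stand in for the secure nodes of the SMPC cluster.
    parties = [sy.VirtualMachine(name=f"node_{i}").get_root_client()
               for i in range(n_nodes)]
    session = Session(parties=parties)
    SessionManager.setup_mpc(session)  # SPDZ-style session setup

    # Secret share the model parameters and the client's input tensor.
    w_mpc = MPCTensor(secret=weights, session=session)
    b_mpc = MPCTensor(secret=bias, session=session)
    x_mpc = MPCTensor(secret=x, session=session)

    # Linear-regression inference computed on shares; no node sees plaintext.
    y_mpc = x_mpc @ w_mpc + b_mpc
    return y_mpc.reconstruct()  # the result-assembler step

# Hypothetical toy weights and input, purely for illustration.
y = private_inference(weights=torch.tensor([[0.5], [1.5]]),
                      bias=torch.tensor([0.1]),
                      x=torch.tensor([[2.0, 4.0]]))
```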


The study mainly collected two success KPIs: MSE loss, to depict the accuracy of private inference, and the time taken to infer, to showcase performance. The implementation was run against different numbers of model owners and secure nodes to arrive at the insights and takeaways of the proposal.

4 Result Analysis Table 2, performance and accuracy results for different secure nodes and model owners, shows the summarized experimentation results, where the accuracy and performance of the technique are evaluated against different combinations of the number of secure nodes and the number of models.

4.1 Insights

4.1.1 Accuracy Versus Privacy

The experiments showed that the non-private inference had an MSE loss in the range 20–21, and that the private inference had a negligible impact on accuracy. The experiment proved that, while the models and the inference x values were kept confidential, none of the secure nodes knew any data beyond its secretly shared value. The experiments also showed that the proposal can enhance the privacy of model owners and clients during inference with a negligible impact on accuracy.

4.1.2 Model Versus Accuracy

The experiment showed that increasing the number of models resulted in an MSE-loss variation of not more than 1 point, staying within the 20–21 range. The MSE loss mainly remained constant, showing that accuracy remains intact when the number of models in the framework is increased. Secure multi-party computation nodes can thus compute on multiple models without affecting accuracy to any great degree.

4.1.3 Secure Nodes Versus Accuracy

As shown in Fig. 3, comparative analysis for different numbers of secure nodes, the study saw a negligible impact on MSE loss when the number of secure nodes increased from 2 to 10. The number of secure nodes impacted the time taken for inference, but the virtual machines computed the inference without affecting accuracy to any great degree.


Table 2 Performance and accuracy results for different secure nodes and model owners

Hardware: 2vCPU, 8 GB RAM; ML Algo: Linear Regression; SMPC protocol: SPDZ; SMPC security: Semi-honest

No. of SN | No. of MO | Performance (sec.) | Accuracy/loss
2 | 5 | 0.206405878 | 20.31747246
2 | 10 | 0.166790009 | 20.18802261
2 | 15 | 0.18239975 | 20.30132675
2 | 20 | 0.180090189 | 20.24121284
3 | 5 | 0.623690128 | 20.1605072
3 | 10 | 0.643671751 | 20.20030594
3 | 15 | 0.633473635 | 20.30055428
3 | 20 | 0.613318205 | 20.23125458
4 | 5 | 1.142184496 | 20.22513199
4 | 10 | 1.15519762 | 20.28224564
4 | 15 | 1.156814575 | 20.25988197
4 | 20 | 1.150216818 | 20.21783066
5 | 5 | 1.788914204 | 20.20358849
5 | 10 | 1.991049051 | 20.2536335
5 | 15 | 1.904865742 | 20.30569077
5 | 20 | – | –
6 | 5 | 2.572024107 | 20.25515747
6 | 10 | 2.58530283 | 20.25584221
6 | 15 | 2.640356779 | 20.24275398
6 | 20 | 2.559947968 | 20.24914742
7 | 5 | 3.520589828 | 20.23002434
7 | 10 | 3.668842077 | 20.29376221
7 | 15 | 3.531092882 | 20.21783066
7 | 20 | – | –
8 | 5 | 3.435461998 | 20.20701218
8 | 10 | 3.57885313 | 20.19074821
8 | 15 | 3.466508627 | 20.26420021
8 | 20 | 3.486571312 | 20.26704407
9 | 5 | 4.246810913 | 20.19329834
9 | 10 | 4.306902647 | 20.25208664
9 | 15 | 4.299203157 | 20.23276711
9 | 20 | 4.311759949 | 20.31554985
10 | 5 | 5.377040625 | 20.26992607
10 | 10 | 5.529221296 | 20.27954102
10 | 15 | 5.333623171 | 20.22979164
10 | 20 | 5.429213047 | 20.23600006

(SN = secure nodes, MO = model owners)

4.1.4 Privacy Versus Performance

The experiment revealed that the time taken to compute inference in the SMPC setting increases linearly with the number of secure nodes. The inference is calculated separately on the different virtual machines and communicated to the result aggregator, where it is consolidated after reconstruction of the intermediate output. This whole sequence of operations takes more time as the number of secure nodes increases, which explains the rise in time to inference.

4.1.5 Model Versus Performance

Unlike secure nodes, the number of models does not add complexity to the calculation. Hence, the time to inference remains unchanged when the number of models is increased for a fixed secure-node setting. This is a valuable property of the proposal: more models can be added to boost accuracy while the models remain private, and the framework does not impact the accuracy or performance of private inference.

4.1.6 Secure Nodes Versus Performance

The experimentation found that private inference time increases with the number of secure nodes, but the growth is linear. Moreover, as shown in Fig. 4, secure node versus performance versus MSE loss, the increase in time is in the range of only a few seconds; e.g., for 17 secure nodes, the inference time was around 16 s, i.e., each additional secure node added roughly one second to the inference time. This performance decrease can be compensated for by using specialized hardware with more CPU, GPUs, and RAM. At the same time, increasing the number of secure nodes also increases the privacy of the models and the data, as the privacy probability rises accordingly.

4.1.7 Accuracy Versus Performance

The test results revealed that accuracy remains intact even though the time taken for inference changes with the hardware performance. When tested with a higher hardware configuration, the accuracy numbers stayed between 20 and 21 MSE loss.


Fig. 3 Comparative analysis for different numbers of secure nodes



Fig. 4 Secure node versus performance versus MSE loss

Meanwhile, the time consumption came down with higher CPU and RAM. Hence, it is concluded that, within CoInMPro, accuracy and performance are not correlated. Instead, accuracy depends more on the presence of diverse models, while performance depends on the number of secure nodes in the SMPC cluster.

5 Conclusion and Future Work The performance and accuracy of machine learning solutions depend largely on the availability of comprehensive datasets. In the recent past, collaborative machine learning has improved performance and accuracy by taking advantage of the massive data owned by multiple parties. While the upside of collaborative learning is evident, there is also a risk of information leakage and a potential compromise of the privacy and security of training data and model parameters. Addressing such concerns is essential because the clients of a machine learning model also need confidential inference. This study presented CoInMPro, a practical and confidential inference and model protection machine learning technique. The technique addresses the privacy concerns of the contributing model owners by safeguarding model parameters against an honest-but-curious adversary using secure multi-party computation. The proposed technique was implemented in Python using the SyMPC framework, which provides SMPC capabilities built on top of the open-source PPML framework PySyft from OpenMined. The Boston Housing Dataset was used, and the experiments ran on an Azure machine learning VM with Ubuntu OS. The implementation proved CoInMPro's effectiveness in addressing the privacy concerns of model owners and inference clients, showing no sizable impact on accuracy and a linear effect on performance when more secure nodes were added to the SMPC cluster. The implementation also demonstrated that the accuracy of ML inference remains intact irrespective of the number of participating model owners. The implementation focused on the linear regression ML algorithm for model training; other ML algorithms, such as logistic regression and several types of neural networks, were not considered for CoInMPro. Future research should minimize the computation


and communication overhead that CoInMPro produces and expand its applicability to other machine learning algorithms and larger datasets.

References
1. Tiwari K, Shukla S, George JP (2021) A systematic review of challenges and techniques of privacy-preserving machine learning. Lecture Notes Netw Syst 290:19–41. https://doi.org/10.1007/978-981-16-4486-3_3
2. Abadi M et al (2016) Deep learning with differential privacy. https://doi.org/10.1145/2976749.2978318
3. Shukla S, Sadashivappa (2014) Secure multi-party computation protocol using asymmetric encryption. https://doi.org/10.1109/IndiaCom.2014.6828069
4. Mohassel P, Rindal P (2018) ABY3: a mixed protocol framework for machine learning. https://doi.org/10.1145/3243734.3243760
5. Mohassel P, Zhang Y (2017) SecureML: a system for scalable privacy-preserving machine learning. https://doi.org/10.1109/SP.2017.12
6. Liu J, Juuti M, Lu Y, Asokan N (2017) Oblivious neural network predictions via MiniONN transformations. https://doi.org/10.1145/3133956.3134056
7. Juvekar C, Vaikuntanathan V, Chandrakasan A (2018) GAZELLE: a low latency framework for secure neural network inference
8. Makri E, Rotaru D, Smart NP, Vercauteren F (2019) EPIC: efficient private image classification (or: learning from the masters). https://doi.org/10.1007/978-3-030-12612-4_24

An Improved Face Mask-aware Recognition System Based on Deep Learning Chih-Yang Lin, Amornthep Rojanasarit, Tipajin Thaipisutikul, Chi-Wen Lung, and Fityanul Akhyar

Abstract Face mask detection and recognition have been incorporated into many applications in daily life, especially during the current COVID-19 pandemic. To mitigate the spread of coronavirus, wearing face masks has become commonplace. However, traditional face detection and recognition systems utilize main facial features such as the mouth, nose, and eyes to determine a person’s identity. Masks make facial detection and recognition tasks more challenging since certain parts of the face are concealed. Yet, how to improve the performance of existing systems with a face mask overlaid on the original face input images remains an open area of inquiry. In this study, we propose an improved face mask-aware recognition system named ‘MAR’ based on deep learning, which can tackle challenges in face mask detection and recognition. MAR consists of five main modules to handle various kinds of input images. We re-train the CenterNet model with our augmented face mask inputs to perform face mask detection and propose four variations on face mask recognition models based on the pre-trained ArcFace to handle facial recognition.

C.-Y. Lin (B) · A. Rojanasarit · F. Akhyar Department of Electrical Engineering College of Electrical and Communication Engineering, Yuan Ze University, Taoyuan, Taiwan e-mail: [email protected] A. Rojanasarit e-mail: [email protected] F. Akhyar e-mail: [email protected] T. Thaipisutikul Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, Thailand e-mail: [email protected] C.-W. Lung Department of Creative Product Design, Asia University, Taichung City, Taiwan e-mail: [email protected] F. Akhyar School of Electrical Engineering, Telkom University, Jawa Barat, Indonesia © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_2


Finally, we demonstrate the effectiveness of our proposed models on the VGGFACE2 dataset and achieve a high accuracy score on both detection and recognition tasks. Keywords Face mask detection · Face mask recognition · Deep learning

1 Introduction The coronavirus (COVID-19) and its variants have spread globally since the end of 2019, impacting the safety and behaviors of people worldwide in countless ways. Wearing masks has become a common way for people to protect themselves from the virus and reduce unnecessary deaths. As masks obstruct a significant portion of facial features, traditional facial detection and recognition algorithms such as FaceNet [1] and ArcFace [2], which rely on features from the full face for face recognition, become unreliable. Consequently, improving face mask-aware detection and recognition algorithms carries substantial practical significance. To address the aforementioned limitations, it is crucial to enhance existing face recognition methods by enabling identity verification to work accurately using partially exposed faces rather than having to rely on all facial feature points. In this work, we design an improved face mask-aware recognition based on deep learning (MAR) framework for face recognition on face mask images. Specifically, the proposed framework includes five components to achieve improved performance on detection and recognition tasks. We first perform image augmentation on full frontal face images to obtain face mask images. Then, we utilize the re-trained CenterNet model to perform face mask detection. In addition, we re-train the ArcFace model on faces without a mask, faces with a mask, the upper part of the face, and face mask with ear feature images. The experimental results on real-world datasets show the superior effectiveness of our proposed framework in terms of accuracy by a vast margin. We summarize the contributions of our study as follows:

(i) We propose an improved face mask detection model based on the CenterNet algorithm by re-training the model with our augmented face mask images.
(ii) We propose four model variations for face mask recognition by re-training the ArcFace model with four types of augmented face mask images.
(iii) We perform experiments on a real-world dataset and show the efficiency of our proposed framework under different settings.

This paper is organized as follows. Sections 2 and 3 present related works and the details of our proposed framework. Sections 4 and 5 describe the experimental setup and results. We then offer conclusions in Sect. 6.


2 Related Works Object detection algorithms such as Faster R-CNN [3], YOLO [4], RetinaNet [5], and CenterNet [6] are deep learning models that detect an object of interest or find objects of a given class within an image. In this work, we use the CenterNet model to detect a face and classify that face as with/without a mask. CenterNet provides a fast, simple, and accurate method for predicting the bounding boxes and poses of objects and persons, respectively, and represents objects by a single point at the center of their bounding box. The bounding box size and other object properties are inferred from the key-point feature at the center. Face recognition is the process of matching a human face from the input image with other faces in the corpus to find the most similar identity match. One promising face recognition model is DeepFace [7], which uses a deep convolutional neural network to extract features from 3 million facial images based on 3000 different identities. Deep hidden IDentity features [8] is another popular deep learning technique for face recognition and verification; this method uses the two eyes and the corners of the mouth as the main features to perform facial identification. Currently, ArcFace [2] has achieved state-of-the-art performance on several face recognition benchmarks by introducing an Additive Angular Margin Loss (ArcFace) to enhance the discriminative power of the face recognition model. Therefore, we employed ArcFace as the backbone model in our face mask recognition module.
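Because the additive angular margin is what the recognition modules below build on, a compact sketch of the margin-adjusted logits may help. This is a simplified PyTorch rendering of the published ArcFace idea with its common defaults (scale s = 64, margin m = 0.5), not the authors' training code.

```python
# Simplified additive angular margin (ArcFace-style) logits in PyTorch.
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
    # Cosine similarity between L2-normalized embeddings and class centers.
    cosine = F.linear(F.normalize(embeddings), F.normalize(weight))
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the angular margin m only to each sample's ground-truth class.
    one_hot = F.one_hot(labels, num_classes=weight.size(0)).bool()
    logits = torch.where(one_hot, torch.cos(theta + m), cosine)
    return s * logits  # scaled logits, fed into standard cross-entropy

# Usage: loss = F.cross_entropy(arcface_logits(emb, class_centers, y), y)
```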

3 Methodology In this section, we first present an overview of our proposed framework, improved mask-aware face recognition based on deep learning (MAR), as shown in Fig. 1. We also provide more details of our framework with pseudocode, as presented in Algorithm 1. The MAR model consists of five core modules, as follows: (1) the data management module preprocesses and augments the original images into the proper format for further processing. The input to this module is the set of facial images from accessible online datasets, while the outputs are two sets of training and testing datasets. More specifically, we perform the augmentation procedure on the original full face without mask images to obtain the entire face with mask images and top-half face images for both training and testing datasets; (2) the face mask-aware detection model learning module detects whether the given input face images have a mask or not; (3) the face unmask-aware recognition model learning module learns the identity of each person by extracting the features from the full face without mask images. This module is used only if the face mask-aware detection model learning module returns a detection prediction of 'Face without Mask Class'; (4) the face mask-aware recognition model learning module learns the identity of each person from the full face with mask images and top-half facial images. Similarly, this module is used only if


Fig. 1 Overview of our proposed MAR framework, which consists of five core modules: (1) data management module; (2) face mask-aware detection model learning module; (3) face unmask-aware recognition model learning module; (4) face mask-aware recognition model learning module; and (5) model prediction module

the face mask-aware detection model learning module returns a detection prediction of 'Face with Mask Class.' After we finish the training process, we utilize the fine-tuned models in the (5) model prediction module to predict the identity of the given input images and return the mask-aware detection and bounding box as the final output. We provide the details of each module below.


Algorithm 1: Pseudo-code of the MAR framework
Input_Image = read(input_images)
bbox, face_class = Model_Detection_Module(Input_Image)
if (bbox != null and face_class != null):
    if (face_class == 'face_w_mask'):
        IdentityPred = Face_Mask_Aware_Recognition_Module(
            Input_Image, options = [full face with mask, half-top face])
    else if (face_class == 'face_w/o_mask'):
        IdentityPred = Face_Unmask_Aware_Recognition_Module(Input_Image)
output = [bbox, face_class, IdentityPred]

3.1 Data Management Module This section explains how we obtain the 'full face with mask' images and 'top-half' facial images from the original full face images. We achieve this task by utilizing the data preprocessing and data train/test split sub-modules, as shown in Fig. 2. Data Preprocessing sub-module: First, we perform face mask augmentation on the original full face without mask images (D_f_w/o_m) to obtain the full face mask images (D_f_w_m). To complete this task, we utilize MaskTheFace [9], an open-source Python script that generates a mask on the target images. Representative images before (D_f_w/o_m) and after (D_f_w_m) applying the MaskTheFace script are shown in Fig. 3a. Secondly, we perform half-top augmentation to obtain images with only the upper part of the face above the mask. We vary the cut ratio between 50 and 70% in this experiment; after performing a grid-search operation to find the best cut ratio, we set 53% as the optimal cut ratio to get the proper top-half face images (D_f_up). The output images of the half-top augmentation are shown in Fig. 3a. Data Train/Test Split sub-module: After obtaining the three datasets (D_f_w/o_m, D_f_w_m, D_f_up) from the data preprocessing sub-module, we further divide them into training and testing datasets. For all images belonging to each person, we select 80% and 20% of the images as training and testing datasets,


Fig. 2 Details of the data management module, which consists of two sub-modules: data preprocessing and data train/test split

respectively. The output of this sub-module is, therefore, D^train_f_w/o_m, D^train_f_w_m, D^train_f_up, D^test_f_w/o_m, D^test_f_w_m, and D^test_f_up.
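An illustrative version of the half-top augmentation is sketched below: it keeps the upper 53% of each face image, the cut ratio the grid search above selected. The Pillow helper and the file names are assumptions for the sketch, not the authors' script.

```python
# Half-top augmentation sketch: keep the upper part of a detected face.
from PIL import Image

def half_top_crop(face_img: Image.Image, cut_ratio: float = 0.53) -> Image.Image:
    """Return only the top `cut_ratio` fraction of the face image."""
    w, h = face_img.size
    return face_img.crop((0, 0, w, int(h * cut_ratio)))

img = Image.open("face_with_mask.jpg")        # hypothetical input file
half_top_crop(img).save("face_top_half.jpg")  # a D_f_up-style output
```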

3.2 Face Mask-Aware Detection Model Learning Module In this module, we utilize the well-known pre-trained CenterNet with a ResNet50 backbone as the building block for face mask detection. We follow the original convolutional neural network layers and all parameter settings stated in [6]. We then introduce fully connected layers to this pre-trained model in order to let the model learn the best


Fig. 3 a Examples from input and output datasets in the data preprocessing sub-module. Row A represents the input images, denoted as D_f_w/o_m. Row B represents the output images after performing face mask augmentation, denoted as D_f_w_m. Row C represents the output images after performing half-top augmentation, denoted as D_f_up. b Example of an additional ear feature extracted from the original dataset

weights for the face mask detection task. Given the training datasets created in the earlier section, our face mask-aware detection model learning module returns two outputs: the face bounding box and the face mask detection class, as shown in Fig. 4a. In this study, the face mask detection class can be either 'Face without Mask Class' or 'Face with Mask Class.' The face mask detection class is then used as a guideline for selecting the proper face recognition model in the next step.

Fig. 4 a Details of the face mask-aware detection model learning module. b Details of the face unmask-aware recognition model learning module


3.3 Face Unmask-aware Recognition Model Learning Module As shown in Fig. 4b, for input images classified as 'Face without Mask Class,' we use this module in the next step to identify the person. We utilize the state-of-the-art pre-trained ArcFace model [2] as the building block for face unmask-aware recognition. On top of the original structure of ArcFace, we add final fully connected layers to enable the model to learn the nonlinear interactions between the input features and the expected output from our dataset.

3.4 Face Mask-aware Recognition Model Learning Module This module utilizes the same deep learning network architecture described in the face unmask-aware recognition model learning module, consisting of the pre-trained ArcFace layers and additional fully connected layers to perform the person identification task. However, we train the model with the two face mask datasets, as presented in Fig. 5; as a result, we get two face mask-aware recognition models, one for each dataset. Besides the standard facial features, we also extract ear features as an additional input for our proposed models; we call this proposed model the 'ear model' for simplicity. Our rationale for using the ear as an additional feature is twofold. First, each human ear has unique characteristics. Second, to the best of our knowledge, ours is the first study to investigate the impact of ear feature integration on the performance gain in face recognition. We manually crop the areas with the left and right ears from our datasets; examples of the ear features we could extract are shown in Fig. 3b. The structure of the ear model is presented in Fig. 6. We use the pre-trained VGG16 model

Fig. 5 Details of the face mask-aware recognition model learning module


Fig. 6 Structure of ear model

to extract ear features from the ear images. Then, we concatenate the ear and facial features before feeding them into a set of fully connected layers to perform facial identification.
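A rough sketch of the ear model in Fig. 6 follows: VGG16 features from the ear crop are concatenated with the ArcFace face embedding before a fully connected head. The embedding size, layer widths, and torchvision weight tag are illustrative assumptions, not the trained configuration.

```python
# Sketch of the ear model: VGG16 ear features fused with a face embedding.
import torch
import torch.nn as nn
from torchvision import models

class EarModel(nn.Module):
    def __init__(self, num_identities, face_dim=512):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")  # pre-trained extractor
        self.ear_backbone = vgg.features
        self.pool = nn.AdaptiveAvgPool2d(1)          # 512-d ear descriptor
        self.head = nn.Sequential(                   # fused FC layers
            nn.Linear(face_dim + 512, 512), nn.ReLU(),
            nn.Linear(512, num_identities),
        )

    def forward(self, face_embedding, ear_image):
        # face_embedding: (B, face_dim) from the pre-trained ArcFace backbone.
        ear_feat = self.pool(self.ear_backbone(ear_image)).flatten(1)
        fused = torch.cat([face_embedding, ear_feat], dim=1)
        return self.head(fused)
```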

3.5 Model Prediction Module As shown in Fig. 7, after we train our proposed models on the particular dataset, we finally acquire four fine-tuned models ready to use in the testing phase. The output


Fig. 7 Details of the model prediction module

from this prediction module consists of the bounding box of the face on the input image, the face mask detection class, and the identity prediction for that person.

4 Experimental Setting In contrast to existing state-of-the-art face recognition models, our proposed MAR framework aims to improve the performance of the face mask recognition task. In particular, we re-trained CenterNet and ArcFace with our augmented datasets for face mask detection and facial identification tasks, respectively. We designed the experiments to answer four research questions as follows. For the face mask detection task: (RQ1) Can we improve the performance of CenterNet by re-training the model with our augmented face mask images? For the face mask recognition task: (RQ2) Can we improve the performance of ArcFace by re-training the model with our augmented face mask images? (RQ3) Are there any differences between using ‘full face with mask’ images and top-half facial images with the proposed face mask-aware recognition model? (RQ4) Can we improve the performance of our re-trained ArcFace model by integrating the ear feature as an additional input?

4.1 Dataset We use the publicly available VGGFACE2 [9] dataset in our experiments. VGGFACE2 contains 3.31 million images of 9131 subjects (identities), with an average of 362.6 images for each subject. The images in the VGGFACE2 dataset have an average resolution of 137 × 180 pixels with fewer than 1% at a resolution below 32 pixels. For each subject, we use 80% of the images for training, 10% for validation, and another 10% for testing. In order to perform the scalability test on our proposed models, we create smaller sub-datasets that contain only 20 and 50

Table 1 Dataset statistics used in our experiments

Dataset | Train | Validation | Test
VGGFACE2 | 2,592,000 | 333,000 | 333,000
VGGFACE2-20 | 5760 | 740 | 740
VGGFACE2-50 | 14,400 | 1850 | 1850

identities from the original VGGFACE2 dataset. We name these two sub-datasets VGGFACE2-20 and VGGFACE2-50, respectively. The summary of statistics from the dataset used is presented in Table 1.

4.2 Implementation In the experiments, we employed the following environment to implement our framework: GPU NVIDIA GeForce GTX 1080 Ti 11G; RAM 32 GB; CPU Intel® Core™ i7-8700 processor; operating system Microsoft Windows 10; CUDA 9. We perform a grid-search operation through a list of parameters to get the best weights for each proposed model. We train the face mask-aware detection model (CenterNet) with a batch size of 20 for 200 epochs. For the recognition task, we train the face unmask-aware recognition model and the face mask-aware recognition model (ArcFace) on VGGFACE2 with a batch size of 32, 100 epochs, and a learning rate of 0.01, while for the smaller sub-datasets we use a batch size of 64, 30 epochs, and a learning rate of 0.01. We use the accuracy rate as the main evaluation metric in the experiments.

5 Experimental Results In this section, we provide the evaluation results from our facial detection and recognition models to answer the research questions mentioned above.

5.1 Face Mask Detection Performance (RQ1) In this detection evaluation, besides using the VGGFACE2 dataset, we also include the public Real-World Masked Face Dataset (RMFD)1 [10] to evaluate our proposed face mask-aware detection model. This second dataset contains 90,000 facial images without masks and 5000 face images with face masks. Table 2 shows that we achieved high accuracy of almost 98% and 89% on the VGGFACE2 and RMFD datasets,

1 https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset.

Table 2 Accuracy of face mask detection model (an improved version of the pre-trained CenterNet model)

Dataset | With all classes (%) | Face with Mask Class (%) | Face without Mask Class (%)
VGGFACE2 | 98.23 | 98.59 | 97.87
RMFD | 89.25 | 85.87 | 92.64

respectively. Hence, we can conclude that the performance of CenterNet can be improved by re-training the model with our augmented face mask images.

5.2 Face Mask Recognition Performance (RQ2, RQ3) In Table 3, we can observe the following interesting points. First, the performance of our proposed model is stable from small to large datasets, indicating that the proposed model is robust to scalability concerns. Second, our proposed model outperforms the traditional ArcFace model on all datasets by a large margin, especially when performing the recognition task on images with face masks. Therefore, we can conclude that we are able to improve the performance of ArcFace by re-training the model with our augmented face mask images. We also do not see much difference between using full face with mask images and top-half facial images with the proposed face mask-aware recognition model.

5.3 The Ear Model Performance Comparison (RQ4) As shown in Fig. 9, we do not see a benefit from integrating ear features when we perform face recognition on images of faces without masks. However, when we perform face recognition on images of faces with masks, we observe an improvement in performance by including the ear feature. This suggests that the unique ear feature is an essential additional input. Hence, we can conclude that we can improve the performance of our re-trained ArcFace model by integrating the ear feature as an additional input primarily when face recognition is performed on images of faces with masks. Last but not least, Fig. 8 shows the final output consisting of the bounding box, the face mask detection class, and the face identification ID from our framework.

6 Conclusion This study proposes an improved face mask-aware recognition based on deep learning (MAR) for face mask-aware detection and recognition tasks. Our framework contains


Table 3 Accuracy of face mask recognition model (an improved version of the pre-trained ArcFace model)

Model | Face with Mask Class (%) | Face without Mask Class (%)
VGGFACE2:
Traditional ArcFace | 68.75 | 96.63
Proposed face mask-aware recognition model (trained with full face mask images) | 86.76 | 96.38
Proposed face mask-aware recognition model (trained with half-top face images) | 82.53 | 96.38
VGGFACE2-20:
Traditional ArcFace | 68.75 | 98.07
Proposed face mask-aware recognition model (trained with full face mask images) | 77.30 | 97.83
Proposed face mask-aware recognition model (trained with half-top face images) | 82.33 | 97.83
VGGFACE2-50:
Traditional ArcFace | 68.95 | 97.12
Proposed face mask-aware recognition model (trained with full face mask images) | 74.41 | 97.12
Proposed face mask-aware recognition model (trained with half-top face images) | 76.22 | 97.12

Fig. 8 Examples of the final output returned by our proposed model


Fig. 9 Performance comparison of our proposed ear model on the VGGFACE-20 dataset

five core modules to handle various input images and select the most appropriate proposed model under different settings. In particular, we improve upon the state-of-the-art performance by re-training the models with our augmented input images. As a result, we propose one variation of the CenterNet model to perform the face mask detection task. We additionally propose four variations of the ArcFace model: the face unmask-aware recognition model, the face mask-aware recognition model (with full face mask images), the face mask-aware recognition model (with top-half face images), and the ear model, to perform face unmask/mask-aware recognition. Through the experiments, we observe the superiority of our proposed models over the baselines, which indicates the effectiveness of our new approaches. Acknowledgements This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grants MOST 109-2634-F-008-007, MOST 107-2221-E-155-048-MY3, and MOST 110-2221-E-155-039-MY3.

References 1. Schroff F, Kalenichenko D, Philbin J (2015) Facenet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 815–823 2. Deng J, Guo J, Xue N, Zafeiriou S (2019) Arcface: additive angular margin loss for deep face recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4690–4699


3. Ren S, He K, Girshick R, Sun J (2016) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39:1137–1149 4. Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 779–788 5. Lin T-Y, Goyal P, Girshick R, He K, Dollár P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision, pp 2980–2988 6. Zhou X, Wang D, Krähenbühl P (2019) Objects as points. arXiv:1904.07850 7. Parkhi OM, Vedaldi A, Zisserman A (2015) Deep face recognition 8. Sun Y, Wang X, Tang X (2014) Deep learning face representation from predicting 10,000 classes. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1891–1898 9. Cao Q, Shen L, Xie W, Parkhi OM, Zisserman A (2018) Vggface2: a dataset for recognising faces across pose and age. In: 2018 13th IEEE international conference on automatic face and gesture recognition (FG 2018), pp 67–74. IEEE 10. Cabani A, Hammoudi K, Benhabiles H, Melkemi M (2021) MaskedFace-Net—a dataset of correctly/incorrectly masked face images in the context of COVID-19. Smart Health 19:100144

Semantically Driven Machine Learning-Infused Approach for Tracing Evolution on Software Requirements Rashi Anubhi Srivastava and Gerard Deepak

Abstract Software engineering is a well-structured application of engineering methodologies to software development. It is a branch of the engineering discipline associated with all facets of software production. The entire information regarding a particular piece of software is documented and stored in the configuration management repositories of an organization in the form of software requirement specifications (SRSs). Information about software requirements, software paradigms, etc., is then tweeted on developers' Twitter handles, especially during the release of particular software. Through this paper, a novel approach is put forth for recommending tweets adhering to the developer community, along with SRS documents, to users in correspondence with their queries. Various machine learning and Web mining techniques have been explored and implemented for tweet classification, grouping, ranking, and prioritization. The results obtained advocate for the proposed methodology to be classified as a highly effective approach. Keywords Ontology · Recommender system · Semantic similarity · Software evolution · Software requirement · Text mining

1 Introduction Users employ Twitter to convey their opinions about a particular product, which further acts as a dialog between the system stakeholders, such as the system's users and its developers. The success of any software product depends on the user's satisfaction and contribution. Keeping in mind the success of the software, an entire description of the software requirements for a system is meticulously delineated in the SRS. These requirements are utilized to explain the features of the software. Nowadays, a spike in


the demand for software requirements and functionalities has been witnessed. Given that software engineering calls for an enormous amount of requirement engineering, it is quite challenging to deliver the end product to the stakeholders in one go; hence, the product is delivered in stages. Information about each new release is tweeted in the developer community, which can later be utilized by users for their software requirement affairs. Henceforth, a novel framework, SMLTR, is proposed to trace the relevance of tweets from the developer community and make suitable recommendations to users. In this work, Twitter streams pertinent to the developer community have been investigated in order to make tweet recommendations based on user queries for a particular software requirement. Tweets that contain developer information from developer handles are taken into account. Details about software requirements, requirement engineering, software paradigms, etc., are tweeted via these developer communities before or during the release of any software. Such tweets, which comprise links to blogs, requirement specifications, software versions, etc., are crawled directly from Twitter and form the dataset used to carry out the experimentation of the proposed system. Multiple semantic similarity concepts and SRS documents are taken advantage of to build ontologies. These ontologies are then used for semantic matchmaking and entropy calculation to further classify the tweets, rank them based on relevance, perform requirement prioritization, and eventually make recommendations to the users. The presented paper is organized as follows: Sect. 2 outlines the related work; Sects. 3 and 4 depict the proposed methodology and the implementation and performance evaluation in detail; Sect. 5 concludes the paper.

2 Related Works There has been a significant amount of research in the field of software requirements and evolution, and various techniques and strategies have been experimented with to carry out this research. Guzman et al. [1] proposed a framework for tweet classification, grouping, and ranking based on software evolution, content, and relevance, respectively. A study on the application of systematically mapped ML techniques in software engineering is put forth by Gramajo et al. [2], wherein varied approaches for the identification and analysis of ML techniques are discussed. Another such comparative study has been done by Pratvina et al. [3]; this study aimed at finding the best-in-class ML model for software requirement classification and prioritization. A thesis presented by Vivian et al. [4] discussed the classification of software requirements using word embeddings and CNNs. The novel idea of the TSS semantic similarity measure was proposed in the work of Carrillo et al. [5]; TSS is used to measure concept evolution and its variations around the semantic locality. Pagano and Maalej [6] conducted experiments to analyze the feedback of users from various application windows. By taking reference from each of these previous works, we have tried to refine the relevance of the recommendations made.


Various techniques have been included in order to achieve the expected outcome. Several further works [7–14] support the proposed methodology.

3 Proposed Methodology A novel strategy for the recommendation of tweets premised on the software developer community is presented through this work. The experimentation has been taken up on the Twitter dataset as an input feature for text classification, grouping, ranking, and prioritization. Multifold relevance computation and semantic matchmaking have been implemented to make the final recommendations. SRS documents have been scratched out from multiple requirement configuration libraries of various organizations to construct ontologies and, thereafter, perform semantic matchmaking and tweet ranking. Classification of tweets is accomplished through logistic regression. Entropy-based matching and ranking of the tweets are done; the tweets are then given priority tags based on semantic similarity, and recommendations are made in coherence with client requirements. On encountering the Twitter dataset, it is first made compatible with the proposed system architecture by encompassing various natural language processing (NLP) techniques. The acquired data consist of tweets comprising informal language, unstructured sentences, etc. Hence, this raw textual data first undergo tokenization for preliminary-phase preprocessing, wherein each sentence is separated into smaller units called tokens, which assist in context understanding and interpretation by inspecting the word sequences. A customized tokenizer has been modeled, keeping in mind white spaces, special characters, punctuation, etc. This preprocessing phase is followed by lemmatization, where each of the generated tokens is chopped down to its root word, known as a lemma. This is predominantly done to group together the non-identical inflections of a word in order to analyze them as a single entity; the WordNet Lemmatizer has been operational for this work. After the data have been lemmatized, they are given in for stop word removal. To reduce the computation cost of the model, stop words are filtered out from the text corpus using a self-designed regular-expression-based stop word removal algorithm. In the second phase of data preprocessing, word sense disambiguation (WSD) is employed to determine the sense that is activated by the usage of a word in a particular context. The occurrence of a word in context is classified into its multiple sense classes. A clustering-based approach to unsupervised WSD has been integrated into the proposed model, wherein a similarity measure of the context is used. These four prime phases of preprocessing make the textual data qualified for model implementation and further instrumentation. The final phase of data preprocessing is realized by incorporating named entity recognition (NER), in which named entities are located and classified under pre-defined attributes; TwitterNER has been exploited for data training and entity labeling in the system put forth. A minimal sketch of this preprocessing pipeline is given below.
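The sketch renders the pipeline with NLTK, whose WordNet Lemmatizer the text names; the regex tokenizer and stop-word handling are illustrative stand-ins for the customized components described above, and the WSD and TwitterNER stages are omitted.

```python
# Minimal sketch of the tweet preprocessing pipeline (tokenize, lemmatize,
# remove stop words). The regex patterns are assumptions, not the authors'.
import re
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

nltk.download("wordnet", quiet=True)
nltk.download("stopwords", quiet=True)

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(tweet):
    # Customized tokenization: handle whitespace, specials, and punctuation.
    tokens = re.findall(r"[a-zA-Z][a-zA-Z0-9'-]*", tweet.lower())
    # Chop each token down to its root word (lemma).
    lemmas = [lemmatizer.lemmatize(t) for t in tokens]
    # Stop-word filtering over the lemmatized tokens.
    return [t for t in lemmas if t not in stop_words and len(t) > 2]

print(preprocess("Releasing v2.1 of the SDK - check the updated requirement specs!"))
```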


To classify the tweets and to rank and prioritize them, ontology construction is first taken into consideration. These ontologies are generated from the developer documents and SRS documents available online. Details about each software version, intermediate code, software analysis results, version updates, version control, changes made to the software packages, etc., are systematically maintained in these repositories. Based on the dataset domain, documents are crawled from these configuration management libraries, and a befitting ontology is generated using OntoCollab. Hereupon, the actual task is initialized, where multiple-phase processing is exercised on the input data for appropriate classification and feature extraction from the Twitter dataset. Each of these phases is described in an orderly manner in the following section. Semantic networks are schemas for knowledge representation that involve nodes and links between these nodes. This semantic network formulation is achieved by integrating the ontologies contrived using the various SRSs. To facilitate the procedure of ontology integration, the SemantoSim measure has been used on the preprocessed terms with a threshold of 0.75. It is a measure that delineates semantic similarity along the lines of the PMI score. SemantoSim evaluates the relatedness of a term 'a' with regard to the term 'b' in terms of their probability scores, as depicted in Eq. 1, where p(a, b) represents the probability of 'a' occurring with 'b', while p(a) and p(b) are their individual probabilities.

SemantoSim(a, b) = [pmi(a, b) + p(a, b) log p(a, b)] / [p(a) · p(b) + log p(b, a)]   (1)

To make this process of network formulation more robust, Twitter semantic similarity (TSS) has also been incorporated alongside SemantoSim. The frequency of each word in the given tweets is estimated from its occurrence velocity. Following this, the frequency of co-occurrence of two words η1 and η2 is estimated from the Twitter streams containing both of them. The TSS between these two terms is given by Eq. 2, where α is the scaling factor and φ represents the frequency of each term.

TSS(η1, η2) = α · φ(η1 ∧ η2) / max(φ(η1), φ(η2))   (2)
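A direct transcription of Eqs. (1) and (2) is sketched below. The probability and frequency inputs are assumed to come from co-occurrence statistics over the preprocessed terms; the example values are purely illustrative, and the 0.75 and 0.5 thresholds from the text would be applied to scores like these.

```python
# Sketch of Eqs. (1) and (2); inputs come from co-occurrence counts over
# preprocessed tweet and ontology terms (values below are assumed).
import math

def semantosim(p_ab, p_ba, p_a, p_b):
    """Eq. (1): SemantoSim relatedness of term a with regard to term b."""
    pmi = math.log(p_ab / (p_a * p_b))  # pointwise mutual information
    return (pmi + p_ab * math.log(p_ab)) / (p_a * p_b + math.log(p_ba))

def tss(freq_a, freq_b, co_freq, alpha=1.0):
    """Eq. (2): Twitter semantic similarity with scaling factor alpha."""
    return alpha * co_freq / max(freq_a, freq_b)

print(semantosim(p_ab=0.2, p_ba=0.2, p_a=0.4, p_b=0.5))
print(tss(freq_a=120, freq_b=80, co_freq=45))
```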

Term Set Formulation and content generation: The next step in the pipeline is to formulate term sets, which are predominantly hash sets. The preprocessed Twitter dataset is formed into a term set that constitutes a repository of all the unique terms from the dataset. Semantic relevance is established for each of these terms in the repository, and several contents from the online developer forums and Twitter streams are crawled and assembled with the aim of constructing a deck of software-relevant blogs and tweets. Tweets Classification: The principal aim of this phase is to identify and classify the tweets under various categories. Certain features are extracted from the semantic network formulated earlier, and the dataset is then subjected to classification. Here,


Here, the tweets are grouped under several meticulously chosen attributes that are particularly admissible to the domains of requirement engineering, software evolution, user demand, etc. The categorical nature of the target variable is heavily exploited in the proposed algorithm; hence, logistic regression is applied for the task of classification. A logistic function is used to model the dependent variable, and a sigmoid function is used in logistic regression to map the predicted values. The nature of this function, having a non-negative derivative at each point and exactly one inflection point, makes the model highly efficient. The negative sign in the loss depicts the need to maximize the probability by minimizing the loss function.

With the completion of the three distinct processing phases, the outputs of all three are brought together to realize grouping, ranking, and prioritization of the tweets. To achieve this, a novel three-fold semantic matching strategy has been put together. Because the dataset is enormously large, only the top 50% of the classified instances is taken into account. In order to increase the accuracy and precision and to raise the relevance of the recommendations, the top 50% of the classified instances is taken from each class, and, based on the term set enriched with the content of online developer community forums, semantic similarity, entropy-based matching, and a diversity index are computed. For the computation of semantic similarity, the SemantoSim measure is brought into the picture again, calculated with 0.5 as the threshold between the contents generated from developer forums and the classified instances acquired from the preceding phases; Twitter semantic similarity is also worked out beside SemantoSim to further supplement the model performance. A coherence is thus established between the categorized Twitter instances and the contents secured from the configuration management repositories after semantically mapping them with the given techniques.

To compute the semantic relatedness of the tweets with the generated contents, another wave of semantic matching is implemented using entropy measures; the entropy-based semantic feature alignment model remarkably escalates the matching scores. Entropy-based semantic scores are discretized and then divided into groups in line with the corresponding frequency distribution. Shannon entropy has been incorporated in the proposed algorithm to convey the total volume of information enclosed in a tweet; it is estimated between the terms x and y, where x is taken from the term set enriched with contents from online developer community forums and y from the top 50% of classified instances. Entropy, which gives a quantitative measurement of system uncertainty, is a standard estimate of sequence disorder. The items having similar entropy, with a deviation of not more than 0.25, are all formulated into a hash set; anything greater than 0.5 is put into a separate hash set, and then the intersection between the two is computed. With the completion of this phase, the compressibility, complexity, and amount of information contained in a tweet's character sequence are procured.
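A minimal sketch of this entropy-based matching step; taking the Shannon entropy over the character distribution of a tweet is an assumption, since the granularity is not stated above.

    import math
    from collections import Counter

    def shannon_entropy(text):
        # H = -sum(p_i * log2 p_i) over the character distribution of a tweet
        counts = Counter(text)
        n = len(text)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def entropy_match(forum_terms, classified_tweets, step=0.25):
        # Pair items whose entropy scores differ by at most the 0.25 step used above
        return {(x, y)
                for x in forum_terms
                for y in classified_tweets
                if abs(shannon_entropy(x) - shannon_entropy(y)) <= step}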
In the final phase of semantic matching, Pielou's diversity index is determined for each of the x and y occurrences from the classified dataset and the content-equipped term set. The diversity index estimation gives a sense of different aspects of the data's biodiversity, namely evenness, dominance, and richness. The diversity index is computed such that the diversity between the two data points from x and y does not deviate by more than 0.25.


The evenness of a community is represented by Eq. 3, where H′ is the Shannon diversity index score and H′max gives the highest possible value of H′:

J = H′ / H′max    (3)
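A small illustration of Eq. 3, using H′max = ln(S) for S non-empty categories; the category counts passed in are hypothetical stand-ins for the discretized score groups described above.

    import math

    def pielou_evenness(category_counts):
        # J = H' / H'_max, with H'_max = ln(S) for S non-empty categories (Eq. 3)
        total = sum(category_counts)
        h = -sum(c / total * math.log(c / total) for c in category_counts if c > 0)
        s = sum(1 for c in category_counts if c > 0)
        return h / math.log(s) if s > 1 else 0.0

    print(pielou_evenness([10, 10, 10]))  # perfectly even distribution -> 1.0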

At the end of this phase, a Twitter hash set is contrived; it is the most refined hash set obtained, following rigorous treatment under the three-fold semantic matching score calculation strategy. With numerous tweets to be inspected, the relation of each tweet to the user demand is determined so that the algorithm can concentrate on the prime ones to make promising recommendations. All the tweets are individually ranked in an ordered list. For the ranking, certain parameters are taken into consideration, such as retweets, category, sentiment, likes, duplicates, and social ranks, to regulate relevance while ranking. Individual ranking of tweets is realized by means of the Twitter semantic similarity index: the common items in each of the hash sets are taken and then ranked in increasing order of Twitter similarity score. The rank scores assigned to each tweet are mainly based on the density of tweets pertaining to the subject in demand. These ranked tweets constitute the final-stage data for the proposed methodology.

This algorithm has been designed in such a way that both the developer and the client are treated as equivalent users. The client requirement and the user requirement are taken as real-time input data. Each user query is parsed thoroughly, and the final-stage semantic similarity is computed between the ranked tweets and the generated query. The bulk of ranked tweets calls for a semantic similarity-based prioritization of tweets based on distinct software requirement demands and techniques. The semantic similarity-based prioritization technique accounts for time complexity, fault tolerance, etc., to enhance the efficiency of the model. Priorities are assigned at run time to the requirements under the following headers: functionality, availability, fault tolerance, legal issues, user experience, maintenance, operationality, performance score, portability, security, usability, and scalability. On the basis of the priority tags, the tweets are recommended in relation to user demands. To obtain enhanced performance from this multi-pass recommender system, weighted ranking and prioritization are done to make sound and potentially relevant recommendations to the users. The complete system architecture of SMLTR is depicted in Fig. 1. The data are now qualified to be given for training, and the final recommendation procedure is carried out. The system starts with the input of Twitter data, which undergo the preprocessing measures. Once preprocessing is done, multi-phase semantic matching starts off with the semantic similarity-based feature extraction technique, at the end of which a classified truth set with refined data is obtained; all the redundant data have been eliminated by the time the data reach this phase.
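A hedged sketch of the ranking step just described: the hash sets are intersected, and the common tweets are sorted in increasing order of their TSS score; the data structures are illustrative.

    def rank_common_tweets(hash_sets, tss_scores):
        # Intersect the hash sets, then rank the common tweets by TSS (increasing order)
        common = set.intersection(*hash_sets)
        return sorted(common, key=lambda tweet: tss_scores.get(tweet, 0.0))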


Fig. 1 System architecture of the proposed approach

4 Implementation and Performance Evaluation

The implementation of the proposed system is carried out in accordance with the procedure put forth in Algorithm 1. The raw data are first made consistent and transformed into an understandable format through the preprocessing phase. Once the dataset is cleaned according to the model's demands, a semantic network is formulated using ontology integration over the ontologies generated from the software document libraries. Features are extracted from this semantic network and directed to classification under logistic regression. Subsequently, a term set is formulated along with content generation from developer community forums. Three-fold semantic knowledge-based similarity matching, entropy calculation, and the diversity index are computed, and the final set of tweets is generated. The tweets present in this set are ranked based on multiple parameters. For each user query generated, the ranked tweets are given a priority index based on their semantic similarity with the user query, and finally, the tweets potentially relevant to the user queries are recommended.

The execution of the proposed algorithm was realized on a dataset consisting of 94,683 Twitter streams. These tweets were collected from the accounts related to four highly used software applications: Slack, GitHub, Spotify, and Dropbox. To evaluate the performance of the methodology put forth, recall, precision, F-measure, accuracy, and the FDR score have been considered as evaluation metrics. Precision estimates the fraction of predicted positive instances that are actually true; recall gives the fraction of all the relevant instances that have been extracted correctly; accuracy computes the ratio of true predictions to the total number of samples given as input; F-measure estimates how accurate a test is; and the FDR score gives the rate of false discoveries, i.e., it quantifies the false positives furnished in the recommended tweets with respect to the client's as well as the developer's requirements.
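These metrics can be reproduced with scikit-learn as sketched below; computing FDR as 1 - precision is an assumption following the usual definition of the false discovery rate.

    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    def evaluate(y_true, y_pred):
        precision = precision_score(y_true, y_pred, average="macro")
        return {
            "precision": precision,
            "recall": recall_score(y_true, y_pred, average="macro"),
            "accuracy": accuracy_score(y_true, y_pred),
            "f_measure": f1_score(y_true, y_pred, average="macro"),
            "fdr": 1.0 - precision,  # false discovery rate = 1 - precision
        }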


Table 1 Comparison of performance of the proposed SMLTR with other approaches

Model                                    | Average precision % | Average recall % | Average accuracy % | Average F-measure % | FDR
LBTM                                     | 79.12 | 82.15 | 80.63 | 80.6  | 0.21
Naive Bayes                              | 74.45 | 77.26 | 75.85 | 71.33 | 0.26
RF                                       | 79.69 | 81.58 | 80.63 | 80.37 | 0.21
Fuzzy C-Means Clustering + SVM           | 81.14 | 83.54 | 80.87 | 83.80 | 0.19
DT + Cosine Similarity                   | 83.23 | 85.02 | 84.12 | 84.12 | 0.17
Random Forest + SVM + Jaccard Similarity | 87.96 | 89.11 | 88.53 | 88.53 | 0.13
Proposed SMLTR                           | 93.17 | 96.32 | 94.7  | 94.76 | 0.07

Average values of these metrics have been considered to compare the performance of the methodology put forth with the other approaches. Table 1 gives the complete computational analysis of the proposed algorithm along with comparative results for the other strategies. As evident in Table 1, the average precision secured by the proposed framework is 93.17%, with an average recall of 96.32%, an average accuracy of 94.7%, an average F-measure of 94.76%, and an average FDR of 0.07. The scores yielded by the system put forth for all the evaluation metrics taken into account are significant and outperform the existing baseline strategies. The multi-phase feature extraction and semantic similarity computation play a remarkable role in the proposed model's performance. The inclusion of ontologies modeled for the underlying dataset, as well as the SRS documents and several micro-requirements, further helps in boosting the accuracy of the proposed framework. Generating ontologies from software requirement specifications by itself ensures that every possible requirement and domain terminology is included in the ontologies to provide auxiliary knowledge. The semantic network formulation secured by ontology alignment makes the feature extraction phase highly robust, which eventually helps in enhancing the overall model performance. Another factor in the high performance secured by the proposed algorithm is the content generation from online developer communities, which ascertains that recent technological gaps, as well as recent technological additions, are included in the approach. The use of logistic regression, which is a very powerful classifier, makes sure that the classification of tweets is realized with very high accuracy.


In order to accelerate the relevance of the results, Pielou's diversity index, along with entropy-based semantic matching with a step difference of 0.25, is considered; all of this ensures that the relevance of the results is quite high. The re-ranking of tweets based on Twitter semantic similarity, together with an inclusion set comprising four different measures (the two semantic similarity measures, the diversity index, and information content in the form of entropy scores), ensures that the proposed approach yields tweets potentially relevant to the software developers' and clients' requirements. The final results obtained by the proposed algorithm are more efficient than those of any of the frameworks taken into consideration, which can be attributed to the strategies described above.

To compare and evaluate the performance of the proposed approach, various other strategies have been experimented with in a similar environment. The proposed methodology performed exceptionally well, securing the highest scores for the evaluation metrics and the lowest FDR, in comparison to the baseline model LBTM and the other approaches taken up for experimentation. The incorporation of multimodal Naive Bayes along with random forest for classification, and the inclusion of weighted function-based ranking alongside sentiment analysis, helps that model attain a significant accuracy percentage, but there is definite scope for improvement in the classification technique, and semantics could be added to ensure significantly finer results. Since Naive Bayes is a particularly traditional approach and forms a standalone model, it gives the lowest precision percentage and the highest FDR score. For comparison, the experiment was conducted on a standalone random forest-based model as well, and a slightly higher percentage for the evaluation metrics is seen with random forest in the picture; although it is a powerful classifier by itself, it remains insufficient for the task of semantically ranking the tweets and making the final recommendation. When fuzzy C-means clustering is combined with SVM, a much better score for the evaluation metrics is secured, but transforming the categorical dataset for clustering requires a lot of mathematical and transfer functions, which means the error rate is non-negligible and it is not the best-in-class model for the subject under research. The combination of a decision tree with cosine similarity, where cosine similarity provides the semantic similarity score while the decision tree serves as an impressive classifier, yields a remarkable result compared to the other baseline models mentioned, but the lack of tweet ranking causes a recommendation imbalance. In another approach, random forest is hybridized with a support vector machine (SVM) and Jaccard similarity, forming a two-pass classifier consisting of one very powerful classifier and one binary linear classifier along with a semantic similarity measure; it secures much higher precision, recall, etc., but the absence of tweet prioritization skews the results. Moreover, all the baseline models give predictions for a smaller number of data points, while a considerably larger amount of data has been taken into the course of action for carrying out experimentation on the proposed system. Furthermore, this novel strategy of ensembling all the aforementioned strategies together contributes toward enhancing model accuracy and overall performance.
Each algorithm complements the others and hence helps the proposed strategy outperform all the existing baseline techniques.


Fig. 2 Comparison of proposed SMLTR with other approaches

Figure 2 depicts a graphical comparative analysis of precision percentage versus the number of recommendations for the proposed model and the other existing baseline frameworks. It can be seen that the results achieved by the system put forth are highly significant. The proposed methodology achieves a precision of 95.12% against the number of recommendations, which is better than any other strategy outlined here. Incorporating this framework of multi-stage semantic matching, ranking, and tweet prioritization along with classification has contributed greatly to the higher yields of the proposed framework. Taking into account all the data and the corresponding model performances, it can be concluded that the methodology put forth is more efficient than the other existing frameworks. Hence, the proposed framework stands as the best-in-class model for tweet recommendation to trace evolution in software requirements.

5 Conclusion

An innovative strategy is proposed in this work for the recommendation of tweets to trace the evolution of software requirements. The proposed algorithm puts across a semantic similarity-based approach for tweet classification, ranking, prioritization, and recommendation. Logistic regression has been used as the prime classifier over the feature sets secured for the tweets. This methodology is an ensemble of multiple algorithms collated into a calculated and proper pipeline for efficient classification and recommendation of all the textual data present in the corpus. The scores obtained for the evaluation metrics are reliable and make the model evidently best-in-class: the precision obtained by SMLTR is 93.17%, while the recall value achieved is 96.32%; furthermore, the accuracy yielded toward the recommendations made is 94.7%, the F-measure value reaches 94.76%, and the FDR secured by the proposed SMLTR framework is 0.07.


All these assessments help in drawing the conclusion that the proposed system is a top-notch model for tweet recommendation for tracing software evolution. Further work on this system could focus on raising the overall relevance tally. Moreover, distinct techniques of semantic similarity or priority estimation could be used to achieve even better values for the evaluation metrics, and an ensemble of different machine learning classifiers could be tried out to achieve better results.


Detecting Dengue Disease Using Ensemble Classification Algorithms S. Ruban, Naresha, and Sanjeev Rai

Abstract Health care has grown beyond imagination in the last few years under the impact of artificial intelligence. Artificial intelligence applications are used to solve many health issues in society. However, developing these applications involves transforming the data from their original format into a format understandable by the system, and it involves using suitable algorithms appropriate for the problem to be solved. This work discusses an approach used to detect dengue. Performance evaluation was done with a real-time dataset, over which a few classification algorithms were applied; to obtain better accuracy, ensemble learning methods were used. Out of the three machine learning algorithms that were used, namely the light gradient boost classifier, logistic regression, and the support vector machine classifier, the experimental study reveals that the light gradient boost classifier gives a better accuracy of 94.47% compared with the other algorithms. Keywords Machine learning · Ensemble · Dengue · Vector-borne disease · Light gradient boost classifier · Logistic regression · Support vector machine classifier

1 Introduction

Artificial intelligence is transforming health care like never before. From the collection of healthcare data to processing and understanding the data, AI applications are playing a tremendous role. AI is all about developing machines or applications that can assist human beings by emulating human intelligence at various roles and levels. A recent report published by the World Health Organization (WHO) [1] considers artificial intelligence a technology that holds great promise for transforming health care globally. However, it insists on putting sound ethical principles in place to guard and formulate AI's usage in the design, development, and deployment of AI-based solutions.


Another work published recently by Shneiderman [2] lists the various conflicts that arise during AI-enabled development to address healthcare problems; it emphasizes two important aspects of AI, human emulation and the development of useful applications that could contribute to healthcare solutions. Though the objectives of AI research were set many years back, when Alan Turing asked the question, "can machines think?" [3], the usage of AI in health care has been widely discussed and adopted only in recent times. Recent advances in using AI in health care have led to solutions that can predict the outcome of a procedure and assist in predicting emergencies like respiratory arrest and lung cancer, so that healthcare institutions can take better measures in providing healthcare facilities to patients. Artificial intelligence is a broad domain. Researchers have also studied the possibility of using AI to detect the epidemiological patterns that cause epidemics. Machine learning models can be built over medical data and used to analyze vast amounts of data to understand and detect the possible reasons behind an epidemic. Similarly, the clinical notes available in medical institutions can provide the base for developing noninvasive methods of diagnosing diseases before diagnostic tests could detect them. Based on the data available with the World Health Organization, the occurrence and spread of dengue are rising in many parts of the world; accordingly, studies suggest around 50 million infections each year [4–6]. Early diagnosis of dengue fever can reduce this burden to a large extent. Seeing the success of machine learning in different domains, researchers began to use machine learning techniques to develop tools that can assist clinicians in diagnosing illnesses at an early stage. Other benefits include saving the costs and time taken by diagnostic tests [7, 8]. Another work done [9] in this domain gives us a better result. The next section deals with the existing work in this area; Sect. 3 describes the methodology adopted for this research study; Sect. 4 presents the results and discussion, followed finally by the conclusion.

2 Literature Survey

A few of the earlier works on detecting dengue by applying machine learning algorithms point to the usage of the artificial neural network (ANN) algorithm [10]. A recent work by researchers in Paraguay also points to the usage of ANN and SVM for detecting dengue [11]. ML has been used to determine the most effective treatment [12], identify patients at risk for a particular disease, suggest treatment plans, and predict disease. Traditionally, predictive analytics used just conventional logistic regression modeling; however, machine learning models that are better at prediction [13] help to uncover diseases and symptoms that are hiding in plain sight. Vector-borne diseases are an important public health problem in India, resulting in about 7 lakh deaths annually. They are infections transmitted by the bites of infected mosquitoes or other flies.


Karnataka state, with its long coastal region and irrigation, facilitates the growth of mosquitoes, which leads to high transmission of dengue [14]. AI and big data technology can help in understanding disease outbreaks [15]. They can also be used for building predictive models [16] and for correlating outbreaks with other factors like climate and rainfall. One of the works done on dengue was carried out in the Thiruvananthapuram district, Kerala [17]. Another experimental study, involving a Bayesian network, was done in Malaysia [18]. This dengue surveillance tool was implemented in the state of Penang; it incorporates features to enter data related to dengue outbreaks and uses geographical location to track and predict outbreaks earlier. The authors of this work claim an accuracy of 81.08% and predicted, a month in advance, 37 outbreaks that happened in the region. This study helped to validate the claim of using machine learning as a tool for real-time surveillance. Another experimental study, involving the support vector regression (SVR) algorithm [19], was done by collecting data from Guangdong Province, China. The authors collected meteorological data from 2011 to 2014 and developed a model to predict the occurrence of dengue in the locality. The authors also studied the usage of different machine learning algorithms for their study but decided to use the SVR algorithm, since it gives the optimal performance. The authors claim this experimental finding could help the government and other people relevant to public health respond early to dengue outbreaks. A similar experimental study was done in Manila, Philippines [20]. The authors captured various data variables related to meteorological factors such as the direction and speed of the wind, temperature, and humidity. Four years' data were gathered and used for the study. The authors used various modeling techniques such as random forest, gradient boosting, and general additive modeling; though they concluded that every modeling technique is able to predict the outcome, random forest performed well for the given dataset. This study is aimed at deriving insights from the existing hospital data of the Father Muller Hospital in dealing with dengue. Performance evaluation was done with the real-time dataset available for the period from 2015 to 2018. A few classification algorithms have been used over the dataset; to obtain better accuracy, ensemble learning methods were used. Out of the three machine learning algorithms that were used, namely the light gradient boost classifier, logistic regression, and the support vector machine classifier, this experimental study reveals that the light gradient boost classifier performs better, with an accuracy of 94.47%, compared with the other algorithms.

3 Methodology

A medical institution holds a huge repository of health data (textual and image) that has remained underutilized over time. Machine learning and data analytics provide a path for transforming this data into great wealth by discovering patterns.


Fig. 1 Methodology of dengue fever classification using ensemble methods

The workflow and methodology followed in this experimental study are elaborated below and represented in Fig. 1: the different data sources from which the real-time data were collected, the format of the dengue data, the data gathering process, data preprocessing, data processing, and the ensemble classifiers.

3.1 Data Collection

Dengue is one of the predominant vector-borne diseases globally, caused by mosquitoes. A person suffering from this illness mostly has fever along with other symptoms [21]. Symptoms ordinarily last about two to seven days, and individuals normally start recovering after seven days. The personal clinical notes of each patient treated within the stipulated time interval were taken from the Department of Patient Registration and Clinical Records section (MRD). The clinical records are arranged based on ICD [22].


3.2 Data Preprocessing

The collected data were raw, and most of them were handwritten. These handwritten portions of the clinical notes were written by the various doctors and nurses attending to the patient undergoing treatment; only the discharge summary, which is part of the clinical notes, was typed. Since the quality of data is important, missing entries, inconsistencies, and typographical and semantic errors in the raw data were clarified and rectified in discussion with the healthcare professionals assigned for that. This step does not give any meaningful insight by itself, but it helps to identify the right assumptions to be made for the analysis and the features that have to be extracted. The Tesseract optical character recognition (OCR) engine [23] was used for extracting the raw data from the scanned images stored in the clinical records. However, the accuracy of the extracted data was moderate, depending upon the clarity of the images and the blur and noise affecting their quality, so the extracted data had to be checked manually by the healthcare professionals. This was followed by pattern identification.

3.3 Data Processing

Each image containing the raw data of a dengue patient was captured. For extracting the information, Python-tesseract was used; it has the capability of recognizing and reading text embedded in images, and textual content was generated from the images. To identify the features from the clinical notes, a dictionary was prepared and finalized after consulting the physician in the hospital, and the listed symptoms were looked up in the clinical records. The entire data processing procedure is elaborated in the research work published earlier by the authors [24]. Similar work was also done for another common infectious disease, malaria [25]. The data dictionary is listed in Fig. 2.
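A minimal sketch of this step, assuming the pytesseract wrapper named above; the symptom list is a hypothetical subset of the physician-approved dictionary, and the image path is a placeholder.

    from PIL import Image
    import pytesseract

    # Hypothetical subset of the physician-approved symptom dictionary
    SYMPTOMS = ["fever", "headache", "vomiting", "joint pain", "chills", "cough"]

    def extract_symptoms(image_path):
        # OCR the scanned clinical note, then flag dictionary symptoms found in the text
        text = pytesseract.image_to_string(Image.open(image_path)).lower()
        return {symptom: int(symptom in text) for symptom in SYMPTOMS}

    print(extract_symptoms("clinical_note_001.png"))  # placeholder file name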

3.4 Model Building

Machine learning techniques are basically algorithms that try to find the relationships between the different features found in a dataset. A machine learning model that produces discrete categories [26] is called a classification algorithm. Example case studies for understanding classification include predicting whether a patient has malaria or not, or whether a tumor is benign or malignant. Such classification problems exist in medicine, and classification algorithms are used in those areas. In this research work, we have used a few algorithms: the light gradient boost classifier, logistic regression, and the SVM classifier.


Fig. 2 Snapshot of the data dictionary created for dengue data processing

LR is one of the supervised machine learning algorithms; the outcome obtained is binary, such as yes or no, 0 or 1, or true or false [27]. Another popular classification algorithm is the support vector machine [28]. The intention of the SVM is to find the best line or decision boundary that can segregate the classes; this best decision boundary is known as a hyperplane. Similarly, the light gradient boosting (LGB) machine learning algorithm [29] is one of the popular ensemble-based classification algorithms.
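A hedged sketch of training the three classifiers; the feature matrix and labels are random stand-ins for the binary symptom matrix derived from the data dictionary, and the hyperparameters are library defaults rather than the authors' settings.

    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 12))  # stand-in for the binary symptom matrix
    y = rng.integers(0, 2, size=500)        # stand-in for the dengue diagnosis label

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    for name, model in [("LGBM", LGBMClassifier()),
                        ("Logistic regression", LogisticRegression(max_iter=1000)),
                        ("SVM", SVC())]:
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))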

4 Results and Discussion

The real-time data were subjected to analysis after the preprocessing was completed. The analysis was done based on the data dictionary that was developed; a few of the insights that were generated are presented in Figs. 3, 4, 5, 6, and 7. The model was developed using three classification algorithms: the light gradient boost classifier, logistic regression, and the support vector machine. Different metrics for evaluating the machine learning models were generated and are listed below. The light gradient boost classifier (LGBM) gives the best accuracy of 94.47%, followed by the support vector machine (SVM) at 91.77% and logistic regression at 86.15%.

Fig. 3 Dengue cases recorded from 2015 to 2018 that showed the symptom of fever

Fig. 4 Patients who displayed the symptoms of cough, cold, and headache

Fig. 5 Patients who displayed the symptoms of body ache, joint pain, and burning micturition

Fig. 6 Patients who displayed the symptoms of vomiting, chills, and loose stools

Fig. 7 Performance comparison of the classification algorithms

5 Conclusion

This research work explores the possibility of a noninvasive method of identifying dengue from its symptoms. Before the lab results confirm the illness, preliminary treatment can be started to avoid the adverse effects of dengue. More data from various other hospitals could facilitate better results. Though traditional classification algorithms could be used to solve this problem, ensemble methods prove to be much more effective than the other methods.

Acknowledgements Authors acknowledge that this work was carried out in the Big Data Analytics Lab funded by VGST, Govt. of Karnataka, under K-FIST(L2)-545, and the data were collected from Father Muller Medical College, protocol no: 126/19 (FMMCIEC/CCM/149/2019).

References

1. WHO report on AI. https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use. Accessed on 28 Oct 2021
2. Shneiderman B (2020) Design lessons from AI's two grand goals: human emulation and useful applications. IEEE Trans Technol Soc 1(2):73–82
3. Turing AM (1950) Computing machinery and intelligence. Mind 49:433–460
4. WHO (1999) Strengthening implementation of the global strategy for dengue fever/dengue haemorrhagic fever prevention and control. Report of the informal consultation. Geneva, Switzerland
5. San Martín JL, Solórzano JO et al (2010) Epidemiology of dengue in the Americas over the last three decades: a worrisome reality. Am J Trop Med Hyg 82(1):128–135
6. Shepard DS, Undurraga EA, Betancourt-Cravioto M et al (2014) Approaches to refining estimates of global burden and economics of dengue. PLoS Negl Trop Dis 8(11)
7. Jain A (2015) Machine learning techniques for medical diagnosis: a review. In: Conference center, New Delhi, India
8. Kononenko I (2001) Machine learning for medical diagnosis: history, state of the art and perspective. Artif Intell Med 23(1):89–109
9. Raval D, Bhatt D, Kumhar MK, Parikh V, Vyas D (2016) Medical diagnosis system using machine learning. Int J Comput Sci Commun 7(1):177–182
10. Ibrahim F, Taib MN, Abas WABW, Guan CC, Sulaiman S (2005) A novel dengue fever (DF) and dengue haemorrhagic fever (DHF) analysis using artificial neural network (ANN). Comput Methods Programs Biomed 79(3):273–281
11. Mello-Roman JD et al (2019) Predictive models for the medical diagnosis of dengue: a case study in Paraguay. Comput Math Methods Med 1–9
12. Obermeyer Z, Emanuel EJ (2016) Predicting the future-big data, machine learning and clinical medicine. N Engl J Med 375:1216–1219
13. Cuddeback J (2017) Using big data to find hypertension patients hiding in plain sight. AMGA Analytics
14. Arali PK et al (2019) Assessment of national vector borne disease control programme in state of Karnataka. Int J Community Med Public Health 6(2):525–532
15. Wong ZSY et al (2019) Artificial intelligence for infectious disease big data analytics. Infect Dis Health 24:44–48
16. Guo J, Li B (2018) The application of medical artificial intelligence technology in rural areas of developing countries. Health Equity 2(1)
17. Valson JS, Soman B (2017) Spatiotemporal clustering of dengue cases in Thiruvananthapuram district, Kerala. Indian J Public Health 61:74–80
18. Sundram BM, Raja DB, Mydin F, Yee TC, Raj K (2019) Utilizing artificial intelligence as a dengue surveillance and prediction tool. J Appl Bioinformatics Comput Biol 8
19. Guo P, Liu T, Zhang Q et al (2017) Developing a dengue forecast model using machine learning: a case study in China. PLoS Negl Trop Dis 11(10)
20. Carvajal TM, Viacrusis KM, Hernandez LFT, Ho HT, Amalin DM, Watanabe K (2018) Machine learning methods reveal the temporal pattern of dengue incidence using meteorological factors in Metropolitan Manila, Philippines. BMC Infect Dis 18(1):183
21. Symptoms of dengue. https://www.cdc.gov/dengue/symptoms. Accessed on 3 Oct 2020
22. ICD code for dengue. https://icd.codes/icd10cm/A90. Accessed on 21 Sept 2020
23. Smith R (2007) An overview of the Tesseract OCR engine. In: Proceedings of the ninth international conference on document analysis and recognition (ICDAR), IEEE Computer Society, pp 629–633
24. Ruban S, Rai S (2021) Enabling data to develop an AI-based application for detecting malaria and dengue. In: Tanwar P, Kumar P, Rawat S, Mohammadian M, Ahmad S (eds) Computational intelligence and predictive analysis for medical science: a pragmatic approach. De Gruyter, Berlin, Boston, pp 115–138
25. Ruban S, Naresh A, Rai S (2021) A noninvasive model to detect malaria based on symptoms using machine learning. In: Advances in parallel computing technologies and applications. IOS Press, pp 23–30
26. Sidey-Gibbons JAM, Sidey-Gibbons CJ (2019) Machine learning in medicine: a practical introduction. BMC Med Res Methodol 19:64
27. Pintelas P, Livieris IE (2020) Special issue on ensemble learning and applications. Editorial, MDPI 4
28. Harimoorthy K, Thangavelu M (2020) Multi-disease prediction model using improved SVM-radial bias technique in healthcare monitoring system. J Ambient Intell Human Comput
29. Ogunleye A, Wang QG (2019) XGBoost model for chronic kidney disease diagnosis. IEEE/ACM Trans Comput Biol Bioinf 17(6):2131–2140

Masked Face Recognition and Liveness Detection Using Deep Learning Technique Mukul Mishra, Lija Jacob, and Samiksha Shukla

Abstract Face recognition has been the most successful image processing application in recent times. Most work involving image analysis uses face recognition to automate attendance management systems. Face recognition is an identification process that verifies and authenticates a person using their facial features. In this study, an intelligent attendance management system is built to automate the process of attendance. Here, while entering, a person's image will get captured. The model will detect the face; then the liveness model will verify whether there is any spoofing attack; then the mask detection model will check whether the person has worn a mask or not. In the end, face recognition will extract the facial features, and if the person's features match the database, their attendance will be marked. In the face of the COVID-19 pandemic, wearing a face mask is mandatory for safety measures, and the current face recognition systems are not able to extract the features properly. In the proposed method, the Multi-task Cascaded Convolutional Networks (MTCNN) model detects the face; then a classification model based on the architecture of MobileNet V2 is used for liveness and mask detection; then the FaceNet model is used for extracting the facial features. In this study, two different models for recognition have been built: one for people with masks and another for people without masks. Keywords Face recognition · Image processing · COVID-19 · MTCNN · MobileNet V2 · FaceNet

1 Introduction

Previously, in any organization, attendance used to be recorded in a register where employees had to sign, which was time-consuming and caused inconvenience to both the employees and the organization. There were thousands of employees in an organization, and keeping a record of their attendance in the register was quite tedious.


Later, institutions and organizations implemented biometric systems such as fingerprint-based attendance systems. Still, employees had to scan their fingerprints in these systems, which was often not convenient due to moisture retained on fingertips or the presence of dust particles that led to problems in identifying the biometric. Following this came face recognition-based attendance systems, where employees just had to stand in front of a camera to mark their attendance. Due to the coronavirus, wearing a mask is mandatory, which disabled conventional attendance-taking using a face recognition system: it led employees to remove their masks to give their attendance, interfering with basic COVID-19 protocols. Further, employees tend to become frustrated with the attendance process when standing in long lines and removing masks. The current face recognition systems are also vulnerable to spoofing attacks, such as videos or photos featuring genuine employees placed in front of the camera. Every organization now requires masks before entering, and, as of now, enforcing this relies on human support: the warning system is purely manual, with human security staff ensuring that people entering the firm wear a mask. The idea is to develop a system where people do not have to take off their masks to have their attendance recorded. It is a generalized system that works whether or not the employee is wearing a mask: the system will check whether the person is real; if so, the model will check whether the person is wearing a mask; then, for recognition, two different models are used, one with and one without a mask.

Motivation: In the pandemic, wearing a face mask is advised, yet the currently available face recognition models cannot recognize a person in a mask. People have to take off their masks before the model identifies them and records their attendance, but doing so violates safety measures. So, there is a need for a robust attendance system that can recognize people whether or not they are wearing a mask. Moreover, the current face recognition systems are at risk of spoofing attacks such as placing photos or videos of a genuine person in front of the camera.

Contribution: The existing facial recognition-based attendance systems could not recognize a person wearing a mask and were vulnerable to spoofing attacks. The critical contribution of this research is the creation of a complete attendance management system capable of recognizing a person even when they are wearing a mask. The system can also avoid spoofing attacks, as it can determine whether a person is real or not.

Organization: The rest of the paper is as follows. Section 2 gives a broad overview of relevant research. Section 3 depicts the problem definition, research challenges, and dataset description. The proposed architecture is shown in Sect. 4. Section 5 discusses the performance analysis and results. The study is concluded in Sect. 6 with a discussion of future work.


2 Related Works

In Anwar et al. [1], the authors address the problem of the unavailability of masked images in face recognition datasets. From the VGGFace2 dataset, they took a small portion of data for their research work and named that dataset VGGFace2 Mini. They then transformed this facial dataset into a masked-face dataset using the open-source tool MaskTheFace. For face detection, MTCNN was used. They then retrained a FaceNet model on the VGGFace2 Mini dataset and used the MFR2 dataset for testing. Shiming et al. [2] also state the problem of not being able to identify a person in a mask, so they designed a model for recognizing a person even when wearing a mask. The authors introduced a new masked dataset named MAFA. In addition to the new dataset, a new technique called LLE-CNN is presented in this study; it predicts the missing facial region using KNN, uses the VGGFace model as the feature descriptor, and uses a deep neural network for mask recognition. Qureshi et al. [3] took home security to the next level by building a facial recognition door unlock system that could make homes more secure. The OpenCV library was used for face detection, followed by segmentation; the images were converted into grayscale, preprocessing was done on the grayscale images, and significant features were extracted using the HOG classifier. In Ahmedi et al. [4], the authors tried to reduce the burden on teachers of taking attendance and to solve the problem of proxies. In this work, a video clip of the classroom is first taken and stored in the database, and these videos are converted to frames/images. An Adaboost algorithm is then used for face detection, followed by feature extraction using the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) algorithms. After face detection is completed, the detected faces are compared with the faces stored in the database during face recognition by using a support vector machine (SVM) classifier. The research work in Pooja et al. [5] also tried to reduce the burden on teachers and to solve the problem of proxies; the HAAR cascade classifier is used for face detection, and after face detection is done, the features are extracted and a classifier is built, with the Adaboost algorithm performing feature extraction and training. In Rahman et al. [6], the authors propose a system that can restrict COVID-19's quick spread by identifying persons who are not wearing face masks in a smart city network where CCTV cameras monitor all public spaces. The developed system faces difficulties classifying faces covered by hands, since they look almost like a person wearing a mask, and when a person without a face mask travels in a vehicle, the system cannot locate that person correctly. The authors preprocessed the images and transformed them into grayscale, because an RGB


color image contains much redundant information that is not necessary for face mask detection. The image is then reshaped to 64 × 64 to maintain the uniformity of the input images to the architecture. The authors then created a CNN model from scratch that was trained for pattern recognition. The research work in Sengur et al. [7] addresses the vulnerability of face recognition systems to spoofing or presentation attacks, i.e., a photo, video, or 3-D mask of a genuine user's face may be utilized to fool the biometric system, and the authors try to bridge this gap. The authors used the NUAA and CASIA datasets. The features are extracted using the fc6 and fc7 activations of both the AlexNet and VGG16 models; after feature extraction, an SVM classifier is used for classification. Li et al. [8] state that face biometrics is needed by almost all security systems whose functionality depends upon accurate recognition. However, the major challenge most authentication systems face is identity theft or spoofing; there are plenty of liveness detection techniques to deal with it, so the authors compare all the available techniques in their paper. The authors fine-tuned a VGGFace model, and then features were extracted; after this, the authors used SVM as a classifier for identifying the real and the fake images.

3 Problem Definition, Challenges, and Data Description

3.1 Problem Definition

Due to advancements in technology, attendance methods in schools, colleges, and other organizations keep evolving: from signing a register while entering, to biometric systems like fingerprint [9] and iris detection, etc. Over the last few years, attendance has been recorded using face recognition technology [10]. But due to the coronavirus, wearing a face mask is mandatory, which creates problems: removing the mask breaches the safety protocol for the individual and the people around. Moreover, people get agitated at standing in queues and removing masks to facilitate the attendance system [11]. The current face recognition systems are also prone to spoofing attacks [12], such as placing photos or videos of a genuine person in front of the camera.

3.2 Challenges

Since this is an attendance system, the error should be as low as possible. When attendance is taken using face recognition technology, there are many challenges like illumination variation, pose variation, occlusion, expression variation, low resolution, etc. Here, illumination is nothing but lighting variations; even a slight change in light can pose a challenge to an automated face recognition-based attendance system.


Fig. 1 Real image

Occlusion is blockage, when the face is not visible due to some object; during COVID, the face mask is the biggest cause of occlusion. Lighting conditions also affect liveness detection: with even a tiny change in lighting conditions, the model will not perform well.

3.3 Dataset Description

3.3.1 Liveness Detection

The liveness dataset consists of 1336 images in total, of two types: the first type consists of photos of people captured directly, and the second of images of people captured through indirect means such as printouts or hard copies. Pictures taken directly have standard lighting conditions, while photos taken of a printout have somewhat dull lighting. Figure 1 shows a real image of a person, whereas Fig. 2 shows a fake image of a person, taken by displaying the image of a person on a mobile phone.

3.3.2 Mask Detection

The mask detection [13] data consist of two types of images: the first type includes pictures of people in masks, and the second type contains photos of people without masks. In the masked images, people's faces are covered with masks, as depicted in Fig. 3, while in the without-mask category the faces are clearly visible, as displayed in Fig. 4. In total, there are 2816 images.


Fig. 2 Fake image

Fig. 3 With mask



Fig. 4 Without a mask

3.3.3 Masked Face Recognition

The masked face recognition dataset consists of images of people wearing the mask. More than 100 images have been collected for every individual, and then the masked region is cropped, as seen in Fig. 5. Pictures are gathered to make each side of the face clearly visible.

Fig. 5 Segmented portion from masked image


Fig. 6 Without masked face

3.3.4 Without Masked Face Recognition

More than 100 images have been collected for every individual in which each side of the face is visible, as shown in Fig. 6.

4 Methodology

The following steps are followed in the methodology:

Step 1: The image of a person is captured.
Step 2: The MTCNN model detects the face in the entire image.
Step 3: The liveness model detects whether there is any attempt at a spoofing attack.
Step 4: If the model finds a spoofing attack, a warning message is displayed; if not, it proceeds to the next step.
Step 5: The mask detection model checks whether the person has worn a mask or not.
Step 6: If the person has worn a mask, the masked face recognition model runs and recognizes the person; if not, the without-mask face recognition model runs and recognizes the person.
Step 7: If the model can recognize the person, their attendance is marked.

A minimal sketch of this control flow is given below.
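In the sketch, every helper passed in is a hypothetical placeholder for the corresponding module of Sect. 4.1, not an API from this study.

    def mark_attendance(frame, detect_face, is_live, wears_mask,
                        recognize_masked, recognize_unmasked, record):
        # Step 2: face detection (e.g., via MTCNN)
        face = detect_face(frame)
        if face is None:
            return "no face detected"
        # Steps 3-4: liveness/spoofing check before anything else
        if not is_live(face):
            return "warning: possible spoofing attack"
        # Steps 5-6: branch on mask detection to the matching recognizer
        person = recognize_masked(face) if wears_mask(face) else recognize_unmasked(face)
        # Step 7: mark attendance only on a successful match
        if person is not None:
            record(person)
        return person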

4.1 Module Description

4.1.1 Image Capturing

While entering, employees just have to stand in front of the camera, and their image will get captured. The face should be clearly visible, because to recognize a face from an image, the face must be properly visible; the other parts are of no use, so the unnecessary features are removed. Since this is employee attendance management, only one employee should stand in front of the camera.



4.1.2 Face Detection

After capturing the image, the face has to be detected in the next step, because the region of interest for face recognition is the face, which has to be extracted. Many models exist to detect the facial part, like Histograms of Oriented Gradients (HOG), Haar cascades, and Multi-task Cascaded Convolutional Networks (MTCNN) [14], but in this study, MTCNN has been used. MTCNN is a modern face detection technique that uses a three-stage neural network detector: the image is first scaled numerous times to detect faces of various sizes, and the P-network then analyses the images and performs the initial detection.
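A small usage sketch, assuming the mtcnn Python package; the image file name is a placeholder.

    import cv2
    from mtcnn import MTCNN

    detector = MTCNN()
    image = cv2.imread("employee.jpg")  # placeholder path
    if image is not None:
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB; OpenCV loads BGR
        faces = detector.detect_faces(rgb)  # list of dicts with 'box', 'confidence', 'keypoints'
        if faces:
            x, y, w, h = faces[0]["box"]
            face_crop = rgb[y:y + h, x:x + w]  # region of interest passed downstream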

4.1.3 Liveness Detection

Liveness detection determines whether the person is real or not. It is necessary because a user could hold up another person's photo, or even show an image or video on their smartphone to the camera that performs face recognition. A liveness model can be trained from scratch or using transfer learning techniques; training a model from scratch is not easy and requires a lot of computational power. As a result, in this work, two distinct architectures, MobileNet V2 [15] and DenseNet 121 [16], were attempted for transfer learning, and MobileNet V2 was chosen because of its lower computing requirements.
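A hedged transfer learning sketch in Keras; the classification head (pooling, dense, and dropout layers) and input size are illustrative assumptions, as the text does not specify the layers added on top of the frozen MobileNet V2 backbone.

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                             include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained convolutional backbone

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. spoofed
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])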

4.1.4 Mask Detection

Mask detection is used to check whether the person is wearing a mask or not. To do so, the model needs two types of images: in the first, the face is covered with a mask; in the other, the face is clearly visible. The region below the nose is covered in the masked images, while in the without-mask images the entire face is clearly visible. If there is any spoofing attack, a warning message is displayed; if not, the mask detection model is activated and checks whether the employee has worn a mask.

4.1.5 Feature Extraction

With Mask. If a person is in a mask, half of the face, i.e., the part below the nose, is fully covered, so that covered region should be removed before extracting the features. For feature extraction, there are many models like FaceNet [17], VGG16 [18], etc.; in this study, the FaceNet model has been used. It is considered a state-of-the-art model, developed by Google and based on the Inception architecture; it uses Inception modules in blocks to reduce the number of trainable parameters. The model takes 160 × 160 RGB images as input and generates an embedding of size 128 for each image. If the mask detection model finds that the employee is in a mask, the masked region is cropped from the face image, after which the FaceNet model extracts the facial features.
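A sketch assuming the keras-facenet package; note that the embedding dimensionality depends on the pretrained weights shipped with the package, whereas the model described above produces 128-dimensional vectors.

    import numpy as np
    from keras_facenet import FaceNet

    embedder = FaceNet()
    face_crop = np.zeros((160, 160, 3), dtype=np.uint8)  # placeholder for a cropped RGB face
    # embeddings() accepts a batch of face images and returns one vector per face
    vectors = embedder.embeddings([face_crop])
    print(vectors.shape)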

Without Mask. If the person is not wearing a mask, the entire face is clearly visible, so feature extraction is a straightforward task: a facial feature extraction model like FaceNet can be applied directly.

4.1.6 Classification

After obtaining the facial features, an artificial neural network model is trained to classify the employee. If the model can recognize the employee, then attendance is marked.
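A minimal Keras sketch of such a classifier over the 128-dimensional embeddings is given below; the hidden-layer size and dropout rate are illustrative assumptions.

```python
# An ANN classifier over FaceNet embeddings; one softmax unit per employee.
import tensorflow as tf

def build_employee_classifier(num_employees):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128,)),          # FaceNet embedding
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_employees, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```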

4.2 Proposed Model

Figure 7 presents the architecture diagram of the proposed attendance system. It consists of four different modules: face detection, spoof detection, mask detection and face recognition. After entering the organization, people just have to stand in front of the camera to capture their image. In that image, the region of interest has to be found, which is done by the MTCNN model. It finds the coordinates of the face in the image, and then that facial region is cropped. After this, the spoof detection model checks whether the person is real or not. If the model finds the person is not real, a warning message is displayed; otherwise, if the person is real, the image is passed forward for recognition. Two different models have been trained for recognition. In the first model, i.e., for people without a mask, the entire face image is used for feature extraction; but if the person is wearing a mask, the masked region is cropped and the region above the mask is kept.


Fig. 7 Flowchart—the processes involved in the attendance management system


Table 1 Liveness detection results

Architecture   Training accuracy (%)   Validation accuracy (%)   Training loss   Validation loss
MobileNet V2   99.91                   99.29                     0.0012          0.0558
DenseNet 121   98.50                   98.07                     0.225           0.1899

The unmasked region is then used to extract the features for recognition, because two different individuals can wear the exact same type of mask. The FaceNet model extracts the features, producing a 128-dimensional output. These 128 features are then used to train the classifier model. In this study, an ANN classifier has been used for the classification. If the classifier can recognize the person, their attendance is marked.

5 Experimental Result

5.1 Liveness Detection

As seen in Table 1, the training accuracy, validation accuracy, training loss and validation loss for MobileNet V2 are 99.91%, 99.29%, 0.0012 and 0.0558, respectively, whereas for DenseNet 121, they are 98.50%, 98.07%, 0.225 and 0.1899, respectively. The model can correctly classify whether the person is real or fake, as depicted in Fig. 8.

5.2 Mask Detection

Table 2 shows the mask detection model results. The training and validation accuracies are 98.86% and 99.79%, respectively. As seen in Figs. 9 and 10, the model can correctly classify whether a person is wearing a mask or not.

5.3 Face Recognition with Mask

Table 3 shows the face recognition with mask model results. The training and validation accuracies are 89.34% and 79.67%, respectively. As seen in Fig. 11, the model can recognize the person when they are wearing a mask.

Fig. 8 Liveness detection
Fig. 9 Mask detection (without mask)
Fig. 10 Mask detection (with mask)

Table 2 Mask detection results

Training accuracy (%)   Validation accuracy (%)   Training loss   Validation loss
98.86                   99.79                     0.0366          0.0397

Table 3 Face recognition with mask result

Training accuracy (%)   Validation accuracy (%)   Training loss   Validation loss
89.34                   79.67                     0.267           1.0955

Fig. 11 Recognition with a mask

5.4 Face Recognition Without a Mask

Table 4 shows the face recognition without mask model results. The training and validation accuracies are 90.67% and 83.68%, respectively. As seen in Fig. 12, the model can recognize the person.

Table 4 Face recognition without mask result

Training accuracy (%)   Validation accuracy (%)   Training loss   Validation loss
90.67                   83.68                     0.4911          0.8418

Fig. 12 Recognition without a mask

6 Conclusion and Future Work

In recent times, an automated attendance management system has become necessary to maintain social distancing, so this research work is relevant and much needed. The gaps that exist in current face recognition systems can be eliminated by implementing this face recognition system. With this system, people do not have to take off their masks while registering their attendance, and the system reduces the chances of spoofing attacks. From this system, five different products can be launched: mask detection, liveness detection, masked face recognition, without-mask recognition and, last but not least, a proper attendance management system. In this study, our primary focus is to make a robust attendance management system beneficial for schools, colleges or any organization. In the future, one can reduce the model complexity by building a single robust model that works both for masked and for unmasked faces, instead of training two different models. Moreover, in this study the liveness model was purely based on lighting conditions, which will not work in every situation, so a new liveness model can be trained that gives the user tasks such as waving a hand.

References

1. Anwar A, Raychowdhury S (2020) Masked face recognition for secure authentication. arXiv:2008.11104
2. Ge S, Li J, Ye Q, Luo Z (2017) Detecting masked faces in the wild with LLE-CNNs. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2682–2690
3. Prasath AR, Kumar A, Yadav A, Acharya B, Tauseef M (2020) Face recognition door lock system
4. Ahmedi A, Nandyal S (2015) An automatic attendance system using image processing. Int J Eng Sci (IJES) 4(11):1–8
5. Pooja GR, Poornima M, Palakshi S (2010) Automated attendance system using image processing. Int J Adv Netw Appl
6. Rahman MM et al (2020) An automated system to limit COVID-19 using facial mask detection in smart city network. In: 2020 IEEE international IOT, electronics, and mechatronics conference (IEMTRONICS). IEEE


7. Şengür A et al (2018) Deep feature extraction for face liveness detection. In: 2018 international conference on artificial intelligence and data processing (IDAP). IEEE
8. Bhat K et al (2017) Prevention of spoofing attacks in FR based attendance system using liveness detection
9. Ujan IA, Imdad AI (2011) Biometric attendance system. In: The 2011 IEEE/ICME international conference on complex medical engineering. IEEE
10. Khairnar V, Khairnar CM (2021) Face recognition based attendance system using Cv2. In: Techno-Societal 2020. Springer, Cham, pp 469–476
11. Pooja GR, Poornima M, Palakshi S (2010) Automated attendance system using image processing. Int J Adv Netw Appl
12. Li L et al (2016) An original face anti-spoofing approach using partial convolutional neural network. In: 2016 sixth international conference on image processing theory, tools and applications (IPTA). IEEE
13. Islam MdS et al (2020) A novel approach to detect face mask using CNN. In: 2020 3rd international conference on intelligent sustainable systems (ICISS). IEEE
14. Zhang K et al (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Sig Process Lett 23(10):1499–1503
15. Sandler M et al (2018) MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition
16. Huang G et al (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition
17. Schroff F, Kalenichenko D, Philbin J (2015) FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition
18. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556

Method of Optimal Threshold Calculation in Case of Radio Equipment Maintenance

Oleksandr Solomentsev, Maksym Zaliskyi, Yuliya Averyanova, Ivan Ostroumov, Nataliia Kuzmenko, Olha Sushchenko, Borys Kuznetsov, Tatyana Nikitina, Eduard Tserne, Vladimir Pavlikov, Simeon Zhyla, Kostiantyn Dergachov, Olena Havrylenko, Anatoliy Popov, Valerii Volosyuk, Nikolay Ruzhentsev, and Oleksandr Shmatko

Abstract The paper deals with a method for optimal threshold calculation during implementation of radio equipment condition-based maintenance. During radio equipment operation, sudden and gradual failures are possible. Such failures lead to deterioration of the technical condition of radio equipment. Sudden failures are unpredictable and cannot be prevented. In the case of a gradual failure, its possible moment of occurrence can be estimated by observing diagnostic variables of the equipment. Such variables are transmitter power, voltages at the inputs and outputs of equipment units, and currents in the branches of equipment circuits. To prevent gradual failure, it is necessary to implement preventive maintenance. Such actions are performed based on the result of comparing measured data with a previously calculated preventive threshold. The problem of preventive threshold determination is solved for an efficiency measure in the form of average specific operational costs in the case of a known probability density function for parameters of the diagnostic variables model.

Keywords Data processing · Operation system · Diagnostic variables · Maintenance · Optimization

O. Solomentsev · M. Zaliskyi (B) · Y. Averyanova · I. Ostroumov · N. Kuzmenko · O. Sushchenko National Aviation University, Huzara av. 1, Kyiv 03058, Ukraine e-mail: [email protected] B. Kuznetsov State Institution “Institute of Technical Problems of Magnetism of the National Academy of Sciences of Ukraine”, Industrialna st. 19, Kharkiv 61106, Ukraine T. Nikitina Kharkiv National Automobile and Highway University, Ya. Mudroho st. 25, Kharkiv 61002, Ukraine E. Tserne · V. Pavlikov · S. Zhyla · K. Dergachov · O. Havrylenko · A. Popov · V. Volosyuk · N. Ruzhentsev · O. Shmatko National Aerospace University “Kharkiv Aviation Institute”, Chkalov st. 17, Kharkiv 61070, Ukraine © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_6


1 Introduction

1.1 Introduction to the Problem

The intended use of radio equipment in civil aviation is generally associated with providing the functions of communication, navigation and surveillance. The technical condition of such equipment affects the safety and regularity of aircraft flights. The function of technical condition monitoring is implemented in the operation system (OS) of radio equipment [1, 2]. The OS contains not only radio equipment but also regulatory documents, personnel, operation means (including measuring equipment), processes, resources and others [3, 4]. The evolution of the radio equipment OS can be considered in terms of data processing and decision-making. According to such an approach, there are the following types of maintenance: (a) descriptive (without data processing), (b) diagnostic (with analysis of failure types), (c) predictive (with forecasting of possible failures), (d) prescriptive (with implementation of preventive actions to avoid the failure).

1.2 Motivation

New technologies of maintenance correspond to the doctrine of data-driven decision-making (DDDM). According to [5], predictive DDDM methods contain the following steps: parameter identification, statistical data collection and preprocessing, learning methods, parameter estimation and predicting what will happen, efficiency analysis and using the prediction result to adapt the decision-making. The possibility of data monitoring and collecting during radio equipment operation motivates the implementation of new effective techniques of data processing. The maintenance cost substantially exceeds the initial cost of radio equipment. This motivates improving maintenance strategies in terms of minimizing operational costs and using advanced data processing methods.

1.3 Contribution

This research contributes to the development of reliability theory for radio equipment in terms of operation process improvement. This improvement consists of operational data trend analysis and the synthesis of a methodology for calculating the preventive threshold under the condition of minimal operational cost.


1.4 The Organization of the Paper

The paper consists of seven sections. The first section is the introduction. The second section concentrates on the analysis of related works and the problem statement. The third section describes the model of radio equipment technical condition deterioration. The fourth section considers the method of optimal preventive threshold calculation. The fifth section presents results and discussions. The sixth section is the conclusion. The seventh section describes future research directions.

2 Literature Review and Problem Statement

Maintenance is the combination of all technical and administrative actions during the life cycle of equipment intended to retain it in (or restore it to) a state in which it can perform the required function [6]. During maintenance, two types of datasets can be collected: data on reliability parameters [7] and data on diagnostic variables [8, 9]. The main aim of this data collection is to assure adequate performance consistent with minimal maintenance costs [10]. In the case of radio equipment operation, there are the following scientific tasks: analysis of equipment functioning in degraded condition and operational costs optimization [11, 12]. Different methods of solving such problems are considered in the publications [13–22]. A mathematical model of condition-based maintenance with imperfect condition monitoring conducted at discrete times and an entropy-based approach to find the optimal maintenance policy is presented in [13]. The same methodology obtained on the multi-optional basis is described in [14]. The paper [15] concentrates on the calculation of the optimal maintenance threshold and periodicity during condition monitoring to provide the given level of operational reliability at minimal maintenance costs. The paper [16] discusses the problem of calculating the optimal inspection schedule and the optimal replacement threshold for a deteriorating system based on the use of the properties of the Markov chain. A multi-state Markov deteriorating system is also described in [17]. The optimal maintenance policy for critical systems according to the estimation of the remaining useful life of equipment is considered in [18] using a multi-stage degradation model based on the Wiener process, and in [19] using the renewal theory and a discrete event-driven simulation algorithm. The paper [20] concentrates on a degradation process in the form of a Gamma process, for which the optimal preventive maintenance threshold is searched. Reliability-based preventive maintenance with the methodology of reliability threshold optimization is considered in [21, 22]. Publication analysis shows that average specific operational costs can be used as an efficiency measure for maintenance process optimization. So mathematically, the goal of this research can be presented in the following way:

$$V_{P\,opt} = \arg\min_{V_P} \overline{C_{T\Sigma}}\left(V_P \mid \vec{M}, V_O, T_{PM}, T_M, T_R, C_M, C_R\right),$$


where $V_P$ and $V_{P\,opt}$ are a preventive threshold and its optimal value, $V_O$ is an operational threshold, $\overline{C_{T\Sigma}}$ is the average specific operational cost, $\vec{M}$ is a vector describing probabilistic models of the diagnostic variable trend and degradation process, $T_{PM}$ is the duration necessary for preventive maintenance planning and implementation, $T_M$ and $T_R$ are the durations of maintenance and repair, and $C_M$ and $C_R$ are the costs of maintenance and repair.

3 Deterioration Model Description

Consider the case of linear deterioration of the diagnostic variable $y(t)$. For this situation, the diagnostic variable can be presented as follows:

$$y(t) = x(t) + Z_0 h(t) + v(t - t_{sw})h(t - t_{sw}) + \vartheta(t) + \sum_{i=1}^{n} a_i h(t - t_{fi}),$$

where $x(t)$ is a random component of the diagnostic variable that takes into account inaccuracy of the description; $\vartheta(t)$ is a noise component due to the errors of measuring equipment; $h(t)$ is the Heaviside step function; $Z_0$ is the initial value of the diagnostic variable (after maintenance or repair implementation); $t_{sw}$ is the time moment of changepoint occurrence; $v$ is a deterioration parameter; $t_{fi}$ and $a_i$ are the random time moment of sudden failure or damage occurrence and the drift in the change of the diagnostic variable associated with it. Assume that regulatory documentation establishes tolerances for possible changes of the diagnostic variable $y(t)$ in the form of lower and upper operational thresholds $V_{O-}$ and $V_{O+}$. Let the preventive thresholds be $V_{P-}$ and $V_{P+}$. Suppose that a sudden failure does not occur during the observation interval and the random component of the diagnostic variable is neglected, so $x(t) = 0$. Then the diagnostic variable can be presented as follows:

$$y(t) = Z_0 h(t) + v(t - t_{sw})h(t - t_{sw}) + \vartheta(t).$$

This model is quite generalized; it reflects the possibility of deterioration in the trend of the diagnostic variable change associated with the random moment $t_{sw}$. If $V_{O-} \le y(t) \le V_{O+}$, then the radio equipment is serviceable; otherwise, a failure occurs. Suppose that radio equipment repair is carried out after failure occurrence; after that, the diagnostic variable returns to the initial value $Z_0$. If $V_{P+} \le y(t) \le V_{O+}$ or $V_{O-} \le y(t) \le V_{P-}$, then deterioration of the radio equipment occurs. In this case, preventive maintenance is implemented, after which the diagnostic variable again equals the initial value $Z_0$. According to these assumptions, the values of the diagnostic variable can be in one of three intervals:

(1) normal operation, when $V_{P-} \le y(t) \le V_{P+}$;
(2) deterioration detection (implementation of preventive maintenance), when $V_{P+} \le y(t) \le V_{O+}$ or $V_{O-} \le y(t) \le V_{P-}$;
(3) non-serviceable condition, when $y(t) > V_{O+}$ or $y(t) < V_{O-}$.

Fig. 1 Graphical explanation of data processing for the maintenance strategy
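To make the model concrete, the following sketch simulates the simplified diagnostic variable and classifies it against the three intervals above (only the upper thresholds are checked, for brevity). The parameter values follow the simulation setup of Sect. 5 ($Z_0 = 0$, $v = 1$, $\sigma(\vartheta) = 12$, $m_1(t_{sw}) = 200$, $V_{O+} = 500$), while the preventive threshold $V_{P+} = 400$ is an illustrative assumption.

```python
# Simulate y(t) = Z0 h(t) + v (t - t_sw) h(t - t_sw) + noise and classify it.
import numpy as np

rng = np.random.default_rng(0)
Z0, v0, t_sw, sigma = 0.0, 1.0, 200.0, 12.0  # values from the Sect. 5 setup
VP, VO = 400.0, 500.0                         # VP is an assumed preventive threshold

t = np.arange(0.0, 800.0)
# the Heaviside gating h(t - t_sw) is realized via clip
y = Z0 + v0 * np.clip(t - t_sw, 0.0, None) + rng.normal(0.0, sigma, t.shape)

state = np.where(y > VO, "failure",
                 np.where(y > VP, "preventive maintenance", "normal operation"))
first_pm = t[np.argmax(y > VP)]  # first intersection of the preventive threshold
```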

During normal operation, periodic inspections are performed using information technology. In the case of a non-serviceable condition, current repairs are carried out. After their implementation, the radio equipment turns back to normal operation in terms of the monitored diagnostic variable. A graphical explanation of data processing for the maintenance strategy is shown in Fig. 1. The average specific operational costs can be presented as follows:

$$\overline{C_{T\Sigma}} = \frac{k_R}{kT} C_R + \frac{k_M}{kT} C_M, \qquad (1)$$

where $k_R$ and $k_M$ are the quantities of repair and maintenance implementations, respectively, $k$ is the total quantity of repair and maintenance implementations, and $T$ is the observation interval. The observation interval can be presented as follows:

$$T = \sum_{i=1}^{k_M} t_{Mi} + k_M T_{PM} + \sum_{i=1}^{k_R} t_i + k_R T_R,$$

where $t_{Mi}$ is the time moment when the diagnostic variable intersects the value of the preventive threshold $V_{P+}$, and $t_i$ is the time moment when the diagnostic variable intersects the value of the operational threshold $V_{O+}$. The quantities of repair and maintenance implementations can be determined using probabilities $\Pr(\cdot)$ as follows:

$$k_M = k \Pr(\Delta t > T_{PM}), \qquad k_R = k \Pr(\Delta t \le T_{PM}),$$


where $\Delta t$ is the time difference between the moments of intersection of the preventive and operational thresholds by the diagnostic variable.

4 Method of Optimal Preventive Threshold Calculation

Let the variable $v$ be deterministic, and the variables $t_{sw}$ and $\vartheta_i$ be random. Suppose that $v = v_0$ and that $t_{sw}$ and $\vartheta_i$ are normally distributed variables with expected values and standard deviations $m_1(t_{sw})$, $\sigma(t_{sw})$ and $m_1(\vartheta) = 0$, $\sigma(\vartheta)$, respectively. According to the linear deterioration model, the time moments of threshold intersection can be presented as follows:

$$t_i = \frac{V_{O+} - Z_0}{v_0} + t_{sw} - \frac{\vartheta_i}{v_0}, \qquad t_{Mi} = \frac{V_{P+} - Z_0}{v_0} + t_{sw} - \frac{\vartheta_i'}{v_0},$$

where $\vartheta_i'$ is the value of the measuring equipment error at the moment of preventive threshold intersection. Then

$$T = \sum_{i=1}^{k} t_{sw_i} + k_M \left( \frac{V_{P+} - Z_0}{v_0} + T_M \right) - \sum_{i=1}^{k_M} \frac{\vartheta_i'}{v_0} + k_R \left( \frac{V_{O+} - Z_0}{v_0} + T_R \right) - \sum_{i=1}^{k_R} \frac{\vartheta_i}{v_0},$$

$$\Delta t_i = t_i - t_{Mi} = \frac{V_{O+} - V_{P+} + \vartheta_i' - \vartheta_i}{v_0}.$$

To find the probability density function for $\Delta t_i$, it is necessary to calculate the inverse function and its derivative:

$$\vartheta_i' = v_0 \Delta t_i + \vartheta_i + V_{P+} - V_{O+}, \qquad \frac{d\vartheta_i'}{d\Delta t_i} = v_0.$$

Then

$$f(\Delta t_i) = \frac{v_0}{\sqrt{2}\sqrt{2\pi}\,\sigma(\vartheta)} \, \exp\!\left( -\frac{v_0^2 \left( \Delta t_i + \frac{V_{P+} - V_{O+}}{v_0} \right)^2}{4\sigma^2(\vartheta)} \right). \qquad (2)$$

Analysis of Eq. (2) shows that the probability density function for $\Delta t_i$ is normal with expected value $m_1(\Delta t_i) = \frac{V_{O+} - V_{P+}}{v_0}$ and standard deviation $\sigma(\Delta t_i) = \frac{\sqrt{2}\,\sigma(\vartheta)}{v_0}$. Then

$$\Pr(\Delta t \le T_{PM}) = \Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right), \qquad \Pr(\Delta t > T_{PM}) = 1 - \Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right),$$

where $\Phi(\cdot)$ is the cumulative distribution function for a normal random variable with zero mean and standard deviation equal to one. So

$$T = m_1(t_{sw})k + k\left[ 1 - \Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right) \right] \left( \frac{V_{P+} - Z_0}{v_0} + T_M \right) + k\,\Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right) \left( \frac{V_{O+} - Z_0}{v_0} + T_R \right).$$

The obtained equation is quite complex for solving the given problem using differential calculus. To find simpler analytical expressions, a piecewise linear approximation of the cumulative distribution function of the random variable $\Delta t_i$ is used:

(1) if $V_{P+} \le V_{O+} - v_0 T_{PM} - \sqrt{\pi}\,\sigma(\vartheta)$, then $\Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right) = 0$;
(2) if $V_{P+} > V_{O+} - v_0 T_{PM} + \sqrt{\pi}\,\sigma(\vartheta)$, then $\Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right) = 1$;
(3) otherwise $\Phi\!\left( \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} \right) = \frac{1}{\sqrt{2\pi}} \cdot \frac{V_{P+} + v_0 T_{PM} - V_{O+}}{\sqrt{2}\,\sigma(\vartheta)} + \frac{1}{2}$.

Then

$$T = k\left( C_1 + C_2 V_{P+} + C_3 V_{P+}^2 \right),$$

where

$$C_1 = m_1(t_{sw}) + \frac{T_M + T_R}{2} + \frac{V_{O+} - 2Z_0}{2v_0} - \frac{T_M v_0 T_{PM}}{2\sqrt{\pi}\,\sigma(\vartheta)} + \frac{T_M V_{O+}}{2\sqrt{\pi}\,\sigma(\vartheta)} - \frac{V_{O+}^2}{2v_0\sqrt{\pi}\,\sigma(\vartheta)} + \frac{T_R v_0 T_{PM}}{2\sqrt{\pi}\,\sigma(\vartheta)} - \frac{T_R V_{O+}}{2\sqrt{\pi}\,\sigma(\vartheta)} + \frac{T_{PM} V_{O+}}{2\sqrt{\pi}\,\sigma(\vartheta)},$$

$$C_2 = \frac{T_R}{2\sqrt{\pi}\,\sigma(\vartheta)} - \frac{T_M}{2\sqrt{\pi}\,\sigma(\vartheta)} + \frac{V_{O+}}{v_0\sqrt{\pi}\,\sigma(\vartheta)} - \frac{T_{PM}}{2\sqrt{\pi}\,\sigma(\vartheta)}, \qquad C_3 = -\frac{1}{2v_0\sqrt{\pi}\,\sigma(\vartheta)}.$$

Let

$$C_4 = \frac{C_R + C_M}{2} + \frac{C_R v_0 T_{PM}}{2\sqrt{\pi}\,\sigma(\vartheta)} - \frac{C_R V_{O+}}{2\sqrt{\pi}\,\sigma(\vartheta)} - \frac{C_M v_0 T_{PM}}{2\sqrt{\pi}\,\sigma(\vartheta)} + \frac{C_M V_{O+}}{2\sqrt{\pi}\,\sigma(\vartheta)}, \qquad C_5 = \frac{C_R - C_M}{2\sqrt{\pi}\,\sigma(\vartheta)},$$

then the average specific operational costs are

$$\overline{C_{T\Sigma}} = \frac{C_4 + C_5 V_{P+}}{C_1 + C_2 V_{P+} + C_3 V_{P+}^2}.$$

The optimal value of the preventive threshold is

$$V_{P+\,opt} = \sqrt{\frac{C_4^2}{C_5^2} + \frac{C_1 C_5 - C_2 C_4}{C_3 C_5}} - \frac{C_4}{C_5}. \qquad (3)$$
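A small numerical sketch of Eq. (3) is given below. It mirrors the coefficient expressions above and takes most parameter values from the Sect. 5 setup; $T_M$ and $T_R$ are not listed there, so the values used are illustrative assumptions, and the discriminant is checked because for some parameter combinations no interior optimum exists.

```python
# Evaluate C1..C5 and the closed-form preventive threshold of Eq. (3).
import math

# Sect. 5 values; TM and TR are assumed (repair taking longer than maintenance).
Z0, v0, VO, CR, CM = 0.0, 1.0, 500.0, 100.0, 50.0
TPM, sigma, m1_tsw = 100.0, 12.0, 200.0
TM, TR = 50.0, 500.0
d = 2.0 * math.sqrt(math.pi) * sigma  # recurring denominator 2*sqrt(pi)*sigma

C1 = (m1_tsw + (TM + TR) / 2 + (VO - 2 * Z0) / (2 * v0)
      - TM * v0 * TPM / d + TM * VO / d - VO ** 2 / (v0 * d)
      + TR * v0 * TPM / d - TR * VO / d + TPM * VO / d)
C2 = (TR - TM - TPM) / d + VO / (v0 * math.sqrt(math.pi) * sigma)
C3 = -1.0 / (2 * v0 * math.sqrt(math.pi) * sigma)
C4 = (CR + CM) / 2 + (CR - CM) * (v0 * TPM - VO) / d
C5 = (CR - CM) / d

disc = (C4 / C5) ** 2 + (C1 * C5 - C2 * C4) / (C3 * C5)
if disc >= 0:
    VP_opt = math.sqrt(disc) - C4 / C5
    print(f"optimal preventive threshold VP+ ~ {VP_opt:.1f}")
else:
    print("no interior optimum for these parameters")
```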

5 Results and Discussions


The analysis of the proposed method of preventive threshold calculation was carried out based on statistical simulation. Figure 2 shows the results of statistical simulation for the initial data set: $Z_0 = 0$, $v = 1$, $V_{O+} = 500$, $V_{O-} = -500$, $C_R = 100$, $C_M = 50$, $T_{PM} = 100$; $\vartheta$ is a normally distributed random variable with zero mean and standard deviation $\sigma(\vartheta) = 12$; $t_{sw}$ is a normally distributed random variable with expected value $m_1(t_{sw}) = 200$ and standard deviation $\sigma(t_{sw}) = 60$; the time between repairs is an exponentially distributed random variable with parameter $\lambda = 0.005$. According to Fig. 2, the following conclusions can be made. The use of a data processing algorithm that implements a condition-based maintenance strategy with diagnostic variable monitoring increases the average time between failures in the case of decreasing the value of the preventive threshold. Analysis of the dependence of average specific operational costs on the preventive threshold value shows that the minimum of the function $\overline{C_{T\Sigma}}$ exists for the condition-based maintenance strategy of radio equipment.


Fig. 2 Estimates of steady-state availability and average specific operational costs for different values of preventive threshold


The nature of this minimum follows from the existence of a balance between repair and maintenance costs. To implement the proposed method of threshold calculation, it is necessary to analyze initial operational data and, first of all, to build a correct mathematical model for the diagnostic variable trend. Modern information technologies make it possible to perform online data monitoring and processing, which gives the possibility to adapt the radio equipment to operational conditions with subsequent recalculation of the preventive threshold at each step of data measurement.

6 Conclusion

The paper considers the problem of optimal preventive threshold calculation in the case of radio equipment maintenance. The main attention is paid to the improvement of the condition-based maintenance strategy with a preventive threshold. This strategy prevents possible failures and, consequently, reduces operational costs. The analysis of the effectiveness of the maintenance strategy with a preventive threshold was carried out by analytical calculations and by statistical simulation. The analysis has shown that the functional dependence of average specific operational costs on the value of the preventive threshold always contains a minimum. The results of the research can be used during the design and improvement of data processing algorithms in the OS of radio equipment.

7 Future Scope

The future research directions are associated with:

• mathematical model building for operational data based on regression analysis, taking heteroskedasticity into account;
• improvement of maintenance strategies for radio equipment with the usage of a system of preventive thresholds and optimization of their values;
• improvement of maintenance strategies for radio equipment with the usage of adaptive thresholds;
• intelligent data processing during radio equipment operation;
• research of deteriorating radio systems with redundancy.

The main purposes of the mentioned directions are: (a) to increase the veracity of decision-making during radio equipment operation, (b) to decrease the operational costs, (c) to provide the given level of radio equipment reliability.


References

1. Hryshchenko Y (2016) Reliability problem of ergatic control systems in aviation. In: IEEE 4th international conference on methods and systems of navigation and motion control, Kyiv, Ukraine, pp 126–129. https://doi.org/10.1109/MSNMC.2016.7783123
2. Ostroumov I et al (2021) Modelling and simulation of DME navigation global service volume. Adv Space Res 68(8):3495–3507. https://doi.org/10.1016/j.asr.2021.06.027
3. Goncharenko A (2017) Aircraft operation depending upon the uncertainty of maintenance alternatives. Aviation 21(4):126–131. https://doi.org/10.3846/16487788.2017.1415227
4. Averyanova Y et al (2021) UAS cyber security hazards analysis and approach to qualitative assessment. In: Shukla S, Unal A, Varghese Kureethara J, Mishra DK, Han DS (eds) Data science and security. Lecture notes in networks and systems, vol 290. Springer, Singapore, pp 258–265. https://doi.org/10.1007/978-981-16-4486-3_28
5. Lu J, Yan Z, Han J, Zhang G (2019) Data-driven decision-making (D3M): framework, methodology, and directions. IEEE Trans Emerg Top Comput Intell 3(4):286–296. https://doi.org/10.1109/tetci.2019.2915813
6. British Standards Institution (2001) BS EN 13306: Maintenance terminology, p 31
7. Solomentsev O, Zaliskyi M, Shcherbyna O, Kozhokhina O (2020) Sequential procedure of changepoint analysis during operational data processing. In: Microwave theory and techniques in wireless communications, Riga, Latvia, pp 168–171. https://doi.org/10.1109/MTTW51045.2020.9245068
8. Solomentsev O, Zaliskyi M (2018) Correlated failures analysis in navigation system. In: IEEE 5th international conference on methods and systems of navigation and motion control (MSNMC), Kyiv, Ukraine, pp 41–44. https://doi.org/10.1109/MSNMC.2018.8576306
9. Zaliskyi M et al (2021) Heteroskedasticity analysis during operational data processing of radio electronic systems. In: Shukla S, Unal A, Varghese Kureethara J, Mishra DK, Han DS (eds) Data science and security. Lecture notes in networks and systems, vol 290. Springer, Singapore, pp 168–175. https://doi.org/10.1007/978-981-16-4486-3_18
10. Smith DJ (2005) Reliability, maintainability and risk. Practical methods for engineers. Elsevier, London
11. Solomentsev O, Zaliskyi M, Herasymenko T, Kozhokhina O, Petrova Y (2019) Efficiency of operational data processing for radio electronic equipment. Aviation 23(3):71–77. https://doi.org/10.3846/aviation.2019.11849
12. Jardine AKS, Tsang AHC (2017) Maintenance, replacement, and reliability: theory and applications, 2nd edn. CRC Press, Boca Raton
13. Raza A, Ulansky V (2019) Optimization of condition monitoring decision making by the criterion of minimum entropy. Entropy (Basel) 21(12):1193. https://doi.org/10.3390/e21121193
14. Goncharenko AV (2017) Optimal UAV maintenance periodicity obtained on the multi-optional basis. In: IEEE 4th international conference on actual problems of UAV developments, Kyiv, Ukraine, pp 65–68. https://doi.org/10.1109/APUAVD.2017.8308778
15. Ulansky V, Raza A (2017) Determination of the optimal maintenance threshold and periodicity of condition monitoring. In: 1st World congress on condition monitoring, London, UK, pp 1343–1355
16. Dieulle L, Berenguer C, Grall A, Roussignol M (2001) Continuous time predictive maintenance scheduling for a deteriorating system. In: International symposium on product quality and integrity. Annual reliability and maintainability symposium, Philadelphia, USA, pp 150–155. https://doi.org/10.1109/rams.2001.902458
17. Wang N, Sun S, Si S, Cai Z (2010) Optimal predictive maintenance policy for multi-state deteriorating system under periodic inspections. In: 2nd international workshop on intelligent systems and applications, Wuhan, China, pp 1–4
18. Du D-B, Pei H, Zhang J-X, Si X-S, Pang Z-N, Yu Y (2020) A new condition-based maintenance decision model for degraded equipment subjected to random shocks. In: Chinese control and decision conference (CCDC), Hefei, China, pp 2142–2147. https://doi.org/10.1109/CCDC49329.2020.9164729
19. Zhao F, Zhang Y, Liu X (2019) Joint optimization of spare ordering and preventive replacement policy based on RUL. In: Prognostics and system health management conference, Qingdao, China, pp 1–5. https://doi.org/10.1109/PHM-Qingdao46334.2019.8942897
20. Chuang C, Ningyun L, Bin J, Yin X (2020) Condition-based maintenance optimization for continuously monitored degrading systems under imperfect maintenance actions. J Syst Eng Electron 31(4):841–851. https://doi.org/10.23919/JSEE.2020.000057
21. Huang Y, Chen E, Ho J (2013) Two-dimensional warranty with reliability-based preventive maintenance. IEEE Trans Reliab 62(4):898–907. https://doi.org/10.1109/TR.2013.2285051
22. Yuting J, Xiaodong F, Chuan L, Zhiqi G (2014) Research on preventive maintenance strategy optimization based on reliability threshold. In: Prognostics and system health management conference (PHM-2014 Hunan), Zhangjiajie, China, pp 589–592. https://doi.org/10.1109/PHM.2014.6988240

Swarm Intelligence-Based Smart City Applications: A Review for Transformative Technology with Artificial Intelligence

Anusruti Mitra, Dipannita Basu, and Ahona Ghosh

Abstract Humans have their own intelligence and dynamics and always aspire to a sophisticated lifestyle. To achieve such a lifestyle, people tend to migrate from rural to urban civilization. By integrating human intelligence (HI) with artificial intelligence (AI), people make an effort to improve their cities with cutting-edge technologies. To keep an eye on urban civilization and its quality of life, this study focused on the amenities available in the smart city. Smart cities are composite and enormous distributed systems, distinguished by their multiplicity, that aim to cut out the use of nonrenewable resources, to manage traffic signals, to protect homes from intruders and to reduce the death rate by applying swarm intelligence-based smart city applications. This paper traverses, conceptually and practically, how the burgeoning field of AI intersects with the design of the city through numerous swarm intelligence-based algorithms. Swarm intelligence is a very promising paradigm to deal with complicated and dynamic systems. It introduces robust, scalable and self-organized behaviors to deal with dynamic and fast-changing systems. Existing literature regarding smart city development and security concerns that has been immensely benefited by employing swarm intelligence-based approaches has been reviewed thoroughly, which can also provide a research direction to future researchers in this domain.

Keywords Ant colony optimization · Artificial bee colony · Elephant herd optimization · Particle swarm optimization · Swarm intelligence · Smart city · Security · Matriarch

A. Mitra · D. Basu Department of Information Technology, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India A. Ghosh (B) Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_7


1 Introduction

The social, economic, environmental and engineering problems of this urban transformation will determine the twenty-first century. The employment of smart technology, which may increase the competence and efficacy of urban structures, may improve the lives of individuals who live in those cities and lessen the growth's effect on the environment. The incorporation of technology into the strategy to promote sustainability, human well-being and financial growth can be termed a smart city [1, 2]. The smart city provides a combined idea for bringing together creative solutions to meet the problems that modern cities face, but there are still many obstacles to overcome. The creation of control and automation systems began through wired technology and has now reached the age of wireless technology, developing Z-Wave, ZigBee, EnOcean and various other technologies. Wireless sensor networks (WSNs) often get used to collect data for intelligent environments, such as smart cities and urban planning. The digital partner is not only limited to a Building Information Model corresponding to the physical framework; agile city models assemble and integrate into the artificial metropolis: the infobahn and the network. To increase the intricacy and efficacy of a swarm intelligence-based smart city, we need to develop higher-level reasoning.

As a result, there is an increasing demand for open platforms that can integrate various sensing technologies across wireless technologies, merge datasets from disparate sources, and intelligently manage the created information. Although there are some architectures or frameworks that allow the interconnection of sensors [3], they offer limited services with very limited functionality in real deployments. As a result, the preinstallation of infrastructure limits current solutions, forcing integrators to choose between different technologies or to alter their current schemes and organization. Integrators also have a difficult time combining the data collected from different WSNs, as no sufficient tools exist for doing so. Swarm-based algorithms have become increasingly popular in recent years as a result of their ability to solve complicated issues. Swarm intelligence shows how to cope with dynamic and fast-changing systems by presenting resilient, scalable and self-organized behaviors. A swarm of digital telecommunication networks (nerves), ubiquitously embedded intelligence (brains), sensors and tags (sensory organs) and software (the knowledge and cognitive competence) can be used to mimic the intelligence of cities. Swarm intelligence-based approaches and benchmark solutions for smart cities based on swarm intelligence will be discussed in this paper. In addition, a swarm-based smart city framework will be shown. Then, to make smart towns more scalable and adaptable, a set of tendencies for applying swarm intelligence will be examined. The motivations behind considering this topic as our research challenge are summarized as follows:

(a) Swarm intelligence optimization approaches open the door to a bright future for geospatial technical advancements, and they also aid in the reduction of intrinsic fuzziness through the use of appropriate metaheuristic techniques.
(b) The greatest method for building scalable systems is decentralized control; however, coordination concerns must be handled. Swarm robots could serve as a model for resolving coordination challenges.

The contributions may be abridged as follows:

(i) Providing a pathway to find good solutions to distributed challenges through swarm intelligence's main properties, such as decentralization, resilience, adaptability, scalability, as well as self-organization and flexibility.
(ii) Disseminating recent research and development efforts in the area of smart cities and investigating trends and challenges to make cities smarter.
(iii) Showing an initial reading point to explore many related IoT-based smart city applications.

The next section describes and analyzes the state of the art in the concerned domain. The existing works have been compared by their performance evaluation, and their drawbacks or limitations have been identified to give future researchers a direction in this topic. The implementation perspectives of different swarm intelligence algorithms used in the existing literature are explained in detail in Sect. 3, and the application areas along with their prospects are described in Sect. 4. A comparative study is presented in Sect. 5. Finally, the concluding statements and the future scope in this area are presented in Sect. 6.

2 Related Works

Comfort, shelter and clean living, achieved using intelligent techniques in association with the citizens, are the foremost objectives of a smart city. Zedadra et al. aimed to disseminate recent research developments using the Internet of Things (IoT) and to explore ongoing demands for adaptable smart cities [4]. Golubchikov et al. showed their interest in city automation using techniques like artificial intelligence (AI) and robotics; automation enables sustainable urbanization [5]. Li et al. showed the continuous worldwide development of intelligent manufacturing, connecting intelligent instruments, humans and data for intelligent analysis to enable smarter decision-making technology. In their study, the capabilities of windmills, trains, multistoried buildings, gas turbines and the field of medical science were improved with data analysis and swarm intelligence-based artificial intelligence algorithms [6]. Nikitas et al. described a few essential transport elements that are expected to be central to the AI-centric smart city of the near future. The bond of AI with the smart city and transportation relates to connected and autonomous vehicles and to unmanned and personal aerial vehicles, which influence transport in different ways as well as smart city goals. A vehicle that can manage its movement, navigation and behavior and perceive the surrounding area without human help, and that has connectivity to enable cooperative and collective driving, is known as a connected and autonomous vehicle. Unmanned aerial vehicles are used in military and secret missions with a remote pilot, whereas personal aerial vehicles cut down the gap between the ground and hovering airlines at our own pace in urban development [7]. Serrano has worked with big data, IoT and artificial intelligence integrated as a service to build smart cities [8]. Ragavan et al. discussed some major problems of road congestion caused by the day-by-day increase in population and urbanization; to overcome these problems, they used the Internet of Vehicles to determine the traffic intensity and the fastest way to reach the destination [9]. Ramadan et al. proposed sinkhole detection with a WSN as the main network, where an intruder affects the network traffic, which is a big problem for a smart city. In this scheme, the network is subdivided into different groups; each group has a source node, and message transmission is initiated by the proposed source node [10]. Barns et al. proposed the concept of a smart city that helps cities become resilient and livable through enhancements in transformative technology [11]. Koenig et al. integrated artificial intelligence with Internet of Things-enabled devices, which customized a different production for downtown areas. Those manufacturers enhanced accessible resources, like self-driving cars, and uplifted sustainable ways by designing lanes for bicycles and walking pathways, thus reducing fossil fuel consumption [12]. When a city develops rapidly, it is necessary to look after the city to retain the changes. Li et al. juxtaposed various support vector machine-based development methods for detecting damages, with a Gaussian radial basis function to analyze the reasons for damage in a smart city [13].

3 Swarm Intelligence Algorithms

This section discusses the working mechanisms of swarm intelligence algorithms applied in recent smart city-based transformative technologies.

3.1 Artificial Bee Colony Optimization

This algorithm, abbreviated as ABC, is a high-level problem-independent algorithm in which the artificial bees of a colony co-operate in finding good solutions to optimization problems. The algorithm helps to ensure the quality of the food supply, as shown in Fig. 1. In a smart city, it is important to improve the standard of living; for that, we need to spot certain electronic crimes and security incidents [8]. There are three groups in ABC: employed bees, which deal with specific food sources; onlooker bees, which watch the dance of the employed bees; and scout bees, which search for food sources randomly.


Fig. 1 Working mechanism of artificial bee colony optimization
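A compact sketch of the ABC loop with its employed, onlooker and scout phases is shown below on a toy objective; the colony size, abandonment limit and bounds are illustrative assumptions.

```python
# Minimal ABC: greedy neighbor search (employed), fitness-weighted revisits
# (onlooker) and random restarts of stale sources (scout).
import numpy as np

rng = np.random.default_rng(1)

def abc_minimize(f, dim=2, n_sources=10, limit=20, iters=100, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (n_sources, dim))    # food sources (solutions)
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbor(i):
        k = rng.choice([j for j in range(n_sources) if j != i])
        phi = rng.uniform(-1, 1, dim)
        cand = np.clip(X[i] + phi * (X[i] - X[k]), lo, hi)
        fc = f(cand)
        if fc < fit[i]:                           # greedy replacement
            X[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                # employed-bee phase
            try_neighbor(i)
        p = fit.max() - fit + 1e-12
        p /= p.sum()                              # onlookers favor better sources
        for i in rng.choice(n_sources, n_sources, p=p):
            try_neighbor(i)
        worn = trials > limit                     # scout phase: abandon stale sources
        X[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [f(x) for x in X[worn]]
        trials[worn] = 0
    return X[fit.argmin()], fit.min()

best_x, best_f = abc_minimize(lambda x: float(np.sum(x ** 2)))
```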

3.2 Ant Colony Optimization

This approach, abbreviated as ACO, is a popular approach for problems such as train scheduling, capacity planning and much more, where a faultless path has to be detected, as shown in Fig. 2. The optimization algorithm is inspired by the foraging behavior in which ants look for a path between their colony and a food source. The ants search for food here and there, and when they sense food, a group of ants communicates with each other and deposits some pheromone on the ground so that other ants can follow the way to the food easily. This optimization can be used to control smart traffic signals in a smart city: ACO has been used with the Internet of Vehicles to determine the traffic intensity and the fastest way to reach the target [8].

Fig. 2 Working mechanism of ant colony optimization

In a smart city, the city is divided into several segments of equal size, and ACO is applied to the segments to find the best way to reach the destination without congestion on the roads. ACO is population-based and works independently as a high-level procedure; it is used to solve complex problems. In this algorithm, a group of agents, implemented in software as artificial ants, searches for good solutions. The algorithm depends on various parameters: the number of ants in a population $p$, a factor for controlling the relative importance of the pheromone $I$, and a factor for controlling the relative importance of the local heuristic $f$.

Algorithm. A graph $g$ contains $v$ vertices and $e$ edges, which form a connected graph. The purpose is to find the shortest path from the colony to the food sources. In ACO, there are three equations to simulate ant foraging. Equation (1) denotes pheromone deposition:

$$\tau_{pq} = \tau_{pq} + \Delta\tau_{pq}, \qquad (1)$$

where $p$ represents the present state of the ants and the ants are moving toward $q$; $\tau_{pq}$ represents the value of the pheromone on the connection between $p$ and $q$.

$$\tau_{pq} = (1 - \lambda)\,\tau_{pq} \qquad (2)$$

Equation (2) represents the evaporation of pheromone with time, where $(1 - \lambda)$ is a static factor modeling the decay of the pheromone deposited by an artificial ant hopping from one place to another.

$$X_{pq} = \begin{cases} \dfrac{\tau_{pq}}{\sum_{q' \in N_p} \tau_{pq'}} & \text{if } q \in N_p \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

Equation (3) gives the probability of an ant at vertex $p$ moving to vertex $q$, where $N_p$ is the feasible neighborhood of $p$; the algorithm is sensitive to the pheromone changes.
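A minimal sketch following Eqs. (1)–(3) is given below: probabilistic edge choice from pheromone, evaporation, then deposit on traversed edges. The small demo graph (node 0 as colony, node 3 as food source) and the constants are illustrative assumptions.

```python
# Minimal ACO on a tiny directed graph; shorter paths accumulate more pheromone.
import numpy as np

rng = np.random.default_rng(2)

# adjacency: graph[p] = list of (q, edge_length); node 0 = colony, 3 = food
graph = {0: [(1, 1.0), (2, 4.0)], 1: [(3, 3.0)], 2: [(3, 1.0)], 3: []}
tau = {(p, q): 1.0 for p in graph for q, _ in graph[p]}  # initial pheromone
lam, n_ants, iters = 0.1, 20, 50

def walk():
    p, path = 0, []
    while p != 3:
        nbrs = graph[p]
        w = np.array([tau[(p, q)] for q, _ in nbrs])
        q, length = nbrs[rng.choice(len(nbrs), p=w / w.sum())]  # Eq. (3)
        path.append((p, q, length))
        p = q
    return path

for _ in range(iters):
    paths = [walk() for _ in range(n_ants)]
    for e in tau:
        tau[e] = (1 - lam) * tau[e]              # Eq. (2): evaporation
    for path in paths:
        cost = sum(l for _, _, l in path)
        for p, q, _ in path:
            tau[(p, q)] += 1.0 / cost            # Eq. (1): deposit
```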

3.3 Elephant Herd Optimization

This algorithm, abbreviated as EHO, is a swarm intelligence algorithm, as shown in Fig. 3. Elephants are organized into clans, and a matriarch is the leader. The algorithm updates the entire clan and separates the worst elephants. In terms of elephant behavior, the algorithm divides into two parts: one is the clan updating part and the other is the separating part. Based on the population of a city, the searching capabilities of EHO increase.

Fig. 3 Working mechanism of elephant herd optimization


Clan Updating Operator. The people of a smart city are compared with a group of elephants. Using the concept of EHO, one can determine to which part of the city people tend to move more. When elephants, in this case people, are moving together in the clan, or here in the group, the population of a city can be measured.

Separating Operator. When male elephants hit a certain age or puberty, they leave their clan and start living on their own. Analogously, in a swarm intelligence-based smart city, when the city starts growing rapidly, people move in search of food, jobs and a better lifestyle and get separated from their family or zone.
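The sketch below implements the two operators in the simplified form commonly cited in the EHO literature (not given explicitly in this paper); the scale factors alpha and beta and the toy objective are illustrative assumptions.

```python
# One EHO generation for a single clan: clan updating toward the matriarch,
# matriarch moved toward the clan center, worst elephant separated.
import numpy as np

rng = np.random.default_rng(3)

def eho_step(clan, f, alpha=0.5, beta=0.1, lo=-5.0, hi=5.0):
    order = np.argsort([f(x) for x in clan])
    clan = clan[order]                              # best (matriarch) first
    best, center = clan[0], clan.mean(axis=0)
    new = clan + alpha * (best - clan) * rng.random(clan.shape)  # clan updating
    new[0] = beta * center                          # matriarch update
    new[-1] = rng.uniform(lo, hi, clan.shape[1])    # separating: worst leaves
    return np.clip(new, lo, hi)

clan = rng.uniform(-5, 5, (8, 2))
for _ in range(100):
    clan = eho_step(clan, lambda x: float(np.sum(x ** 2)))
```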

3.4 Particle Swarm Optimization

This algorithm, abbreviated as PSO, is an optimization algorithm based on swarms in nature. Here, the population of a city is considered as particles, as shown in Fig. 4. The algorithm works in a few steps. Step 1: Define the population size in a city, the number of iterations and the learning factors. Step 2: Randomly initialize the personal best solutions. Step 3: For each step, compute the cost corresponding to each solution and update the personal best; at the end of each step, find the global best solution and update its value. Step 4: Compute the velocity vector and position vector for the next step.

Fig. 4 Particle swarm optimization
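A minimal PSO sketch matching the four steps above is given below; the inertia weight and learning factors are illustrative assumptions.

```python
# Minimal PSO: velocity update from personal and global bests, then move.
import numpy as np

rng = np.random.default_rng(4)

def pso_minimize(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0):
    x = rng.uniform(lo, hi, (n, dim))            # Steps 1-2: initialization
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        cost = np.array([f(p) for p in x])       # Step 3: evaluate, update bests
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()]
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Step 4
        x = np.clip(x + v, lo, hi)
    return gbest, pcost.min()

best, val = pso_minimize(lambda p: float(np.sum(p ** 2)))
```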


4 Application Areas

Many ingenious implementations have come into force from the collaboration of artificial intelligence with swarm intelligence, which is a blessing for a smart city.

4.1 Application Technologies of Intelligent Manufacturing

The application areas of swarm intelligence algorithms in intelligent manufacturing are discussed in this section.

Smart traffic. More people tend to live in smart cities, and thus, to control the resulting excessive traffic, software-defined networking (SDN) is used. SDN virtually dissociates control from the hardware by moving the control plane, which decides where to send traffic, into software, while the data plane redirects the traffic in the appliances [6]. Chamoso and De La Prieta proposed a traffic locating and regulating system for the smart city [14] in which layer 0 of the architecture has been designed to receive data from different protocols; communication standards were utilized to serve the layer above. Layer 1 converts all the previous layer's data to the MQTT protocol. Thus, at this level, it does not matter how the sensors gathered the data; it is the data itself that needs to be encapsulated in order for low-level services to provide it to the top layer in a well-defined manner. Heavyweight agents are found in the following layer and are structured in virtual organizations (VOs); as a result, the system is separated into simpler and less complex structures. The information is received by the heavyweight agents in this layer. They have more capabilities than lightweight agents and are IF specialists. Their job is to relay the information gathered from other agents in this layer, perform the appropriate data transformations and serve the result to the layer 3 agents. Furthermore, in this situation, there is a VO made up of agents that use statistical methodologies; in particular, for this case study, the ant-based control algorithm, which is based on hierarchical ants, is implemented by agents, and it may be applied to vast networks, such as metropolitan streets.

Smart transport. Smart transportation systems implement budget transportability for every person in the community [7]. The concordance of transport professionals and urban planners enables sustainable urban transport and a smart city using user-centric tools.

Smart home. Smart home systems integrate with home automation using artificial intelligence algorithms. Whether we are at home or away from home, we have access to our smart devices like the television, AC or refrigerator, and if we wish, we can switch them on or off using only a smartphone or a smart band, which is feasible [6].


4.2 Automated Digitation

The various components of automated digitation, where swarm intelligence has been applied to develop improved systems, are as follows.

Security system. Security systems safeguard and electronically supervise the smart architecture in case of menace, diminishing danger and vulnerability and deterring potential intruders. Vigilance can be carried out using video analytics, which enables automatic alarm propagation and is broadly endorsed as face recognition technology. This is a transformative technology of closed-circuit television (CCTV) and video surveillance systems (VSS) with artificial intelligence. Biometric fingerprint recognition via smart card and smart card certificates denies entry to trespassers. In large spaces where there are people with different priorities, security zones are enabled. In special situations, intruder detection monitors are used to detect physical attackers. The main sensors are based on such resources as infrared, sound, pressure and volumetric sensing [8].

Voice and telephony. Telephony systems conduct mostly full-duplex communication between two users, where artificial intelligence may reduce a person's input in accepting the call. In a smart infrastructure, an intercom allows users to communicate with a security arranger or security director to request access from any floor or any position of a building, such as a terrace or a parking area. Intercoms use the Session Initiation Protocol (SIP) of voice-over-Internet-protocol blended into a laptop or phone [8].

Emergency system. A sustainable ecosystem brings down emissions, and squeaky-clean cities greatly increase the standard of living, happiness and commercial prosperity. Smart cities have high-rise buildings from which it is not possible to exit manually when a fire breaks out. Voice alarm and fire systems provide emergency artificial intelligence-enabled control. Voice alarm (VA) systems lay out exigency announcements and give control in case of a fire blaze. The fire system monitors smoke using heat sensors or a sprinkler system. It supplies audio-visual alarms to make occupants aware of fire and start an evacuation. The fire system and voice alarm systems are directly linked; consequently, when fire is detected, a smart alert is automatically activated.

Power grids. Smart cities and artificial intelligence (AI) have the prospect of escalating the safety of the power grid. AI upgrades the performance of power grid management. In a smart and large city, integrating a smart grid with computational technology makes the lifestyle easier. This integration helps in smart meter reading over large areas, and with this data, it can predict the load and the time of the day when electricity is required most. The prediction models are also able to determine the cost for that specific use.


Fig. 5 Percentages of literature using different optimizing SI approaches

5 Comparative Study

In the state-of-the-art literature, it is observed that the application of swarm intelligence (SI) in the smart city suffers from local optima and convergence speed issues. To improve the performance, most of the approaches have considered parameter hyper-tuning, combination with machine learning algorithms, or combination with other swarm intelligence algorithms. Figure 5 shows the respective percentages.

6 Conclusion

The possibility of using the swarm intelligence concept and approaches to make smart cities smarter and more scalable has been examined in this research. According to the presented vision, objects could be assimilated to creatures with very low capacities as individuals, resulting in high-level and sophisticated actions at the collective level. However, many research difficulties must yet be solved, and appropriate middleware infrastructures must be established, in order to achieve this goal.

References

1. Nam T, Pardo TA (2011) Conceptualizing smart city with dimensions of technology, people, and institutions. In: Proceedings of the 12th annual international digital government research conference: digital government innovation in challenging times, pp 282–291. ACM
2. Renuka N, Nan NC, Ismail W (2013) Embedded RFID tracking system for hospital application using WSN platform. In: 2013 IEEE international conference on RFID-technologies and applications (RFID-TA), pp 1–5. IEEE
3. Daniel F, Eriksson J, Finne N, Fuchs H, Gaglione A, Karnouskos S, Voigt T et al (2013) makeSense: real-world business processes through wireless sensor networks. In: CONET/UBICITEC, pp 58–72
4. Zedadra O, Guerrieri A, Jouandeau N, Spezzano G, Seridi H, Fortino G (2019) Swarm intelligence and IoT-based smart cities: a review. The internet of things for smart urban ecosystems, pp 177–200


5. Golubchikov O, Thornbush M (2020) Artificial intelligence and robotics in smart city strategies and planned smart development. Smart Cities 3(4):1133–1144
6. Li BH, Hou BC, Yu WT, Lu XB, Yang CW (2017) Applications of artificial intelligence in intelligent manufacturing: a review. Front Inf Technol Electron Eng 18(1):86–96
7. Nikitas A, Michalakopoulou K, Njoya ET, Karampatzakis D (2020) Artificial intelligence, transport and the smart city: definitions and dimensions of a new mobility era. Sustainability 12(7):2789
8. Serrano W (2018) Digital systems in smart city and infrastructure: digital as a service. Smart Cities 1(1):134–154
9. Ragavan K, Venkatalakshmi K, Vijayalakshmi K (2021) Traffic video-based intelligent traffic control system for smart cities using modified ant colony optimizer. Comput Intell 37(1):538–558
10. Ramadan RA (2020) Efficient intrusion detection algorithms for smart cities-based wireless sensing technologies. J Sens Actuator Netw 9(3):39
11. Barns S (2016) Mine your data: open data, digital strategies and entrepreneurial governance by code. Urban Geogr 37(4):554–571
12. Koenig R, Miao Y, Knecht K, Buš P, Mei-Chih C (2017) Interactive urban synthesis. In: International conference on computer-aided architectural design futures, pp 23–41. Springer, Singapore
13. Li R, Gu H, Hu B, She Z (2019) Multi-feature fusion and damage identification of large generator stator insulation based on Lamb wave detection and SVM method. Sensors 19(17):3733
14. Chamoso P, De La Prieta F (2015) Swarm-based smart city platform: a traffic application. ADCAIJ: Adv Distribut Comput Artif Intell J 4(2):89–97

Performance Evaluation of Machine Learning Classifiers for Prediction of Type 2 Diabetes Using Stress-Related Parameters

Rohini Patil and Kamal Shah

Abstract Diabetes mellitus is a concern all over the world, and early prediction is necessary to prevent the complications associated with it. The authors evaluated the performance of six classifiers, namely support vector machine (SVM), random forest (RF), decision tree (DT), logistic regression (LR), gradient boosting classifier (GBC) and K-nearest neighbor (KNN), for early prediction of type 2 diabetes. The accuracy of the models was calculated using feature selection, and optimization was done using the grid search method. The performance measures precision, recall, F1-score and area under the receiver operating characteristics (AUROC) curve were calculated. After applying several feature selection methods, SVM provided an accuracy of 82.14%. After applying the grid search technique, GBC and SVM both reached 86.67% accuracy. The precision and recall values of SVM and GBC were the highest, i.e., 0.87. Overall, GBC and SVM had the best precision, recall and F1-scores. According to the ROC curve analysis, the best performance was observed with SVM and RF.

Keywords Type 2 diabetes · Stress · Machine learning

1 Introduction 1.1 Diabetes Mellitus a Global Problem Diabetes is one of the most challenging chronic diseases from a psychosocial and behavioral perspective. It is a chronic, metabolic disease which is characterized by a raised level of blood sugar. The prevalence of diabetes is predicted to rise from 9.3% in 2019 to 10.2% by 2030 [1]. Factors including genetics, growing age, race, ethnicity and lifestyle increase the risk of type 2 diabetes. R. Patil (B) · K. Shah Thakur College of Engineering and Technology, Mumbai, India e-mail: [email protected] R. Patil Terna Engineering College, Navi Mumbai, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_8


1.2 Stress and Diabetes Health is not merely the condition of being free from illness, injury or pain; it is the overall state of wellness of a person on all levels [2]. Stress is a state of emotional strain arising from fatigue and emotional tension, and it is a factor that may trigger the development of type 2 diabetes. Due to the modern way of living, stress is placing a huge burden on health care. Sleep deprivation, poor eating habits and a sedentary lifestyle have contributed to the growth of lifestyle diseases. ML is a division of artificial intelligence (AI) which learns from prior examples and identifies patterns in large, noisy or complex datasets that can be used to formulate hypotheses [3]. ML is a challenging area in healthcare research. With increasing advances in machine learning, it helps to diagnose disease at an early phase and reduce the burden of disease and its complications. It is important to identify the relevant attributes used for prediction. In this regard, feature selection helps to remove unimportant variables and improves the performance of classification.

2 Related Work Kelly et al. and Lloyd et al. have shown that stress affects diabetes through physiological and behavioral pathways, and identified stressful working conditions as one of the risk factors of type 2 diabetes [4, 5]. Martinez et al. also showed that stress affects diabetes; their study found that the risk of T2D is increased by stressful working conditions, depression, personality traits or mental health problems [6–8]. A 12-year longitudinal study in women showed that stress levels were associated with a higher risk of diabetes 3 years later [9]. Kumar et al. developed a model for predicting the occurrence of anxiety, depression and stress by applying 8 machine learning algorithms to the DASS-42 tool and showed a good accuracy of 90.40% for the hybrid model [10]. Priya et al. predicted anxiety, depression and stress on five severity levels using five different machine learning algorithms; in their study, the random forest model showed the best results [11]. Papini et al. demonstrated post-traumatic stress disorder prediction through an ensemble technique [12]. Sanchez et al. highlighted the importance of stress in job-related functions and developed a model for recognizing stress, in which RF showed good accuracy [13]. Kiranashree et al. worked on a similar approach, analyzed stress among employees and developed a machine learning model for stress detection using physiological variables; SVM provided the highest accuracy of 96.67% [14]. Ahuja and Banga worked on a model for identifying stress among university students and showed that SVM gives an accuracy of 85.71% [15]. Sneha and Gangil developed a model using an optimal feature selection algorithm. According to the results of their study, RF and DT have the highest specificity of 98.00% and 98.20%, whereas Naïve Bayes has the best accuracy of 82.30% [16]. Kumari et al. used a soft voting approach for diabetes prediction. In this study, the


proposed algorithm provided an accuracy of 79.09% [17]. Chen et al. used three different feature selection methods and showed that RF provided the best result among them [18]. The review by Faraz et al. highlighted the importance of stress and its early detection using machine learning techniques [19]. Lama et al. performed research on middle-aged people using machine learning and highlighted the importance of stress along with BMI, diet and tobacco consumption [20].

3 Methodology 3.1 Data Description In this research, data were collected from the general adult population aged more than 18 years using a pre-developed, prevalidated questionnaire. The questionnaire consisted of items related to demographic parameters, including age, gender, weight, height and body mass index (BMI), and stress-related factors. Anxiety, workload, poor salary, deadline, unplanned work, travel, repetitive work, career, job security, powerlessness and no satisfaction, together with the class label, were the other collected features. The stress features were set to values from 1 to 5 as low, medium, average, high and very high.

3.2 Model Architecture A machine learning approach along with different feature selection techniques was used. A predictive model using six machine learning classifiers, namely support vector machine (SVM), random forest (RF), decision tree (DT), logistic regression (LR), gradient boosting classifier (GBC) and K-nearest neighbor (KNN), was built. The methodology was divided into three phases. In phase I, preprocessing of data was done. All six models were trained on 80% of the data, and the remaining 20% of the data was used for testing purposes. The accuracy of the different machine learning models was calculated for the training and testing datasets separately. In order to improve the assessment of model performance, a tenfold cross-validation technique was applied to all six models. In phase II, the study used feature selection techniques, namely the filter methods Select-K-Best, feature importance, information gain and correlation, and the hybrid method recursive feature elimination (RFE). The number of features providing the highest accuracy for each model under all five feature selection methods was noted along with the highest accuracy. In phase III, model optimization was done using the grid search method for all selected classifiers. The optimal hyperparameters were ranked based on their accuracy and the area under the receiver operating characteristic curve (ROC-AUC).
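A minimal scikit-learn sketch of this three-phase pipeline is given below. The file name, column names and hyperparameter grid are illustrative assumptions, not the authors' exact configuration; only SVM with Select-K-Best is shown, as a stand-in for the full set of classifiers and selectors.

```python
# Sketch of the three-phase pipeline described above, assuming a CSV version
# of the LS_Diabetes dataset with a binary "diabetes" label. Column names and
# the hyperparameter grid are illustrative, not the authors' configuration.
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC

df = pd.read_csv("ls_diabetes.csv")                    # hypothetical file name
X = df.drop(columns=["diabetes"])
y = LabelEncoder().fit_transform(df["diabetes"])       # phase I: label encoding

# Phase I: 80/20 split and tenfold cross-validation of a baseline model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
baseline = SVC()
print("10-fold CV accuracy:", cross_val_score(baseline, X_train, y_train, cv=10).mean())

# Phase II: filter-based feature selection (Select-K-Best shown; the paper
# also tries feature importance, information gain, correlation and RFE).
selector = SelectKBest(chi2, k=11).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

# Phase III: grid search over hyperparameters, ranked by accuracy.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
                    cv=10, scoring="accuracy")
grid.fit(X_train_sel, y_train)
print("best params:", grid.best_params_, "test acc:", grid.score(X_test_sel, y_test))
```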


Fig. 1 Process flow of research

Overall methodology used in our research is shown in Fig. 1. The algorithm is illustrated below.

Algorithm:
1: procedure LabelEncoder(LS_Diabetes dataset)
2:   return LS_Diabetes dataset
3: procedure Split_Data(LS_Diabetes dataset)
4:   Train_data, Test_data = split(attributes, label_var)
5:   return Train_data, Test_data
6: procedure L1_Crossvalidation(LS_Diabetes dataset, clf, K = 10)
7:   return
8: procedure FeatureSelector(LS_Diabetes dataset, F)
9:   F = (Select-K-Best | Feature importance | Correlation | Information gain | Recursive feature elimination)
10:  return F
11: procedure L2_Crossvalidation(clf, F)
12:  return
13: procedure GridSearchEstimator(clf, F, parameter grid)
14:  return estimator
15: procedure L3_Crossvalidation(clf, F, estimator)
16:  return

Performance measures including precision, recall and F1-score were calculated, and the receiver operating characteristic (ROC) curve was plotted to check the robustness and efficiency of the algorithms. The results of the six models were compared based on these performance measures.
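Continuing the sketch above, these measures can be obtained with scikit-learn's built-in reports; `grid`, `X_test_sel` and `y_test` are the hypothetical objects from the previous snippet.

```python
# Precision, recall, F1-score and ROC analysis for the tuned model, reusing
# the hypothetical objects from the previous snippet.
from sklearn.metrics import classification_report, roc_auc_score, RocCurveDisplay

y_pred = grid.predict(X_test_sel)
print(classification_report(y_test, y_pred, digits=2))  # precision/recall/F1

# ROC analysis (needs a scoring function; decision_function works for SVC).
scores = grid.decision_function(X_test_sel)
print("AUROC:", roc_auc_score(y_test, scores))
RocCurveDisplay.from_predictions(y_test, scores)        # plots the ROC curve
```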

Fig. 2 Accuracy comparison of classification algorithms (classification accuracy % of the training, testing and K-fold evaluations for the LR, RF, GBC, DT, SVM and KNN classifiers)

4 Result and Analysis The proposed methodology used six machine learning models. The research used the LS_Diabetes dataset generated through a survey-based questionnaire. The study included 374 participants with 16 feature columns. Of these, 186 were males and 88 had diabetes. The training accuracy of RF was 100%, whereas those of LR, GBC, DT, SVM and KNN were 81%, 98%, 82%, 83% and 82%, respectively (Fig. 2). The testing accuracy of LR, RF, GBC, DT, SVM and KNN was 84%, 82.67%, 85.33%, 80%, 81.33% and 81.33%, respectively. The respective K-fold accuracy for these algorithms was 78.39%, 76.27%, 77.36%, 76%, 80.28% and 76.56%. Table 1 shows the number of selected features along with the accuracy based on the classifier and feature selection method. The accuracy comparison after applying feature selection and model optimization is shown in Fig. 3. After feature selection, LR, RF, GBC, DT, SVM and KNN provided accuracies of 81.34%, 78.37%, 81.08%, 77.08%, 82.14% and 80.54%, respectively. After applying the grid search technique, the respective accuracies were 85.33%, 81.33%, 86.67%, 82.67%, 86.67% and 80.26%. Table 2 shows the performance measures of the selected classifiers. The precision and recall values of SVM and GBC were the highest, i.e., 0.87. The highest F1-score was observed with GBC and SVM, i.e., 0.85. Overall, GBC and SVM have the best precision, recall and F1-scores. The ROC curve analysis of the different algorithms is depicted in Fig. 4. According to the ROC curve analysis, the best performance was observed with SVM and RF.

5 Conclusion The authors conducted a study to develop a predictive model for estimating the risk of type 2 diabetes mellitus using machine learning algorithms. The dataset was developed by collecting information from 374 people. Demographic and stress-related parameters were collected from the included people, and the dataset contained people with and without diabetes. The performance measures in our study suggest that stress is a risk factor for the development of type 2 diabetes mellitus. Overall, GBC and SVM provided the best accuracy, precision, recall and F1-scores. According to the ROC curve analysis, the best performance was observed with SVM and RF. Strategies for stress reduction should be employed in the high-risk population to reduce the risk of type 2 diabetes development.

Table 1 Classification accuracy comparison with and without feature selection

| Method used | No. of selected features | Accuracy (%) |
|---|---|---|
| LR | 16 | 78.39 |
| LR + K-Best | 9 | 80.79 |
| LR + Feature importance | 12 | 80.8 |
| LR + Correlation | 6 | 81.07 |
| LR + Information gain | 7 | 81.07 |
| LR + RFE | 7 | 81.34 |
| RF | 16 | 76.27 |
| RF + K-Best | 5 | 80.28 |
| RF + Feature importance | 3 | 77.9 |
| RF + Correlation | 6 | 78.68 |
| RF + Information gain | 7 | 77.35 |
| RF + RFE | 15 | 78.95 |
| GBC | 16 | 77.36 |
| GBC + K-Best | 5 | 78.68 |
| GBC + Feature importance | 6 | 81.08 |
| GBC + Correlation | 6 | 80.01 |
| GBC + Information gain | 5 | 79.77 |
| GBC + RFE | 6 | 80.55 |
| DT | 16 | 76 |
| DT + K-Best | 6 | 77.08 |
| DT + Feature importance | 8 | 75.75 |
| DT + Correlation | 10 | 75.22 |
| DT + Information gain | 7 | 75.23 |
| DT + RFE | 8 | 77.06 |
| SVM | 16 | 80.28 |
| SVM + K-Best | 11 | 82.14 |
| SVM + Feature importance | 10 | 81.35 |
| SVM + Correlation | 8 | 81.6 |
| SVM + Information gain | 8 | 81.33 |
| SVM + RFE | 9 | 81.35 |
| KNN | 16 | 76.56 |
| KNN + K-Best | 11 | 80 |
| KNN + Feature importance | 8 | 78.14 |
| KNN + Correlation | 10 | 80.26 |
| KNN + Information gain | 7 | 80 |

The bold values indicate the highest accuracy in the respective models after applying feature selection

Fig. 3 Comparison of feature selection and model optimization accuracy (accuracy % of each classifier after feature selection and after hyperparameter tuning)

Table 2 Performance measures of selected classifiers

| Classifier | Precision | Recall | F1-score |
|---|---|---|---|
| LR | 0.85 | 0.85 | 0.84 |
| RF | 0.81 | 0.81 | 0.77 |
| GBC | 0.87 | 0.87 | 0.85 |
| DT | 0.83 | 0.83 | 0.83 |
| SVM | 0.87 | 0.87 | 0.85 |
| KNN | 0.80 | 0.81 | 0.79 |


Fig. 4 ROC curve analysis of different algorithms

References
1. IDF Homepage. https://www.idf.org/our-network/regions-members/south-east-asia/.../94india.html. Accessed 22 Feb 2019
2. WHO Homepage. http://www.who.int/en/news-room/fact-sheets/detail/diabetes. Accessed 21 Feb 2019
3. Dutt S, Das AK (2018) Machine learning. Pearson Education, India
4. Kelly SJ, Ismail M (2015) Stress and type 2 diabetes: a review of how stress contributes to the development of type 2 diabetes. Annu Rev Public Health 36:441–462
5. Lloyd C, Smith J, Weinger K (2005) Stress and diabetes: a review of the links. Diabetes Spectrum 18:121–127
6. Martinez A, Sanchez W, Benitez R, Gonzalez Y, Mejia M, Otiz J (2018) A job stress predictive model evaluation through classifier's algorithms. IEEE Latin America Trans 16:178–185
7. Reddy S, Thota V, Dharun A (2018) Machine learning techniques for stress prediction in working employees. In: IEEE international conference on computational intelligence and computing research, pp 1–4
8. Patil R, Shah K (2019) Assessment of risk of type 2 diabetes mellitus with stress as a risk factor using classification algorithms. Int J Rec Technol Eng 8:11273–11277
9. Harris ML, Oldmeadow C, Hure A, Luu J, Loxton D, Attia J (2017) Stress increases the risk of type 2 diabetes onset in women: a 12-year longitudinal study using causal modelling. PLoS ONE 12:1–13
10. Kumar P, Garg S, Garg A (2020) Assessment of anxiety, depression and stress using machine learning models. In: Third international conference on computing and network communications, procedia computer science, p 171
11. Priya A, Garg S, Tigga N (2020) Predicting anxiety, depression and stress in modern life using machine learning algorithms. In: International conference on computational intelligence and data science, procedia computer science, p 167
12. Papini S, Pisner D, Shumake J et al (2018) Ensemble machine learning prediction of posttraumatic stress disorder screening status after emergency room hospitalization. J Anxiety Disord 60:35–42
13. Sanchez W, Martinez A, Hernandez Y, Estrada H, Mendoza MG (2018) A predictive model for stress recognition in desk jobs. J Amb Intell Human Comput
14. Kiranashree BK, Ambika V, Radhika AD (2021) Analysis on machine learning techniques for stress detection among employees. Asian J Comput Sci Technol 10:35–37
15. Ahuja R, Banga A (2019) Mental stress detection in university students using machine learning algorithms. In: International conference on pervasive computing advances and applications, procedia computer science, 152
16. Sneha N, Gangil T (2019) Analysis of diabetes mellitus for early prediction using optimal features selection. J Big Data 6–13
17. Kumari S, Kumar D, Mittal M (2021) An ensemble approach for classification and prediction of diabetes mellitus using soft voting classifier. Int J Cogn Comput Eng 2:40–46
18. Chen R, Dewi C, Huang S, Caraka RE (2020) Selecting critical features for data classification based on machine learning methods. J Big Data 7:1–26
19. Faraz S, Ali SSA (2018) Machine learning and stress assessment: a review. In: 3rd international conference on emerging trends in engineering, sciences and technology, IEEE
20. Lama L, Wilhelmsson O, Norlander E et al (2021) Machine learning for prediction of diabetes risk in middle-aged Swedish people. Heliyon 7:1–6

Parametrised Hesitant Fuzzy Soft Multiset for Decision Making Sreelekshmi C. Warrier, Terry Jacob Mathew, and Vijayakumar Varadarajan

Abstract The drive to improve accuracy in various forms of decision making is on the rise among researchers. With the advent of data science, complex algorithms and statistical tools are utilised to derive meaningful patterns from data. Several mathematical theories, such as fuzzy soft multiset theory, attempt to deal with different universes under uncertainty for optimised decisions. After Torra introduced the hesitant fuzzy set in 2010, which is characterised by multiple membership values, many extensions of hesitant sets have emerged successfully in aid of data science decisions. To incorporate more information and robustness into the decision-making process, we introduce the new concept of the parametrised hesitant fuzzy soft multiset along with its basic properties and operations. An algorithm for decision making and its application to a real-world problem are also demonstrated. On similar lines, some useful operations for hesitant fuzzy soft multisets are also defined along with a decision-making algorithm. The proposed technique gives more precision in judgements in comparison with the predecessor model. Keywords Parametrised hesitant fuzzy soft multiset · Fuzzy soft multiset · Decision making · Data science

S. C. Warrier Department of Mathematics, Sree Ayyappa College, Chengannur, Kerala, India T. J. Mathew (B) School of Computer Sciences, Mahatma Gandhi University, Kottayam, India e-mail: [email protected] MACFAST, Thiruvalla, India V. Varadarajan School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_9


1 Introduction Human reasoning relates real-life problems to approximate, acceptable reasoning more naturally than to concrete reasoning. But the classical theories in mathematics have adopted a crisp and definite notion for simulations and computations. This created issues while solving real-life problems, which are characterised by various levels of uncertainty. As the traditional methods fail to deliver the best results, many theories came up with better solutions to deal with uncertain problems, such as the theory of probability [11], the theory of fuzzy sets [29], the theory of intuitionistic fuzzy sets [7], the theory of vague sets [9], the theory of interval mathematics [17], the theory of rough sets [20], etc. But they are not fully capable of handling problems under uncertainty. Molodtsov in 1999 came up with soft set theory [16] to overcome these difficulties. He defined the soft set as a mapping from a parameter set to the power set of the universe. The improvements obtained with soft sets prompted many researchers to come up with several extensions [1–3] (see also [13]) and applications [15]. In spite of these developments, there are also situations where we need to model decision making with elements from multiple universes. Hence, Alkhazaleh [6] introduced soft multisets as a generalisation of Molodtsov's soft set and defined their basic operations such as complement, union and intersection. Even though uncertain information was successfully represented by fuzzy sets, the results are not so impressive in all practical cases. However, finer extensions and generalisations of fuzzy sets [28] have proven successful in providing better insights. Hesitant fuzzy elements, as units of hesitant fuzzy sets [25], are capable of including multi-valued sets of memberships for managing decision making under uncertainty. Decision making is always influenced by elements of instinct and human bias, and it is a vital factor for organisations to deliver sustained performance. The synergy between data science and managers can provide predictions based on hypothetical decisions and can calibrate their effect as visualisations of market parameters. Health care is another area that has benefitted from advancements in automated decision making. Medical practitioners are now working in tandem with data science to enable more personalised decisions on treatment options. Data science empowers organisations to make decisions based on solid data-driven evidence by converting existing data into useful recommendations. Thus, the data-driven institutions of today perform with an edge over the others, even though there are reports of stalling progress in the wake of the COVID-19 pandemic [22]. While data science is often described as a new discipline, those in the mathematical sciences have been engaged with data science for decades. Data science can be applied on a firm footing only if one has a strong background in the mathematical principles of abstraction and modelling. The mathematics required for data science mainly includes probability, calculus, linear regression and optimisation. Optimisation is a vital component for the success of data science [21]. Companies use optimisation to efficiently schedule the production of their goods according to customer demands. Researchers from the imaging sciences [23, 24] and machine learning [14] are exploring hybrid methods to bring more efficiency to large-scale algorithms. There is a need to advance further with different cases of parameterised models, as they have applications in solving inverse problems in image processing and deep learning. Parameterisation is a significant preprocessing step in decision making and is capable of influencing the output of any decision-making process. It is the process of selecting the most optimised parameter values by reducing, replacing, reassigning or rearranging the data values. Hybrid parameterisation enhances the decision-making process and provides a new mechanism for solving problems with complex data sets. Decision making with uncertain and inconsistent data often requires parameterisation to obtain stable results, as proven by the comparative study in [10]. Even though multiset and soft multiset theory have been studied by many researchers, including Alkhazaleh et al. [5], Babitha and John [8], Mukherjee and Das [18], etc., these methods are not fully capable of taking in all information for the purpose of decision making. Hence, we are motivated to propose a new combined approach that complements the above two solutions, namely soft multisets and hesitant fuzzy sets, availing the benefits of parameterisation. We introduce the concept of the parametrised hesitant fuzzy soft multiset (PHFSM) as a decision-making alternative, where the aggregation of information is done by the weighted sum method on fuzzy alternatives provided by experts. Apart from this main contribution, we also propose some new operations for hesitant fuzzy soft multisets, followed by a simple algorithm for decision making with them. The proposed mathematical method of decision making has direct application to problem solving in data science. The rest of this paper is organised as follows. In Sect. 2, we introduce some new operations of use in hesitant fuzzy soft multisets, followed by their decision-making algorithm in Sect. 2.1. In Sect. 3, the novel concept of PHFSM is introduced, followed by its algorithm in Sect. 4. We also present an application of PHFSM in decision making in Sect. 4.1. A discussion and comparison are included in Sect. 5, before finally concluding in Sect. 6.

2 Some New Operations in Hesitant Fuzzy Soft Multiset The definitions of soft set [16], fuzzy soft set [12], fuzzy soft multiset [4], hesitant fuzzy set [25], hesitant fuzzy soft set [27] and hesitant fuzzy soft multiset [19] form the basis of the proposed new operations. To handle situations with multiple universes, theories such as soft multisets [6] can be utilised. But when the elements and parameters involve fuzziness and hesitancy, the hesitant fuzzy soft multiset (HFSM) can be of more use. Here, we define some new mathematical operations related to HFSM which can aid the process of decision making. The operations defined are AND, OR, restricted union and restricted intersection.


Definition 1 The AND operation on two hesitant fuzzy soft multisets (F, A) and (G, B), denoted by (F, A) ∧ (G, B), is defined as (F, A) ∧ (G, B) = (H, A × B), where H(a, b) = F(a) ∩ G(b), ∀(a, b) ∈ A × B.

Definition 2 The OR operation on two hesitant fuzzy soft multisets (F, A) and (G, B), denoted by (F, A) ∨ (G, B), is defined as (F, A) ∨ (G, B) = (I, A × B), where I(a, b) = F(a) ∪ G(b), ∀(a, b) ∈ A × B.

Definition 3 The restricted union of two hesitant fuzzy soft multisets (F, A) and (G, B) over U is a hesitant fuzzy soft multiset (H, C), where C = A ∩ B and, for each e ∈ C,

H(e) = F(e) ∪ G(e) = ({ max{h_F(e)(u), h_G(e)(u)} / u ; u ∈ U_i }, i ∈ I),

written as (F, A) ∪̃_R (G, B) = (H, C).

Definition 4 The restricted intersection of two hesitant fuzzy soft multisets (F, A) and (G, B) over U is a hesitant fuzzy soft multiset (H, D), where D = A ∩ B and, ∀e ∈ D,

H(e) = { F(e), if e ∈ A − B; G(e), if e ∈ B − A; F(e) ∩ G(e), if e ∈ A ∩ B },

where F(e) ∩ G(e) = ({ min{h_F(e)(u), h_G(e)(u)} / u ; u ∈ U_i }, i ∈ I), written as (F, A) ∩̃_R (G, B) = (H, D). Similarly, it is possible to define the restricted intersection of n sets. The algorithm in Sect. 2.1 shows such an application with 3 sets.
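Concretely, the restricted union and intersection amount to element-wise max/min of the hesitancy values on the common parameters. The following Python sketch illustrates this under the simplifying assumption that every hesitant element lists its membership values in the same sorted order and length; the dictionaries and names are our illustration, not part of the paper.

```python
# A minimal sketch of the restricted union/intersection of Definitions 3-4.
# An HFSM is modelled as {parameter: {alternative: [membership values]}}.
# Assumes the hesitant elements being combined are sorted and of equal
# length, so min/max can be taken position-wise; names are illustrative.

def restricted_combine(F, G, op):
    """Combine two HFSMs on their common parameters with `op` (min or max)."""
    common = F.keys() & G.keys()                       # C = A ∩ B
    return {
        e: {
            u: [op(x, y) for x, y in zip(F[e][u], G[e][u])]
            for u in F[e].keys() & G[e].keys()         # common alternatives
        }
        for e in common
    }

F = {"e1": {"a1": [0.2, 0.4, 0.6], "a2": [0.3, 0.4, 0.8]}}
G = {"e1": {"a1": [0.2, 0.3, 0.4], "a2": [0.1, 0.2, 0.3]}}

print(restricted_combine(F, G, min))   # restricted intersection
print(restricted_combine(F, G, max))   # restricted union
```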

2.1 A Hesitant Fuzzy Soft Multiset Approach to Decision-Making Problem A simple decision-making algorithm based on membership values is proposed in this section to solve an HFSM-based problem. The algorithm is written as: (a) Input the n hesitant fuzzy soft multisets obtained from the problem. (b) Apply the restricted AND (intersection) operator to the generated hesitant fuzzy soft multisets. (c) Compute the average membership values for each universe on the HFSM obtained in the previous step. (d) Select the alternative with the maximum value. Example 1 We take up an example to demonstrate the application of HFSM in a practical situation. The problem of selecting the most ideal candidate from a set of three different groups is given here. Let U1 = {a1, a2, a3}, U2 = {b1, b2, b3} and U3 = {c1, c2, c3} be three universes representing American, British and Chinese candidates for an interview.


Let EU1 = {e11 = educational qualification, e12 = degree marks, e13 = presentation skills}, EU2 = {e21 = good looking, e22 = decent dressing, e23 = well-mannered} and EU3 = {e31 = total experience, e32 = industry experience, e33 = overseas experience}, where E = ∏ EUi and E1, E2, E3 ⊆ E. Consider E1 = {e1 = (e11, e21, e31), e2 = (e12, e22, e32), e3 = (e12, e23, e32)}, E2 = {e1 = (e12, e23, e32), e2 = (e13, e23, e33), e3 = (e11, e21, e31)} and E3 = {e1 = (e13, e22, e33), e2 = (e11, e21, e31), e3 = (e13, e22, e33)}.

The hesitant fuzzy soft multiset

(F, E1) = { (e1, { a1/{0.2, 0.4, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.2, 0.8, 0.9}, b1/{0.3, 0.5, 0.6}, b2/{0.2, 0.5, 0.6}, b3/{0.1, 0.9, 1}, c1/{0.8, 0.9, 1}, c2/{0, 0.4, 0.9}, c3/{0.4, 0.5, 0.9} }), (e2, { a1/{0.2, 0.6, 0.8}, a2/{0.3, 0.4, 0.8}, a3/{0.5, 0.8, 0.9}, b2/{0.3, 0.9, 1}, b3/{0.1, 0.2, 0.8}, c1/{0.4, 0.8, 1}, c3/{0.6, 0.9, 1} }), (e3, { a2/{0.1, 0.4, 0.7}, a3/{0.7, 0.8, 0.9}, b1/{0.2, 0.4, 0.8}, b2/{0.5, 0.8, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.7, 0.8} }) }.

The hesitant fuzzy soft multiset

(G, E2) = { (e1, { a1/{0.2, 0.3, 0.4}, a2/{0.1, 0.2, 0.3}, a3/{0.8, 0.9, 1}, b1/{0.3, 0.5, 0.7}, b2/{0.2, 0.5, 0.8}, b3/{0.2, 0.5, 0.9}, c1/{0.3, 0.7, 1}, c2/{0.4, 0.7, 0.9}, c3/{0.4, 0.5, 0.9} }), (e2, { a1/{0.2, 0.4, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.5, 0.7, 0.8}, b2/{0.3, 0.6, 1}, b3/{0.2, 0.7, 0.8}, c1/{0.3, 0.7, 1}, c3/{0.6, 0.8, 0.9} }), (e3, { a2/{0.1, 0.2, 0.4}, a3/{0.7, 0.8, 0.9}, b1/{0.2, 0.4, 0.8}, b2/{0.5, 0.6, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.5, 0.7} }) }.

The hesitant fuzzy soft multiset

(I, E3) = { (e1, { a1/{0.1, 0.5, 0.7}, a2/{0.2, 0.2, 0.4}, a3/{0.9, 0.7, 1}, b1/{0.2, 0.7, 0.8}, b2/{0.3, 0.4, 0.5}, b3/{0.2, 0.6, 0.9}, c1/{0.7, 0.9, 1}, c2/{0.4, 0.5, 0.9}, c3/{0.4, 0.8, 1} }), (e2, { a1/{0.2, 0.3, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.4, 0.7, 0.8}, b2/{0.5, 0.6, 0.9}, b3/{0.1, 0.6, 0.8}, c1/{0.2, 0.7, 1}, c3/{0.6, 0.7, 0.9} }), (e3, { a2/{0.2, 0.4, 0.6}, a3/{0.3, 0.8, 0.9}, b1/{0.1, 0.7, 0.8}, b2/{0.5, 0.6, 0.8}, c1/{0.3, 0.8, 0.9}, c2/{0.2, 0.4, 0.5} }) }.

According to step (b) of the algorithm given in Sect. 2.1, we compute

(F, E1) ∩̃_R (G, E2) ∩̃_R (I, E3) = { (e1, { a1/{0.1, 0.2, 0.3}, a2/{0.1, 0.2, 0.3}, a3/{0.2, 0.7, 0.8}, b1/{0.2, 0.3, 0.5}, b2/{0.2, 0.5, 0.6}, b3/{0.1, 0.2, 0.5}, c1/{0.3, 0.7, 0.8}, c2/{0, 0.4, 0.5}, c3/{0.4, 0.5, 0.8} }), (e2, { a1/{0.2, 0.3, 0.4}, a2/{0.3, 0.4, 0.8}, a3/{0.4, 0.5, 0.7}, b2/{0.3, 0.5, 0.6}, b3/{0.1, 0.2, 0.6}, c1/{0.2, 0.3, 0.4}, c3/{0.6, 0.7, 0.8} }), (e3, { a2/{0.1, 0.2, 0.4}, a3/{0.3, 0.7, 0.8}, b1/{0.1, 0.2, 0.4}, b2/{0.5, 0.6, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.4, 0.5} }) }.

Further, the individual membership values are averaged and added to obtain the consolidated membership value for each universe. The values for a1, a2 and a3 are calculated as 0.50, 0.93 and 1.69, and the highest membership, a3 = 1.69, is selected from among a1, a2 and a3. Similarly, b2 = 1.43 and c1 = 1.5 are identified as the most eligible candidates from the other universes. A final selection from these choice parameters decides the ultimate selection if the interview is for only one vacancy. In this example, the American candidate with the maximal value of 1.69 (a3 = 1.69) turns out to be the selected candidate.
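The averaging-and-summing in steps (c) and (d) is easy to mechanise. The following hedged sketch reuses the hypothetical `restricted_combine` helper from the earlier snippet to score each alternative; the toy data are illustrative, not the example's full tables.

```python
# Sketch of steps (b)-(d): intersect the input HFSMs, then score each
# alternative by summing the per-parameter averages of its hesitancy values.
# Reuses the hypothetical `restricted_combine` helper defined earlier.
from functools import reduce
from statistics import mean

def hfsm_scores(hfsms):
    joined = reduce(lambda F, G: restricted_combine(F, G, min), hfsms)
    scores = {}
    for e, alts in joined.items():                 # parameters e1, e2, ...
        for u, hs in alts.items():                 # alternatives a1, b1, ...
            scores[u] = scores.get(u, 0.0) + mean(hs)
    return scores

# Toy data with one parameter and two alternatives.
F = {"e1": {"a1": [0.2, 0.4, 0.6], "a3": [0.2, 0.8, 0.9]}}
G = {"e1": {"a1": [0.2, 0.3, 0.4], "a3": [0.8, 0.9, 1.0]}}
scores = hfsm_scores([F, G])
print(scores, "->", max(scores, key=scores.get))   # step (d): best alternative
```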

3 Parametrised Hesitant Fuzzy Soft Multiset In this section, the parametrised hesitant fuzzy soft multiset (PHFSM) is introduced as a generalisation of the hesitant fuzzy soft multiset (HFSM) by assigning a membership value to each parameter. Its basic operations and decision-making algorithm are also demonstrated. This can be of use in precision decision-making problems. The PHFSM is defined, along with an example, as follows. We will use the notation η_f for a parametrised hesitant fuzzy soft multiset over U (PHFSM(U)) and μ_f for fuzzy approximate functions over U.

Definition 5 Let {U_i ; i ∈ I} be a collection of universes such that ⋂_{i∈I} U_i = ∅ and let {P_i ; i ∈ I} be a collection of parameter sets with a membership function μ_f : P → [0, 1], where F is a fuzzy set over P and μ_F(f) is a fuzzy set over U. Let U = ∏_{i∈I} HFS(U_i), where HFS(U_i) is the set of all hesitant fuzzy subsets of U_i, P = ∏_{i∈I} P_{Ui} and A ⊆ P. Then, the PHFSM over U can be represented by the function

η_f = { ( p/μ_f(p), h_f(p) ) : p ∈ A, h_f(p) ∈ HF(U), μ_f : P → [0, 1] }.

Example 2 An example to show the representation of a PHFSM is given here. Let U1 = {a1, a2, a3}, U2 = {b1, b2, b3} and U3 = {c1, c2, c3} be three universes representing candidates attending a job interview. Let PU1 = {p11, p12, p13}, PU2 = {p21, p22, p23} and PU3 = {p31, p32, p33}, where P = ∏ PUi and A ⊆ P. Consider A = { p1 = (p11/0.1, p21/0.4, p31/0.7), p2 = (p12/0.2, p22/0.5, p32/0.8), p3 = (p12/0.2, p23/0.6, p32/0.8), p4 = (p13/0.3, p21/0.4, p33/1) }. Here, p_i represents the combined parameter elements from multiple universes, and the p_i values are calculated by averaging the fuzzy approximation values. We define a PHFSM for this example by giving hesitancy values over the multiple universes as follows:

η(p1/0.4) = { a1/{0.5, 0.6, 0.7}, a2/{0.2, 0.4, 0.6}, a3/{0.1, 0.3, 0.5}, b1/{0.3, 0.5, 0.7}, b2/{0.4, 0.5}, b3/{0.6, 0.7}, c1/{0.1, 0.2, 0.3}, c2/{0.7, 0.8}, c3/{0.4, 0.7, 0.8} },
η(p2/0.5) = { a1/{0.5, 0.6, 0.8}, a2/{0.2, 0.4, 0.8}, a3/{0.1, 0.2, 0.5}, b2/{0.7, 0.9}, b3/{0.3, 0.7}, c1/{0.4, 0.8, 0.9}, c3/{0.8, 0.9} },
η(p3/0.53) = { a2/{0.3, 0.4}, a3/{0.6, 0.7, 0.8}, b1/{0.2, 0.4, 0.6}, b2/{0.1, 0.2}, c1/{0.5, 0.8}, c2/{0.4, 0.5, 0.6} },
η(p4/0.56) = { a1/{0.4, 0.6, 0.7}, a2/{0.5, 0.7, 0.8}, a3/{0.6, 0.8}, b1/{0.4, 0.6}, b3/{0.2, 0.3, 0.5}, c1/{0.6}, c2/{0.2, 0.4}, c3/{0.3, 0.5} }.

We can rewrite the above values in the notation of a PHFSM as:

η_f = { (p1/0.4, η(p1/0.4)), (p2/0.5, η(p2/0.5)), (p3/0.53, η(p3/0.53)), (p4/0.56, η(p4/0.56)) }, with the hesitancy sets as listed above.

ηf =

Definition 6 Let η_f and η_g be two PHFSMs over U. Consider the mappings μ_f : A → U and μ_g : B → U, A ⊂ B. Then, η_f is called a parametrised hesitant fuzzy soft multi subset of η_g if 1. μ_f(p) ≤ μ_g(p), ∀p ∈ P, and 2. h_f(p) ≤ h_g(p), ∀p ∈ P.

Definition 7 Two parametrised hesitant fuzzy soft multisets η_f and η_g are equal if η_f ⊆ η_g and η_g ⊆ η_f. It is written as η_f ≅ η_g.

Definition 8 The intersection of two parametrised hesitant fuzzy soft multisets η_f and η_g is denoted by η_f(p) ∩ η_g(p) and is defined by h_f(p) ∩ h_g(p) = ({ min{h_f(p)(u), h_g(p)(u)} / u ; u ∈ U_i }, p ∈ P) and μ_f(p) ∩ μ_g(p) = min{μ_f(p), μ_g(p)}, where μ_f : A → U and μ_g : B → U, provided A, B ⊆ P.

Definition 9 The AND operation on two parametrised hesitant fuzzy soft multisets η_f and η_g is defined as η_f(p) ∧ η_g(p) = η_f(p) ∩ η_g(p), ∀p ∈ P.

Definition 10 The union of two parametrised hesitant fuzzy soft multisets η_f and η_g is denoted by η_f(p) ∪ η_g(p) and is defined by h_f(p) ∪ h_g(p) = ({ max{h_f(p)(u), h_g(p)(u)} / u ; u ∈ U_i }, p ∈ P) and μ_f(p) ∪ μ_g(p) = max{μ_f(p), μ_g(p)}, where μ_f : A → U and μ_g : B → U, provided A, B ⊆ P.

Definition 11 The OR operation on two parametrised hesitant fuzzy soft multisets η_f(p) and η_g(p) is defined as η_f(p) ∨ η_g(p) = η_f(p) ∪ η_g(p).


4 A Parametrised Hesitant Fuzzy Soft Multiset Approach to Decision Making In this section, we suggest an algorithm to solve parametrised hesitant fuzzy soft multiset based decision making. (a) Input the n parametrised hesitant fuzzy soft multisets obtained from the decision problem. (b) Apply the AND operation on the parametrised hesitant fuzzy soft multisets. (c) Compute the score of each alternative from the resulting PHFSM using the equation

(1/|P|) Σ_k (μ_k h_k),

where h_k is the average membership value of the alternative in each universe under parameter p_k and μ_k is the membership value of p_k. (d) Finally, select the alternative with the maximum score.
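A small Python sketch of this weighted-sum scoring is given below, under the assumption, stated loudly, that μ_k weights the average hesitancy value h_k of each alternative; the data structures and names are our illustration, not the authors' exact formulation.

```python
# Sketch of the PHFSM scoring step, assuming the intersected PHFSM is given as
# {parameter: (mu, {alternative: [membership values]})}, where `mu` is the
# parameter's membership value. The weighting interpretation and all names
# are illustrative assumptions.
from statistics import mean

def phfsm_scores(eta):
    n_params = len(eta)
    scores = {}
    for p, (mu, alts) in eta.items():
        for u, hs in alts.items():
            # weighted sum: parameter membership times average hesitancy value
            scores[u] = scores.get(u, 0.0) + mu * mean(hs)
    return {u: s / n_params for u, s in scores.items()}

eta_k = {
    "p1": (0.1, {"a1": [0.1, 0.2, 0.3], "a2": [0.1, 0.2, 0.3]}),
    "p2": (0.6, {"a1": [0.2, 0.3, 0.4], "a2": [0.3, 0.4, 0.8]}),
}
scores = phfsm_scores(eta_k)
print(scores, "->", max(scores, key=scores.get))
```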

4.1 Application in a Decision-Making Problem A case study is presented to exhibit the application of PHFSM in a decision-making scenario.

Example 3 Let U1 = {a1, a2, a3}, U2 = {b1, b2, b3} and U3 = {c1, c2, c3} be three universes representing three electronic appliances, namely a vacuum cleaner, a washing machine and an air sanitiser. Let PU1 = {p11 = power rating, p12 = brand name, p13 = warranty}, PU2 = {p21 = automatic, p22 = size, p23 = dimension} and PU3 = {p31 = refillable, p32 = portable, p33 = sensor controlled}, where P = ∏ PUi and A, B, C ⊆ P. Consider A = { p1 = (p11/0.1, p21/0.1, p31/0.1), p2 = (p12/0.3, p22/0.6, p32/0.9), p3 = (p12/0.2, p23/0.3, p32/0.7) }, B = { p1 = (p12/0.2, p23/0.3, p32/1), p2 = (p13/0.5, p23/0.2, p33/0.8), p3 = (p11/0.6, p21/0.9, p31/0.5) } and C = { p1 = (p13/0.1, p22/0.3, p33/0.2), p2 = (p11/0.5, p21/0.2, p31/0.5), p3 = (p13/0.6, p22/0.4, p33/0.5) }.

The value of each parameter is obtained by averaging its membership values, giving A = {p1/0.1, p2/0.6, p3/0.4}, B = {p1/0.5, p2/0.5, p3/0.67} and C = {p1/0.2, p2/0.4, p3/0.5}.

The parametrised hesitant fuzzy soft multiset

η_f = { (p1/0.1, { a1/{0.2, 0.4, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.2, 0.8, 0.9}, b1/{0.3, 0.5, 0.6}, b2/{0.2, 0.5, 0.6}, b3/{0.1, 0.9, 1}, c1/{0.8, 0.9, 1}, c2/{0, 0.4, 0.9}, c3/{0.4, 0.5, 0.9} }), (p2/0.6, { a1/{0.2, 0.6, 0.8}, a2/{0.3, 0.4, 0.8}, a3/{0.5, 0.8, 0.9}, b2/{0.3, 0.9, 1}, b3/{0.1, 0.2, 0.8}, c1/{0.4, 0.8, 1}, c3/{0.6, 0.9, 1} }), (p3/0.4, { a2/{0.1, 0.4, 0.7}, a3/{0.7, 0.8, 0.9}, b1/{0.2, 0.4, 0.8}, b2/{0.5, 0.8, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.7, 0.8} }) }.

The parametrised hesitant fuzzy soft multiset

η_g = { (p1/0.5, { a1/{0.2, 0.3, 0.4}, a2/{0.1, 0.2, 0.3}, a3/{0.8, 0.9, 1}, b1/{0.3, 0.5, 0.7}, b2/{0.2, 0.5, 0.8}, b3/{0.2, 0.5, 0.9}, c1/{0.3, 0.7, 1}, c2/{0.4, 0.7, 0.9}, c3/{0.4, 0.5, 0.9} }), (p2/0.5, { a1/{0.2, 0.4, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.5, 0.7, 0.8}, b2/{0.3, 0.6, 1}, b3/{0.2, 0.7, 0.8}, c1/{0.3, 0.7, 1}, c3/{0.6, 0.8, 0.9} }), (p3/0.67, { a2/{0.1, 0.2, 0.4}, a3/{0.7, 0.8, 0.9}, b1/{0.2, 0.4, 0.8}, b2/{0.5, 0.6, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.5, 0.7} }) }.

The parametrised hesitant fuzzy soft multiset

η_i = { (p1/0.2, { a1/{0.1, 0.5, 0.7}, a2/{0.2, 0.2, 0.4}, a3/{0.2, 0.7, 0.8}, b1/{0.2, 0.7, 0.8}, b2/{0.3, 0.5, 0.8}, b3/{0.2, 0.6, 0.9}, c1/{0.7, 0.9, 1}, c2/{0.4, 0.5, 0.9}, c3/{0.4, 0.8, 1} }), (p2/0.4, { a1/{0.2, 0.3, 0.6}, a2/{0.3, 0.4, 0.8}, a3/{0.4, 0.7, 0.8}, b2/{0.5, 0.6, 0.9}, b3/{0.1, 0.6, 0.8}, c1/{0.2, 0.7, 1}, c3/{0.6, 0.7, 0.9} }), (p3/0.5, { a2/{0.2, 0.4, 0.6}, a3/{0.3, 0.8, 0.9}, b1/{0.7, 0.4, 0.8}, b2/{0.5, 0.6, 0.8}, c1/{0.3, 0.8, 0.9}, c2/{0.2, 0.4, 0.5} }) }.

Compute η_f ∩ η_g ∩ η_i = η_k (say):

η_k = { (p1/0.1, { a1/{0.1, 0.2, 0.3}, a2/{0.1, 0.2, 0.3}, a3/{0.2, 0.7, 0.8}, b1/{0.2, 0.3, 0.5}, b2/{0.2, 0.5, 0.6}, b3/{0.1, 0.2, 0.5}, c1/{0.3, 0.7, 0.8}, c2/{0, 0.4, 0.5}, c3/{0.4, 0.5, 0.8} }), (p2/0.6, { a1/{0.2, 0.3, 0.4}, a2/{0.3, 0.4, 0.8}, a3/{0.4, 0.5, 0.7}, b2/{0.3, 0.5, 0.6}, b3/{0.1, 0.2, 0.6}, c1/{0.2, 0.3, 0.4}, c3/{0.6, 0.7, 0.8} }), (p3/0.3, { a2/{0.1, 0.2, 0.4}, a3/{0.3, 0.7, 0.8}, b1/{0.1, 0.2, 0.4}, b2/{0.5, 0.6, 0.8}, c1/{0.2, 0.3, 0.4}, c2/{0.2, 0.4, 0.5} }) }.

According to the algorithm given in Sect. 4, we obtain a1 = 0.31, a2 = 0.11 and a3 = 0.19; the highest membership value among a1, a2 and a3 is obtained for a1 = 0.31. Similarly, b1 = 0.33 and c3 = 0.49 are the values obtained for the desired electronic appliances from the other sets. A selection based on the highest value at this stage decides the final selection. Accordingly, c3 = 0.49 records the highest value, thus making the air sanitiser the most favoured household electronic item from among the three sets under consideration. It should be noted that the choice of the best three gadgets depends on the priority of the decision maker. After selecting the desired gadget, one may stop processing if the problem confines itself to the selection of a group of electronic items from each universe. Thus, the scope of the problem can be adjusted according to the requirements of the selection process.

5 Discussion and Comparison The business organisations of today work in a dynamic mode challenged by many risk factors. These risk factors lead to dwindling profits and poor flexibility and are capable of crippling an organisation into failure. It is essential to identify and categorise the risk factors before applying optimisation techniques for effective decision making. Machine learning techniques such as classification, clustering and feature selection methods have been used to deal with parameterised risk factors in business [26]. A well-structured decision-making strategy for multivariate risk factors is an effective tool for optimised decision making. The proposed algorithm for decision making in a multivariate parameterised environment promotes quality decisions. The advantage of this approach is its ability to provide more holistic decision making, which enables decision makers to select the alternatives that are more aligned with their interests. In this section, we compare the decision-making process of PHFSM with that of HFSM. The same values are fed as input to both algorithms, except for the parameterisation factor in PHFSM; the values 0.1, 0.6 and 0.3 are supplied as parameterisation values for p1, p2 and p3, respectively. These cases are described in Examples 1 and 3. In the HFSM method, a3 is the choice of the algorithm, while in PHFSM, c3 is selected. The rankings of alternatives for both methods are shown in Table 1. Even though the parameterisation index does not increase on a linear scale for p1, p2 and p3, it is seen that c3 is the ideal choice according to the PHFSM technique. This change in the ordering highlights the significance of this proposal. As this paper presents a novel mathematical structure for decision making, there is no scope for comparison on public data sets. Unlike computer science papers with metrics such as accuracy, precision and recall, we have compared the proposed decision-making technique on numerical values, as this is the only way of comparison in mathematics.


Table 1 Rankings of alternatives with HFSM and PHFSM approaches

| Method | Ranking of alternatives | Selected choice |
|---|---|---|
| HFSM | a3 ≻ c1 ≻ b2 | a3 |
| PHFSM (proposed) | c3 ≻ b1 ≻ a1 | c3 |

6 Conclusion The selection of different products from a set of choices can be better handled with parameterisation. In this paper, we have introduced the concept of parametrised hesitant fuzzy soft multisets, their basic operations and an algorithm for decision making. We have also explored some mathematical operations on hesitant fuzzy soft multisets. A comparison of decision outcomes between PHFSM and HFSM shows that PHFSM is better poised to handle more information fusion than HFSM. More studies and comparisons with other related structures have to be done to reaffirm the robustness of PHFSM. We have supplemented illustrative examples of decision making with practical cases of PHFSM. This proposal can be utilised to launch efficient systems for precise applications in soft computing, data mining, expert systems, etc., where a selection has to be made from multiple universes. Acknowledgements The authors thank Prof. J. C. R. Alcantud, University of Salamanca, for his expertise and assistance in reviewing and writing the manuscript. The second author is also grateful to the SOCS, Mahatma Gandhi University, Kottayam, for his postdoctoral fellowship.

References
1. Alcantud JCR (2016) A novel algorithm for fuzzy soft set based decision making from multiobserver input parameter data set. Inf Fusion 29:142–148
2. Alcantud JCR (2016) Some formal relationships among soft sets, fuzzy sets, and their extensions. Int J Approx Reason 68:45–53
3. Alcantud JCR, Mathew TJ (2017) Separable fuzzy soft sets and decision making with positive and negative attributes. Appl Soft Comput 59:586–595
4. Alkhazaleh S, Salleh AR (2012) Fuzzy soft multiset theory. In: Abstract and applied analysis, vol 2012. Hindawi
5. Alkhazaleh S, Salleh AR, Hassan N (2011) Fuzzy parameterized interval-valued fuzzy soft set. Appl Math Sci 5(67):3335–3346
6. Alkhazaleh S, Salleh AR, Hassan N (2011) Soft multisets theory. Appl Math Sci 5(72):3561–3573
7. Atanassov K (2016) Intuitionistic fuzzy sets. Int J Bioautomation 20:1
8. Babitha K, John SJ (2013) On soft multi sets. Ann Fuzzy Math Inform 5(1):35–44
9. Bustince H, Burillo P (1996) Vague sets are intuitionistic fuzzy sets. Fuzzy Sets Syst 79(3):403–405
10. Fujita H et al (2020) Effectiveness of a hybrid deep learning model integrated with a hybrid parameterisation model in decision-making analysis. In: Knowledge innovation through intelligent software methodologies, tools and techniques: proceedings of the 19th international conference on new trends in intelligent software methodologies, tools and techniques (SoMeT_20), vol 327. IOS Press, p 43
11. Jeffreys H (1998) The theory of probability. OUP Oxford
12. Maji P, Biswas R, Roy A (2001) Fuzzy soft sets. J Fuzzy Math 9:589–602
13. Mathew TJ, Alcantud JCR (2017) Corrigendum to "A novel algorithm for fuzzy soft set based decision making from multiobserver input parameter data set" [Information Fusion 29 (2016) 142–148]. Inf Fusion 33(C):113–114
14. Mathew TJ, Sherly E (2018) Analysis of supervised learning techniques for cost effective disease prediction using non-clinical parameters. In: 2018 international CET conference on control, communication, and computing (IC4). IEEE, pp 356–360
15. Mathew TJ, Sherly E, Alcantud JCR (2017) An adaptive soft set based diagnostic risk prediction system. In: The international symposium on intelligent systems technologies and applications. Springer, Cham, pp 149–162
16. Molodtsov D (1999) Soft set theory—first results. Comput Math Appl 37:19–31
17. Moore RE, Kearfott RB, Cloud MJ (2009) Introduction to interval analysis. SIAM
18. Mukherjee A, Das AK (2016) Application of fuzzy soft multi sets in decision-making problems. In: Proceedings of 3rd international conference on advanced computing, networking and informatics. Springer, pp 21–28
19. Onyeozili I, Balami H, Peter C (2018) A study of hesitant fuzzy soft multiset theory. Ann Fuzzy Math Inform
20. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11(5):341–356
21. Peyre G, Chambolle A (2020) Preface to the special issue on optimization for data sciences. Appl Math Optim 82(3):889–890
22. Richards T, Scowcroft H, Doble E, Price A, Abbasi K (2021) Healthcare decision making should be democratised
23. Sreedevi S, Mathew TJ (2019) A modified approach for the removal of impulse noise from mammogram images. In: International symposium on signal processing and intelligent recognition systems. Springer, pp 291–305
24. Sreedevi S, Mathew TJ, Sherly E (2016) Computerized classification of malignant and normal microcalcifications on mammograms: using soft set theory. In: 2016 international conference on information science (ICIS). IEEE, pp 131–137
25. Torra V (2010) Hesitant fuzzy sets. Int J Intell Syst 25(6):529–539
26. Ullah I, Raza B, Malik AK, Imran M, Islam SU, Kim SW (2019) A churn prediction model using random forest: analysis of machine learning techniques for churn prediction and factor identification in telecom sector. IEEE Access 7:60134–60149
27. Wang F, Li X, Chen X (2014) Hesitant fuzzy soft set and its applications in multicriteria decision making. J Appl Math 2014
28. Warrier SC, Mathew TJ, Alcantud JCR (2020) Fuzzy soft matrices on fuzzy soft multiset and its applications in optimization problems. J Intell Fuzzy Syst (Preprint) 1–12
29. Zadeh L (1965) Fuzzy sets. Inf Control 8:338–353

Some Variations of Domination in Order Sum Graphs Javeria Amreen and Sudev Naduvath

Abstract An order sum graph of a group G, denoted by Γ_os(G), is a graph whose vertex set consists of the elements of G, with two vertices a, b ∈ Γ_os(G) adjacent if o(a) + o(b) > o(G). In this paper, we extend the study of order sum graphs of groups to domination. We determine different types of domination, such as connected, global, strong, secure and restrained domination, for order sum graphs, their complements and the line graphs of order sum graphs. Keywords Order sum graphs · Domination · Complement of a graph · Line graphs MSC 2020 05C25 · 05C75

1 Introduction Applications of group theory are widely seen in communication networks. For example, in the operation of a pipeline system, a pipeline model is built for the transmission of data in the pipeline system to analyse the functionality of pipeline entities. The entire network can be classified into three layers, namely the application layer, which is the source of flow-altering behaviour; the business object layer, which deals with packaging the flow and transporting it; and the network node layer, which finds the optimal path and completes the data flow distribution (see Zhang et al. [1]). Some applications of algebra are seen in the field of computer science in the form of coding theory and cryptography, which ensure the security and accuracy of the data transmitted from one device to another. Other applications of algebra appear in space time codes, which use the closure property of groups under addition and multiplication. Algebra can also be applied in network coding, signal processing and image processing (see Boston [2]). J. Amreen (B) · S. Naduvath Department of Mathematics, CHRIST (Deemed to be University), Bangalore, India e-mail: [email protected] S. Naduvath e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_10


Investigating graphs associated with algebraic structures like groups and rings is of much interest to researchers across the globe. A few among such graphs are intersection graphs (see Chakrabarty et al. [3]), power graphs (see Cameron and Ghosh [4]), total graphs (see Anderson and Badawi [5]) and so on. Motivated by the literature stated above, we introduced a graph called the order sum graph of a group in [6].

Definition 1 (Amreen and Naduvath [6]) An order sum graph of a group G, denoted by Γ_os(G), is a graph with vertex set consisting of the elements of G, where two vertices a, b ∈ Γ_os(G) are adjacent if o(a) + o(b) > o(G).

Some significant results on order sum graphs which are needed for our study in this paper are stated below:

Proposition 1.1 (Amreen and Naduvath [6]) The order sum graph associated with a group G is a null graph if and only if G is not a cyclic group.

Theorem 1.2 (Amreen and Naduvath [6]) Let G be a cyclic group with exactly one generator; then Γ_os(G) is a path of order 2.

Theorem 1.3 (Amreen and Naduvath [6]) χ(Γ_os(G)) = ϑ + 1, where ϑ is the number of generators of the group G.

Domination is another branch of graph theory which is extensively worked on, mainly because of its applications: to name a few, minimizing the distance between emergency locations such as hospitals and fire stations, social network security, electrical networks and so on. This has motivated us to extend our study of order sum graphs associated with groups to domination. It is interesting to see how dominating sets can be applied in real life, specifically in today's modern life where computer networks and mobile networks have a huge impact on our lives. The mobile ad hoc wireless network is one such network which uses a connected dominating set of a graph, where the vertices are the mobile devices. It is mainly used for routing and broadcasting data to the mobile devices from the wireless network connecting them (see Sasireka and Kishore [7]). Dominating sets can also be used in wireless sensor networks, in which sensors are distributed to supervise physical conditions such as pressure, temperature and so on, and information is broadcast through the wireless network to the desired location (see Sasireka and Kishore [7]). The foundations of algebraic structures are applied in dynamic networks, in which pairs of vertices are linked for a finite set of closed intervals of time within a fixed time period (see Kontoleon et al. [8]). All the above-mentioned aspects make studies on graphs and networks generated from different algebraic structures important and significant. In this paper, we discuss various types of domination in order sum graphs, their complements and the line graphs of order sum graphs. For the terms and definitions in graph theory, refer to [9], and for those in group theory, refer to [10]. For various topics in domination, we refer to [11, 12]. Throughout the paper, G refers to a group of order n with ϑ generators.


2 Domination in Order Sum Graphs The domination number of the order sum graph associated with a group G of order n is either 1 or n according as G is a cyclic or non-cyclic group, respectively (see Amreen and Naduvath [6]). A connected dominating set of Γ is a set S ⊆ V(Γ) such that the subgraph induced by S is connected. The minimum size of all such dominating sets of Γ is said to be its connected domination number, written γ_c(Γ) (see Balamurugan et al. [13]). A total dominating set of Γ is a set S ⊆ V(Γ) such that for any v ∈ V, N(v) ∩ S ≠ ∅. The minimum size of such a set is said to be the total domination number of Γ, represented by γ_t(Γ) (see Haynes et al. [11]). The theorem below discusses the relation between γ_c(Γ_os) and γ_t(Γ_os) for a cyclic group.

Theorem 2.1 For the order sum graph of a cyclic group G, γ_c(Γ_os(G)) = γ_t(Γ_os(G)) = 2.

Proof Let Γ_os(G) be the order sum graph of the cyclic group G. Then, it must contain at least one generator, say a. By Definition 1, the vertex corresponding to a in Γ_os(G) is a neighbour of all the other vertices. That is, the vertex a dominates Γ_os(G), but the subgraph induced by the vertex a is not connected. The set S containing the vertex a together with a vertex adjacent to it induces a connected subgraph. The set S is also a total dominating set, as every vertex of V in Γ_os(G) is a neighbour of the generator vertex in S. Hence, γ_c(Γ_os(G)) = γ_t(Γ_os(G)) = 2.

A restrained dominating set of Γ is a set S ⊆ V(Γ) such that for any v ∈ V − S, N(v) ∩ S ≠ ∅ and N(v) ∩ (V − S) ≠ ∅. The restrained domination number of Γ, represented by γ_r(Γ), is the minimum size of all such sets of Γ (see Domke et al. [14]). Then, we have:

Proposition 2.2 For the order sum graph of a cyclic group,

γ_r(Γ_os(G)) = { 2, ϑ = 1; 1, ϑ > 1. }   (1)

Proof Let Γ_os(G) be the order sum graph of a cyclic group G with ϑ generators. Consider the following cases to determine the restrained domination number of Γ_os(G): Case 1: Let ϑ = 1. Then, by Theorem 1.2, Γ_os(G) is a K_2. The restrained dominating set contains both the vertices of Γ_os(G). Hence, γ_r(Γ_os(G)) = 2. Case 2: Let ϑ > 1. Let the singleton set S contain one of the generators; then V − S must contain at least one generator. Hence, each vertex in V − S is a neighbour of the vertex in S and of at least one more vertex in V − S. Therefore, γ_r(Γ_os(G)) = 1.


Theorem 2.3 For the order sum graph of a cyclic group G, γ_g(Γ_os(G)) = ϑ + 1.

Proof Let Γ_os(G) be the order sum graph of a cyclic group G with ϑ generators. The vertices corresponding to the ϑ generators of G are universal vertices in Γ_os(G), and hence they are isolated vertices in the complement Γ̄_os(G). All the remaining vertices, corresponding to the non-generators of G, are mutually non-adjacent in Γ_os(G), and hence they form a complete subgraph in Γ̄_os(G). Therefore, in order to dominate both Γ_os(G) and its complement Γ̄_os(G), we require the ϑ vertices that dominate all the isolated vertices in Γ̄_os(G) and one more vertex, corresponding to a non-generator, which dominates the complete subgraph in Γ̄_os(G). Hence, γ_g(Γ_os(G)) = ϑ + 1.

Theorem 2.4

γ_s(Γ_os(G)) = { 1, if ϑ = 1; 2, otherwise. }   (2)

Proof Let Γ_os(G) be the order sum graph of a cyclic group G with ϑ generators. Case 1: Let ϑ = 1. Therefore, by Definition 1, Γ_os(G) is a star graph K_{1,n−1}, where the generator vertex is the universal vertex. The set S containing any n − 1 vertices of Γ_os(G) is the minimum secure dominating set, since for the only vertex in V − S there exists a vertex in S such that they are neighbours, and the removal of that vertex from S together with the addition of the vertex from V − S still yields a dominating set. Therefore, γ_s(Γ_os(G)) = n − 1; but by Theorem 1.2, Γ_os(G) is a path of order 2. Hence, γ_s(Γ_os(G)) = 1. Case 2: Let ϑ > 1. That is, there are at least two generators in G. The set S containing any two generator vertices is the minimum secure dominating set, since the removal of any vertex from S and the addition of any other vertex from V − S to S will still make it a dominating set. Therefore, γ_s(Γ_os(G)) = 2.

Theorem 2.5 The co-secure domination number of the order sum graph of a cyclic group G of order n with ϑ generators is 1.

Proof Let Γ_os(G) be the order sum graph of a cyclic group G with ϑ generators. Case 1: Let ϑ = 1. The set S with all the non-generator vertices is the minimum co-secure dominating set, since S itself is a dominating set and, for each vertex of S, there is a generator vertex in V − S such that the removal of that vertex from S and the addition of the generator vertex from V − S to S is again a dominating set. Therefore, γ_cs(Γ_os(G)) = n − 1; but by Theorem 1.2, Γ_os(G) is a path of order 2. Hence, γ_cs(Γ_os(G)) = 1. Case 2: Let ϑ > 1. That is, there are at least two generators in G. The singleton set S containing one of the generators is the minimum co-secure dominating set, since the vertex in S can be replaced by a generator vertex in V − S, still making S a dominating set. Therefore, γ_cs(Γ_os(G)) = 1.

Theorem 2.6 (Kulli et al. [15]) For any graph Γ with n vertices, the cototal domination number γ_ct(Γ) = n if and only if each component of Γ is a star.


Theorem 2.7 For the order sum graph of G,

$$\gamma_{ct}(\Gamma_{os}(G)) = \begin{cases} 2, & \text{if } \vartheta = 1,\\ 1, & \text{if } \vartheta > 1. \end{cases} \qquad (3)$$

Proof Let Γos(G) be the order sum graph of G with ϑ generators. To determine the cototal domination number of Γos(G), consider the following cases:

Case 1: Let ϑ = 1. Then, by Theorem 1.2, Γos(G) is a path of order 2. Therefore, by Theorem 2.6, γct(Γos(G)) = 2.

Case 2: Let ϑ > 1. That is, G has at least two generators. The singleton set S containing one of the generator vertices is a cototal dominating set since V − S contains at least one generator vertex, which is adjacent to all other vertices, and hence V − S has no isolated vertices. Hence, γct(Γos(G)) = 1.

The following theorem discusses the relation between the strong domination number and cyclicity for order sum graphs:

Theorem 2.8 The strong domination number of the order sum graph of a group G is 1 if and only if G is cyclic.

Proof Let γst(Γos(G)) = 1. Assume that G is a non-cyclic group. Then, Γos(G) is a null graph by Proposition 1.1. Therefore, γst(Γos(G)) = n, which contradicts γst(Γos(G)) = 1. Therefore, G must be cyclic. Conversely, let G be a cyclic group of order n with ϑ generators. Then, by Definition 1, the degree of a generator vertex in Γos(G) is n − 1 and that of a non-generator vertex is ϑ. Clearly, the singleton set S containing a generator vertex is a dominating set, and ϑ ≤ n − 1 because the order of the identity element is 1 in any group G. Therefore, S is a strong dominating set of Γos(G). Hence, γst(Γos(G)) = 1.

At this point, it is interesting to investigate the domatic number d(Γ) of order sum graphs.

Theorem 2.9 The domatic number of the order sum graph associated with a cyclic group G with ϑ generators is ϑ + 1.

Proof Let Γos(G) be the order sum graph of a cyclic group G with ϑ generators. The singleton sets, each containing a vertex corresponding to a generator, are all dominating sets in Γos(G). The set containing all the non-generator vertices is also a dominating set in Γos(G). Therefore, the vertex set of Γos(G) can be partitioned into at most ϑ + 1 dominating sets. Hence, d(Γos(G)) = ϑ + 1.


3 Domination in Line Graphs and Complement of Order Sum Graphs

In this section, we determine various types of domination for the complement of order sum graphs and for the line graphs of order sum graphs.

Theorem 3.1 For the order sum graph of a cyclic group G with ϑ generators,

$$\gamma(\overline{\Gamma}_{os}(G)) = \gamma_{st}(\overline{\Gamma}_{os}(G)) = \gamma_{p}(\overline{\Gamma}_{os}(G)) = \gamma_{s}(\overline{\Gamma}_{os}(G)) = \chi(\Gamma_{os}(G)) = \vartheta + 1 \qquad (4)$$

Proof Consider the order sum graph Γos(G) of G with ϑ generators. By Definition 1, the complement Γ̄os(G) is a disconnected graph with ϑ isolated vertices and a complete graph K(n−ϑ). Therefore, there are ϑ + 1 components in Γ̄os(G). Let the set S contain all the generator vertices and one non-generator vertex, and let V − S contain the remaining non-generator vertices. That is, the cardinality of S is ϑ + 1. Clearly, S is a dominating set. For each vertex in V − S, there is a non-generator vertex in S whose degree is the same as the degree of the vertices in V − S. Therefore, S is a strong dominating set of Γ̄os(G). Each vertex in V − S is adjacent to exactly one vertex of S, namely the one corresponding to a non-generator. Therefore, S is also a perfect dominating set of Γ̄os(G). Corresponding to each vertex d in V − S, there is a non-generator vertex, say c, in S such that they are adjacent and (S − {c}) ∪ {d} is a dominating set. Therefore, the set S is also a secure dominating set. Combining all the above statements with Theorem 1.3, we get

$$\gamma(\overline{\Gamma}_{os}(G)) = \gamma_{st}(\overline{\Gamma}_{os}(G)) = \gamma_{p}(\overline{\Gamma}_{os}(G)) = \gamma_{s}(\overline{\Gamma}_{os}(G)) = \chi(\Gamma_{os}(G)) = \vartheta + 1. \qquad (5)$$

Theorem 3.2 For the complement of the order sum graph of a cyclic group G with ϑ generators,

$$\gamma_r(\overline{\Gamma}_{os}(G)) = \gamma_{ct}(\overline{\Gamma}_{os}(G)) = \begin{cases} \vartheta + 1, & \text{if } \vartheta < n - 2,\\ n, & \text{otherwise.} \end{cases} \qquad (6)$$

Proof For the complement of the order sum graph, Γ̄os(G), with ϑ generators, let S be the set with all the generator vertices and one of the non-generator vertices. Therefore, V − S is the set containing the remaining non-generator vertices. Now, consider the following cases:

Case 1: Let ϑ < n − 2. Then, each vertex in V − S is adjacent to the non-generator vertex in S and to another non-generator vertex in V − S, and the induced


subgraph ⟨V − S⟩ of Γ̄os(G) has no isolated vertices. Therefore, S is both a restrained dominating set and a cototal dominating set. Hence, γr(Γ̄os(G)) = γct(Γ̄os(G)) = ϑ + 1.

Case 2: Let ϑ = n − 2 or n − 1. Then, S will contain n − 1 or n vertices, respectively, and V − S will be a singleton set or an empty set, respectively. Hence, γr(Γ̄os(G)) = γct(Γ̄os(G)) = n.

Theorem 3.3 For the order sum graph of a cyclic group G with ϑ generators,

$$\gamma(L(\Gamma_{os}(G))) = \gamma_r(L(\Gamma_{os}(G))) = \left\lceil \frac{\vartheta}{2} \right\rceil \qquad (7)$$

Proof Let Γos(G) be the order sum graph associated with a cyclic group G with vertex set V = {a1, a2, a3, …, an}, and let there be ϑ generators. If ϑ = 1, then Γos(G) is a star graph K(1, n−1), and therefore L(Γos(G)) is a complete graph of order n − 1. Clearly, γ(L(Γos(G))) = γr(L(Γos(G))) = 1 = ⌈ϑ/2⌉. Let ϑ = 2, and let a1, a2 be the two generators of G. Then, there is a vertex a1a2 in L(Γos(G)) which is adjacent to all the remaining vertices of L(Γos(G)). Therefore, the set S = {a1a2} is a dominating set and a restrained dominating set of L(Γos(G)), since each vertex in V − S is adjacent to the vertex in S and is also adjacent to a vertex in V − S. Hence, γ(L(Γos(G))) = γr(L(Γos(G))) = 1 = ⌈ϑ/2⌉. If ϑ = 3, let a1, a2, a3 be the three generators of G. Then, the set S = {a1a2, a3a4} is a dominating set of L(Γos(G)). Clearly, S is also a restrained dominating set of L(Γos(G)). Therefore, γ(L(Γos(G))) = γr(L(Γos(G))) = 2 = ⌈ϑ/2⌉. Proceeding in a similar manner, we get γ(L(Γos(G))) = γr(L(Γos(G))) = ⌈ϑ/2⌉.

Theorem 3.4

$$\gamma_c(L(\Gamma_{os}(G))) = \begin{cases} 2, & \text{if } \vartheta = 1 \text{ or } \vartheta = 2,\\ \vartheta - 1, & \vartheta \ge 3. \end{cases} \qquad (8)$$

Proof Consider the vertex set of Γos(G) to be V = {a1, a2, a3, …, an} and let ϑ be the number of generators in G.

Case 1: Let ϑ = 1, and let a1 be the generator vertex in Γos(G). Then, Γos(G) is a star graph K(1, n−1) and therefore L(Γos(G)) is a K(n−1). The set S containing the vertex a1a2 of L(Γos(G)) along with a vertex adjacent to it forms a connected dominating set of L(Γos(G)). Therefore, γc(L(Γos(G))) = 2. The set S remains a connected dominating set of L(Γos(G)) even if ϑ = 2. Therefore, γc(L(Γos(G))) = 2 when G has either 1 or 2 generators.

Case 2: Let ϑ ≥ 3. If there are three generators, say a1, a2, a3, in G, then the set S = {a1a2, a2a3} is a connected dominating set of L(Γos(G)), as S is a dominating set and the subgraph induced by S in L(Γos(G)) is connected. Hence, γc(L(Γos(G))) = 2 = ϑ − 1. Similarly, if there are four generators, say a1, a2, a3, a4, in G, then the set S = {a1a2, a2a3, a3a4} is a connected dominating set of


L(Γos(G)). Hence, γc(L(Γos(G))) = 3 = ϑ − 1. Proceeding in a similar manner, we get γc(L(Γos(G))) = ϑ − 1.

4 Conclusion and Scope for Future Work

In this paper, we discussed different types of domination, such as connected domination, total domination, restrained domination, global domination, secure domination, co-secure domination, cototal domination, strong domination and perfect domination, in order sum graphs and their complements. We also determined the domatic number of order sum graphs. Further, we obtained the domination number, along with the connected and restrained domination numbers, for the line graphs of order sum graphs. The following are some open problems we identified during the present study, which seem promising for further work:

(i) To investigate graphs associated with groups in which two vertices are adjacent if the sum of their orders is equal to the order of the group.
(ii) To investigate graphs in which the inverse of an element of the group is considered in the adjacency criterion.
(iii) To construct the graphs associated with cosets of the group.
(iv) To construct signed graphs for the order sum graphs and other algebraic graphs.
(v) To extend the current study of order sum graphs associated with groups to the concept of rings.

References

1. Zhang J, Xiong F, Kang J (2018) The application of group theory in communication operation pipeline system. Math Prob Eng
2. Boston N (2012) Applications of algebra to communications, control, and signal processing. Springer
3. Chakrabarty I, Ghosh S, Mukherjee TK, Sen MK (2009) Intersection graphs of ideals of rings. Discrete Math 309(17):5381–5392
4. Cameron PJ, Ghosh S (2011) The power graph of a finite group. Discrete Math 311(13):1220–1222
5. Anderson DF, Badawi A (2008) The total graph of a commutative ring. J Algebra 320(7):2706–2719
6. Amreen J, Naduvath S (2022) On order sum graph of a group. Baghdad Sci J (to appear)
7. Sasireka A, Kishore AHN (2014) Applications of dominating set of a graph in computer networks. Int J Eng Sci Res Technol 3(1):170–173
8. Kontoleon N, Falzon L, Pattison P (2013) Algebraic structures for dynamic networks. J Math Psychol 57(6):310–319
9. West DB (2001) Introduction to graph theory, 2nd edn. Prentice Hall of India, New Delhi
10. Cohn PM (2003) Basic algebra. Springer
11. Haynes TW, Hedetniemi ST, Henning MA (2020) Topics in domination in graphs. Springer
12. Haynes TW, Hedetniemi ST, Slater P (1998) Fundamentals of domination in graphs. CRC Press

Some Variations of Domination in Order Sum Graphs

125

13. Balamurugan S, Anitha M, Kalaiselvi S (2019) Chromatic connected domination in graphs. J Discrete Math Sci Crypt 22(5):753–760
14. Domke GS, Hattingh JH, Hedetniemi ST, Laskar RC, Markus LR (1999) Restrained domination in graphs. Discrete Math 203(1–3):61–69
15. Kulli VR, Janakiram B, Iyer RR (1999) The cototal domination number of a graph. J Discrete Math Sci Crypt 2(2–3):179–184

Emotion Detection Using Natural Language Processing and ConvNets Akash Das, Kartik Nair, and Yukti Bandi

Abstract Emotion detection is a developing technology, with ongoing research mainly focused on identifying various aspects of human emotion and applying different machine learning algorithms to determine which category an input belongs to. Emotion detection is gaining importance increasingly, since the human voice reflects underlying emotion that is often lost when speech is converted into text. This paper is mainly focused on analyzing different human behaviors and accordingly planning or making certain decisions based on these emotions. There are various methods for identifying a person's emotion; advanced machine learning approaches such as NLP with deep learning and neural networks (like dual RNNs), which are prominent research fields of AI, are discussed in this paper. Keywords Mel-frequency cepstrum coefficients (MFCC) · Natural language processing (NLP) · Recurrent neural network (RNN) · Artificial intelligence (AI)

1 Introduction

Human emotion detection is a highly useful tool that can be used by modern AI systems to make real-time decisions. For the human eye, recognizing the emotions of a person from their face and voice is a trivial task, but for a computer, this task requires heavy, complex computations and various techniques to process and extract features from the input samples. This concept of human emotion detection using voice and face data is applicable in multiple areas where additional information about a person is required.

A. Das (B) · K. Nair · Y. Bandi
Electronics and Telecommunication Department, D. J. Sanghvi College of Engineering, Vile-Parle (W), Mumbai, India
e-mail: [email protected]
K. Nair e-mail: [email protected]
Y. Bandi e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_11


Business operations are a major field where the importance of emotion detection is critical. Many businesses need to analyse customer reviews of their products and services, and also to study offers and discounts, to prosper in their domain. An artificially intelligent system can distinguish between the products that a customer liked and disliked by capturing the emotions of the customer in real time using visual and audio signals. Automatic speech emotion recognition is quickly gaining traction as a new and exciting research field in human–computer interaction. Speech emotion recognition is an integral component of the ongoing research on recognizing emotions with the help of electronic devices. Recognizing emotion through speech with high accuracy can be beneficial for designing precise emotion detection models for intelligent human–computer interaction. To make human–computer interaction seem more seamless, the machine needs to respond to emotions in a way similar to how humans would in that situation. Therefore, to achieve this goal, the computer must identify emotions using facial expressions, speech patterns, or both. In the field of human–computer interaction, along with images and video, speech is certainly a significant attribute for emotion recognition. This paper discusses the previous research done using various modes of emotion detection, including face, speech, and textual data. Following that, the paper explains the working principle of the project, covering how the data is acquired and processed and how the results are generated. Finally, it describes the implementation and presents the results obtained from the various models deployed in this project and their accuracy parameters.

2 Literature Survey

Emotion is deep-rooted in human beings, and as a result, comprehending these emotions is a key part of human-like artificial intelligence (AI) [1]. Emotion recognition in conversation (ERC) has gained significant popularity in recent years as a topic of research and development in the domain of natural language processing (NLP). ERC requires scalable and effective conversational emotion recognition algorithms to mine opinions from different types of conversational data [2]. Emotion-aware dialogues are generated by ERC, which requires an understanding of user emotions. The final model can be used to detect emotions in the ongoing conversation between the human and the computer and to decide what response to give. ERC has immense potential on social media platforms for understanding and filtering out hateful sentiments conveyed on the platform, which can have an impact on users' mental health [3]. Opinion mining and recommendation systems can also be made more efficient using emotion recognition. To tackle the problem of emotion recognition in text, several emotion lexicons have been developed [4]. The abundant amount of conversational data created and accumulated over the years has drawn great attention to the field of ERC. It is useful in areas like health care, banking, education, etc. ERC also provides aid to teachers and students in the field of E-learning by providing information about


the emotional state of students [5]. Speech emotion recognition is a computationally heavy task that consists of two chief components: feature extraction and classification of emotions. In [6], the RNN method was compared with the basic MLR method and the most widely used SVM method. Classification, the last step of speech emotion recognition, primarily involves assigning the raw data to a category of emotion based on the features extracted from the data. In the paper "A Multi-level Classification Approach for Facial Emotion Recognition", Drume and Jalal designed a multi-level classification framework that includes feature extraction, training, and classification. Principal component analysis and support vector machines were used together in this work to build a machine learning model with an accuracy of 93% for the classification of emotions from image data [7]. Banu et al., in their paper "A Novel Approach for Face Expressions Recognition", used Haar functions to detect the face, eyes, and mouth; edge detection to extract the eye boundaries precisely; and Bézier curves to approximate the extracted regions. The performance achieved is 82%, but their system is unable to achieve a satisfactory result where there are strong illumination variations or if the eyes of the person are closed [8]. An adaptive sublayer compensation-based facial emotion recognition method for human emotions is implemented in the paper "Human Emotions Recognition using Adaptive Sublayer Compensation and various Feature Extraction Mechanism" by Bharate et al., with an accuracy of 86.5% using principal component analysis and 85.1% using wavelet features [9].

3 Working Principle

As shown in Fig. 1, image data and audio signals are processed simultaneously to detect the emotion in the audio file. Making full use of the data, the model analyses the voice signals at both the signal level and the language level to make an appropriate classification. First, the data is collected in the form of sound files containing short clips, and these files are labeled according to the category of emotion to which they belong [10]. Once the labeled dataset is ready, it is analyzed to find any patterns in those clips using visualization techniques along with signal processing, with the help of Python libraries such as Librosa. Following this, convolutional neural networks (CNNs) and dual recurrent neural networks (RNNs) such as LSTMs were used for model building, with Mel-frequency cepstrum coefficients (MFCCs) extracted from the sound as features. The accuracies achieved on the train and test sets by each of the above models are then compared to select the model with the best results and the highest accuracy [11]. Once the model is trained, it is deployed with real-time voice input from a microphone. This is executed using an application that takes voice from the microphone as input, sends the signal to the input of the trained model, and displays the detected emotion on the application's screen [12]. The speech is also converted into text, and another model is created to perform sentiment analysis on that text and show the


Fig. 1 Final block diagram

category to which the output belongs. This helps increase the accuracy of emotion detection by making use of the speech signal as well as the text for the classification process [13]. The application then gives a response based on the emotion detected and the text analyzed by the machine learning models. A sketch of the MFCC feature-extraction step is given below.
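To make the feature-extraction step concrete, the following is a minimal sketch of extracting MFCCs from a labeled clip with Librosa, as described above; the file path, sampling rate, and number of coefficients are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
import librosa

# Load a labeled clip (path and 22.05 kHz sampling rate are assumed).
signal, sr = librosa.load("clips/angry_001.wav", sr=22050)

# 40 MFCCs per frame; averaging over frames gives one fixed-length
# feature vector per clip, a common input format for the classifiers above.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
feature_vector = np.mean(mfcc.T, axis=0)   # shape: (40,)
print(feature_vector.shape)
```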

4 Description

4.1 2D Convolutional Neural Network

As facial images are 2D in nature, individual neurons in a CNN have 2D planes of weights, called kernels, and 2D inputs and outputs, known as feature maps. Figure 2 illustrates a conventional CNN consisting of two convolution and two pooling layers that classifies an image into two categories [14]. A single fully connected layer processes the output of the last pooling layer and is followed by an output layer. The output layer consists of n fully connected neurons, corresponding to the n classes among which the image needs to be classified; a sketch of such a network is given below.
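The following is a minimal Keras sketch of the architecture just described: two convolution/pooling stages, one fully connected layer, and an n-way output. The filter counts, kernel sizes, and 48×48 grayscale input shape are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_classes = 2  # number of emotion categories (assumed)

# Two convolution + pooling stages, one fully connected layer,
# and an n-way softmax output, mirroring the description above.
model = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),           # grayscale face image (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),      # single fully connected layer
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```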

4.2 Recurrent Neural Networks

Google Assistant, Cortana, Alexa, and Siri use RNN algorithms to give accurate results or information to users. A bidirectional RNN utilizes information gathered from past conversations as well as from future data. This paper proposes an


Fig. 2 Facial emotion prediction block diagram

architecture that uses convolutions to extract short-term dependencies, together with RNNs and attention to extract long-term dependencies [15]. Regularization is vital for good performance with RNNs, because the flexibility they provide makes them prone to overfitting.

4.3 Intent-Based ChatBot

An intent-based ChatBot uses natural language processing (NLP) to identify the intent of the message conveyed by the user and to write back a relevant, tailored response [16]. This system works on a case-by-case basis. Specific buttons with pre-defined terms are present in the ChatBot that the user can click to continue the conversation, making it easy for users to express themselves. The dataset consists of tags, patterns, responses, and context, available in JSON format, which are then classified into separate groups based on the category they fall into [17]. For example, in one of the use cases, it is used to classify whether the user is feeling depressed due to relationship problems or due to work-related problems; a sketch of such an intent entry and a simple matcher is given below.
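To illustrate the tag/pattern/response structure described above, here is a small, hypothetical intents dictionary (mirroring the JSON format) together with a naive keyword matcher; the tags, patterns, and responses are invented for illustration, and a real system would use a trained NLP intent classifier instead.

```python
# Hypothetical intents in the tag/patterns/responses format described above.
intents = {
    "work_stress": {
        "patterns": ["stressed at work", "my boss", "deadlines"],
        "responses": ["Work pressure can be heavy. Try a short break."],
    },
    "relationship": {
        "patterns": ["my partner", "break up", "lonely"],
        "responses": ["Relationship troubles are hard. Talking helps."],
    },
}

def classify_intent(message: str) -> str:
    """Return the tag whose patterns share the most words with the
    message (a naive stand-in for a trained intent classifier)."""
    words = set(message.lower().split())
    def score(tag):
        return sum(len(set(p.split()) & words)
                   for p in intents[tag]["patterns"])
    return max(intents, key=score)

tag = classify_intent("I feel so stressed at work lately")
print(tag, "->", intents[tag]["responses"][0])
```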

4.4 Datasets

The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7356 audio clips across different emotional classes. The database has the voices of 24 professional actors, of which 12 are female and 12 are male, vocalizing sentiments in a neutral North American accent. The TESS dataset is a collection of audio clips of two women expressing several emotions. Surrey Audio-Visual Expressed Emotion (SAVEE) consists of audio clips from four male actors in different


classes of emotions, 480 British English utterances overall. CREMA-D is a dataset constituting 7442 original audio clips from 91 different actors. The sentences are rendered with different emotions and emotional levels.

5 Implementation and Result

After carefully evaluating the efficiency and accuracy of the different models and applying the data pre-processing techniques necessary to clean the data, the best models are used in the final system. The model successfully distinguishes the different emotions based on the speech input and the analyzed text. The project helps solve problems arising in domains where actions need to be taken based on the emotions shown by a group of people. The voice and text are given as input to two different models, each of which gives a score for which emotion the input describes. This output is a vector of length equal to the number of classes, with a probability between 0 and 1, where each value describes how strongly the input tends toward a particular emotion. The mean of the two models' outputs is calculated, and the class of emotion with the highest probability is selected. Based on the output, the application then replies with sophisticated responses that are closely related to the text input; a sketch of this fusion step is given below.
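The late-fusion step described above reduces to averaging the two per-class probability vectors and taking the arg-max; the class names and example scores in this short sketch are hypothetical.

```python
import numpy as np

classes = ["angry", "happy", "sad", "fear", "neutral"]   # assumed label set

# Hypothetical per-class probabilities from the two models.
speech_scores = np.array([0.50, 0.10, 0.15, 0.15, 0.10])  # speech model
text_scores   = np.array([0.30, 0.05, 0.25, 0.10, 0.30])  # text sentiment model

# Mean of the two probability vectors; the largest entry wins.
fused = (speech_scores + text_scores) / 2.0
print(classes[int(np.argmax(fused))])   # -> "angry"
```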

5.1 Results

The training accuracy curve for the facial emotion recognition model is displayed in Fig. 3. This model reached a training accuracy of 96%, with the validation accuracy approaching 60%. The model is also able to effectively detect emotions in the voice using a 2D CNN.

Fig. 3 Accuracy curve for face emotion model


The confusion matrix in Fig. 4 depicts the accuracy with which the model detects each class of emotion on a test set. As shown, the model detects the emotion "angry" with the best precision, whereas it has difficulty distinguishing between "sad" and "fear." Figure 5 shows the ability of the intent-based ChatBot to detect the query of the user and predict the intent of the question in order to give answers from the pre-existing database. For example, when the user displays signs of anxiety, the ChatBot gives appropriate answers.

Fig. 4 Confusion matrix for speech recognition model

Fig. 5 Sample test case for Intent-based emotion detection


6 Conclusion

Recently, recognizing and analyzing emotions in conversations has gained considerable popularity in NLP research and technologies. This paper has summarized the importance of using emotion recognition technology while highlighting many research possibilities for the future. Overall, it is shown that an emotion recognition model can produce better results than a normal chit-chat dialogue system, and it works better for task-oriented or speaker-specific chats too. Also, tracking emotions during long speeches can become monotonous for a human, whereas an emotion recognition system can do the task better and give insightful results even for data spanning a longer duration of time. The research done in this paper will be beneficial for the advancement of ChatBots and other dialogue systems by using specialized algorithms to detect the various kinds of emotions present in the underlying conversation.

References 1. Poria S et al (2019) Emotion recognition in conversation: research challenges, datasets, and recent advances. IEEE Access 7(2019):100943–100953 2. Yang B, Luggar M (2010) Emotion recognition from speech signals using new harmony features. 90(5), May 2010. ISSN 0165-1684 3. Chavan VM, Gohokar VV (2012) Speech emotion recognition by using SVM classifier. Int J Eng Adv Technol (IJEAT) 1(5), June 2012. ISSN: 2249-8958 4. Bisio I, Delfino A, Lavagetto F, Marchese M, Scirrone A (2013) Gender driven speech recognition through speech signals for ambient intelligent applications. IEEE 1(2), December 2013 5. Sirisha Devi J, Srinivas Y, Nandyala SP (2014) Automatic speech emotion and speaker recognition based on hybrid GMM and FFBNN. Int J Comput Sci Appl (IJCSA) 4(1), February 2014 6. Yu L, Zhou K, Huang Y (2014) A comparative study on support vector machines classifiers for emotional speech recognition. Immune Comput (IC) 2(1), March 2014 7. Drume D, Jalal AS (2012) A multi-level classification approach for facial emotion recognition. In: IEEE international conference on computational intelligence and computing research 8. Banu SM et al (2012) A novel approach for face expressions recognition. In: IEEE 10th jubilee international symposium on intelligent systems and informatics 9. Bharate VD et al (2016) Human emotions recognition using adaptive sublayer compensation and various feature extraction mechanism. In: IEEE WiSPNET, 2016 10. Nwe TL, Foo SW, De Silva LC (2003) Speech emotion recognition using hidden Markov models. Speech Commun 41(4):603–623 11. Wu S, Falk TH, Chan W-Y (2011) Automatic speech emotion recognition using modulation spectral features. Speech Commun 53(5):768–785 12. Wang K et al (2015) Speech emotion recognition using Fourier parameters. IEEE Trans Affect Comput 6(1):69–75 13. El Ayadi M, Kamel MS, Karray F (2011) Survey on speech emotion recognition: features, classification schemes, and databases. Patt Recogn 44(3):572–587 14. Huang Z et al (2014) Speech emotion recognition using CNN. In: Proceedings of the 22nd ACM international conference on multimedia 15. De Silva LC, Ng PC (2000) Bimodal emotion recognition. In: Proceedings fourth IEEE international conference on automatic face and gesture recognition (Cat. No. PR00580). IEEE


16. Song M et al (2004) Audio-visual based emotion recognition-a new approach. In: Proceedings of the 2004 IEEE computer society conference on computer vision and pattern recognition, CVPR 2004, vol 2. IEEE 17. Ding W et al (2016) Audio and face video emotion recognition in the wild using deep neural networks and small datasets. In: Proceedings of the 18th ACM international conference on multimodal interaction

Analysis and Forecasting of Crude Oil Price Based on Univariate and Multivariate Time Series Approaches Anna Thomas and Nimitha John

Abstract This paper discusses the notion of multivariate and univariate analysis for the prediction of crude oil prices in India. The study also looks at the long-term relationship between crude oil prices and the prices of petroleum products such as diesel, gasoline, and natural gas in India. Both univariate and multivariate time series analyses are used to predict the relationship between crude oil prices and other petroleum products. The Johansen cointegration test, the Engle–Granger test, the vector error correction (VEC) model, and the vector autoregressive (VAR) model are used in this study to assess the long- and short-run dynamics between crude oil prices and other petroleum products. Crude oil prices have also been modeled with univariate time series models, namely the autoregressive integrated moving average (ARIMA) model, Holt exponential smoothing, and generalized autoregressive conditional heteroskedasticity (GARCH). The cointegration test indicated that diesel prices and crude oil prices have a long-run link. The Granger causality test revealed a bidirectional relationship between the price of diesel and the price of gasoline, as well as a unidirectional association between the price of diesel and the price of crude oil. Based on in-sample forecasts, accuracy metrics such as root mean square logarithmic error (RMSLE), mean absolute percentage error (MAPE), and mean absolute square error (MASE) were derived, and it was found that the VECM and ARIMA models can efficiently predict crude oil prices. Keywords ARIMA · GARCH · Granger causality · Petroleum products · Cointegration · VAR · VECM

A. Thomas · N. John (B) CHRIST (Deemed to be University), Bengaluru, India e-mail: [email protected] A. Thomas e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_12


1 Introduction

Crude oil is a vital source of energy and a valuable commodity in the global economy. It is a natural resource that is extracted from the ground and refined into commodities like gasoline, jet fuel, and other petroleum products. Many economists consider crude oil the most important commodity on Earth, because it is currently the primary source of energy production. Brew et al. [1] investigated the link between crude oil prices and petroleum products such as gasoline, gas oil, residual fuel oil (RFO), and premix fuel in Ghana. Aduda et al. [2] examined multivariate cointegrating relationships and Granger causality between crude oil and distillate fuel prices. Minimol [3] evaluated the association between spot and future crude oil prices, which aids in estimating crude oil prices using a cointegration approach. Modak and Mukherjee [4] examined the effects of oil price fluctuations on the growth of the Indian economy, using time series data from 2000 to 2014. Musakwa and Odhiambo [5] examined the causal relationship between remittances and poverty in South Africa using a cointegration approach. Though all the above papers examined the cointegration relationship between the crude oil price and many other variables, the prediction of crude oil prices through other micro-economic variables has not been adequately studied in the literature. Since data availability is no longer a constraint, instead of analyzing one single variable, it is more convenient and more realistic to analyze several variables simultaneously. Taking this into account, the study seeks to determine whether there is a cointegrating relation between crude oil prices and petroleum product prices, as well as to forecast crude oil prices for the next six months using the VAR and VECM approaches, along with the ARIMA, Holt's exponential smoothing, and GARCH models. If two time series are integrated at the same level and their linear relationship is stationary, then the series are cointegrated [6]. To examine the long-run relationship between economic variables, the two predominant methodologies are the Engle–Granger test and the Johansen test procedure. In the multivariate analysis, VAR and VECM models are built to predict the future values of crude oil prices, diesel prices, gasoline prices, and natural gas prices. Univariate analysis of crude oil prices has also been carried out using techniques such as ARIMA, Holt's exponential smoothing, and GARCH to predict future values. Fei and Bai [7] investigated autocovariance non-stationary time series on a time series family. Cavicchioli [8] studies mixture GARCH models and suggests a new type of EM iteration algorithm for model parameter estimation. Kelly et al. [9] investigated an ARIMA model for examining the gas exchange threshold. In [10], the NYSE industrial index, the NYSE utility index, and the NASDAQ industrial index are used to examine their long-term relationship. The remaining part of the paper is laid out as follows. Section 2 gives the data description, Sect. 3 illustrates the methodology used in the study, Sect. 4 presents the data analysis, Sect. 5 deals with the results and discussion obtained from the study, and Sect. 6 presents the conclusion of the study.


2 Data Description

The data set consists of 180 monthly observations of crude oil, diesel, gasoline, and natural gas prices collected from 2006 to 2021. The log-transformed variables have been used in the study. The data sets for gasoline and diesel prices are taken from the Energy Information Administration, and the data sets for crude oil and natural gas prices were obtained from the World Bank's website.

3 Methodology

The main goal of this research is to figure out how crude oil prices and other petroleum product prices are related in the short and long run. Although the plot appears non-stationary, it is important to test whether the variables are stationary by using a suitable statistical test. Dickey and Fuller [11] developed a unit root test, called the augmented Dickey–Fuller (ADF) test, to examine whether the data set under consideration is unit root non-stationary. The null (H0) and alternative (H1) hypotheses for the ADF test are:

H0: The data is unit root non-stationary.
H1: The data is stationary.

The stationarity of the series is examined using the ADF test. After confirming the stationarity of all the variables at the same level of integration, we tested the variables for cointegration using the Johansen and Engle–Granger tests; an overview of both is given below. The trace test and the maximum eigenvalue test are two likelihood ratio tests introduced by Johansen [12]. We must first verify that all the variables are non-stationary of the same order, I(d), before applying the cointegration test procedures. The number of cointegrating relationships between the dependent and independent variables can be determined using Johansen's cointegration test. The Engle–Granger method is a two-step estimation procedure to determine the long-run cointegration among the economic variables. The order of integration is determined first, and then the error correction model is estimated. If the variables are found to be non-stationary of the same order, the residuals from the static regression must be used to determine whether they are cointegrated; rejecting non-stationarity of the residuals indicates that the variables are cointegrated. Once we have identified the cointegration relationship between the variables, we can perform the Granger causality test, which examines whether the variables have a unidirectional or bidirectional causality. Once the causal relationship among the variables is obtained, we fit VAR and VECM models to predict the future values of the economic variables using a multivariate approach. A VAR is a technique for jointly modeling more than one stationary time series. The idea is that each variable can be expressed as a linear function of its own and the


other variables' past lags. A VAR(p) model takes the following form:

$$Z_t = \phi_0 + \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + \cdots + \phi_p Z_{t-p} + a_t \qquad (1)$$

where Z_t is a p × 1 random vector (the multivariate time series), φ0 is a p × 1 vector of constant terms, each φi is a p × p autoregressive coefficient matrix, a_t is a p × 1 white noise process, and L denotes the lag operator, with L^j Z_t = Z_{t−j}. For variables with stationary differences, the VECM is a modified version of the VAR; the VECM can also take into account any cointegrating relationships between the variables. In the final stage, we also performed univariate analysis to predict the future values of crude oil prices using ARIMA, Holt's exponential smoothing, and the GARCH model. We construct forecasts for the variable of interest using an ARIMA model, which is a linear combination of past values of its own series. The general form of ARIMA(p, d, q) is

$$z_t = c + \phi_1 z_{t-1} + \cdots + \phi_p z_{t-p} + \theta_1 \eta_{t-1} + \cdots + \theta_q \eta_{t-q} + \eta_t \qquad (2)$$

where p and q represent the orders of the AR and MA parts, respectively, d represents the order of the non-stationary component, φ1, φ2, φ3, … are the AR parameters, and θ1, θ2, θ3, … are the MA parameters. Holt's exponential smoothing can be used for time series data that contain only a trend component. It is expressed as

$$Z_t = m_t + e_t \qquad (3)$$

where m_t is the trend component and e_t is the error component. While examining these models, it was noticed that there is volatility in the prices of crude oil, so it is also important to assess the price series with the volatility models that exist in the literature. Highly useful models for describing volatile data are the ARCH and GARCH models. A GARCH(p, q) model takes the following form:

$$z_t = \sqrt{h_t}\,\eta_t, \qquad h_t = \alpha_0 + \sum_{i=1}^{p} \alpha_i z_{t-i}^2 + \sum_{j=1}^{q} \beta_j h_{t-j} \qquad (4)$$

where the η_t are iid random variables, α0 > 0, αi ≥ 0, βj ≥ 0, and $\sum_{i=1}^{\max(p,q)} (\alpha_i + \beta_i) < 1$.
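As a concrete illustration of fitting such a volatility model, the sketch below estimates a GARCH(1,1) on the log-returns of a price series using the Python `arch` package; the file name, column name, and estimation settings are assumptions, and the paper does not specify its estimation software.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Assumed: a CSV with a monthly crude oil price column (hypothetical file).
prices = pd.read_csv("crude_oil.csv")["price"]
returns = 100 * np.log(prices).diff().dropna()   # log-returns in percent

# GARCH(1,1): h_t = alpha0 + alpha1 * z_{t-1}^2 + beta1 * h_{t-1}
model = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant")
result = model.fit(disp="off")
print(result.summary())
```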


After fitting the above-mentioned models, we analyzed the residual series to verify the model assumptions. The residuals are defined as

$$e_t = z_t - \hat{z}_t \qquad (5)$$

where z_t is the observed value and ẑ_t is the fitted value. The above-discussed models were built, and in-sample forecasts and accuracy measures based on those models were obtained. The accuracy measures used are RMSLE, MASE, and MAPE, computed as

$$\text{RMSLE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n} \big(\log(D_t + 1) - \log(F_t + 1)\big)^2} \qquad (6)$$

$$\text{MASE} = \frac{\sum |D_t - F_t|^2}{n - 1} \qquad (7)$$

$$\text{MAPE} = \frac{100}{n} \sum \left|\frac{e_t}{D_t}\right| \qquad (8)$$

where D_t is the observed value for time period t, F_t is the predicted value for time period t, n is the specified number of time periods, and e_t = D_t − F_t is the forecast error. Based on the accuracy measures, the best models for the multivariate and univariate analyses were identified, and the prediction of crude oil prices for the next six months was carried out. The first part of the research deals with multivariate analysis, and the second part deals with univariate analysis.
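The following is a small sketch of the three accuracy measures exactly as printed in Eqs. (6)–(8) (note that (7) is the paper's printed formula for MASE, which differs from the usual scaled-error definition); NumPy is assumed.

```python
import numpy as np

def rmsle(D, F):
    """Root mean square logarithmic error, Eq. (6)."""
    D, F = np.asarray(D, float), np.asarray(F, float)
    return np.sqrt(np.mean((np.log(D + 1) - np.log(F + 1)) ** 2))

def mase(D, F):
    """Accuracy measure as printed in Eq. (7)."""
    D, F = np.asarray(D, float), np.asarray(F, float)
    return np.sum(np.abs(D - F) ** 2) / (len(D) - 1)

def mape(D, F):
    """Mean absolute percentage error, Eq. (8)."""
    D, F = np.asarray(D, float), np.asarray(F, float)
    return 100 * np.mean(np.abs((D - F) / D))

# Example with the actual vs. VECM-predicted crude oil prices of Table 4.
actual    = [4646.36, 4684.64, 4879.21]
predicted = [4503.103, 4348.858, 4132.594]
print(rmsle(actual, predicted), mape(actual, predicted))
```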

4 Analysis

4.1 Multivariate Approach

Figure 1 presents the time series plot of the log-transformed variables, and it reveals that all of the variables are non-stationary in nature. We also performed an ADF test to identify the order of non-stationarity. The p values for the four series are 0.2846, 0.261, 0.2769, and 0.1481, indicating that none of the variables is stationary. The ADF test was then performed on the first-differenced series, and they were all found to be stationary. This lays the groundwork for determining whether the series are cointegrated.

Engle–Granger Method

To use the Engle–Granger technique, we first regress each of the economic variables against the others and then evaluate the model fit; a sketch of the two-step procedure follows.
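A minimal sketch of the two-step Engle–Granger procedure with statsmodels — regress one log-price on the others, then run an ADF test on the residuals; the file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Assumed: log-transformed monthly prices in a DataFrame (hypothetical file).
df = pd.read_csv("prices.csv")
y = df["crude_oil"]
X = sm.add_constant(df[["diesel", "gasoline", "natural_gas"]])

# Step 1: static cointegrating regression.
ols = sm.OLS(y, X).fit()
print(ols.rsquared)                     # > 0.95 in the study

# Step 2: ADF test on the residuals; a small p value (stationary
# residuals) indicates that the series are cointegrated.
adf_stat, p_value, *_ = adfuller(ols.resid)
print(p_value)
```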


Fig. 1 Time series plot of log transformed data of crude oil price, diesel price, gasoline price, and natural gas price

The p values obtained from all of the regression equations are less than 0.05, indicating that the regressions are statistically significant at the 5% level, and the R-squared value obtained in all cases is greater than 0.95, indicating that the independent variables explain at least 95% of the variation in the response variable. The residuals from the fitted regressions are then tested for stationarity to examine whether the variables form a cointegration relationship. So, we perform an ADF test on the residuals of each regression equation. The p values obtained for the ADF tests are 0.04761, 0.0187, 0.01, and 0.0576, respectively, which implies that the residuals are stationary. As a result, we can deduce that the variables are cointegrated. In the second step, we estimate the error correction model for the cointegrated series. The estimated ECMs for the crude oil price, diesel price, gasoline price, and natural gas price series are given by

$$\Delta\,\text{Crude oil price} = -0.0003296 + 0.5893654\,\Delta\,\text{Diesel price} + 0.4681719\,\Delta\,\text{Gasoline price} - 0.0217576\,\Delta\,\text{Natural gas price} - 0.1431769\,\hat{a}_{1,t-1} \qquad (9)$$

$$\Delta\,\text{Diesel price} = 0.0006684 + 0.7567834\,\Delta\,\text{Crude oil price} + 0.0010921\,\Delta\,\text{Gasoline price} + 0.0700779\,\Delta\,\text{Natural gas price} - 0.1066643\,\hat{a}_{2,t-1} \qquad (10)$$

$$\Delta\,\text{Gasoline price} = 0.0005254 + 0.9735209\,\Delta\,\text{Crude oil price} - 0.0439616\,\Delta\,\text{Diesel price} - 0.0214450\,\Delta\,\text{Natural gas price} - 0.2152348\,\hat{a}_{3,t-1} \qquad (11)$$

$$\Delta\,\text{Natural gas price} = -0.002261 - 0.404098\,\Delta\,\text{Crude oil price} + 0.937870\,\Delta\,\text{Diesel price} - 0.136326\,\Delta\,\text{Gasoline price} - 0.126800\,\hat{a}_{4,t-1} \qquad (12)$$

From Fig. 2, it is evident that the estimated linear combinations based on all four series are stationary, which confirms that the variables are cointegrated.

Johansen's Procedure

The null and alternative hypotheses for Johansen's test are given below.

H0: There are no cointegrating vectors.
H1: There are cointegrating vectors.

From Tables 1 and 2, it can be seen that, for r = 0, the test statistic is greater than the critical value, so we reject the null hypothesis. At r ≤ 3, we do not reject the null hypothesis, which implies that there exist at most three cointegrating vectors. So, both methods confirm the existence of a long-run cointegrating relationship between the economic variables; a sketch of the test is given below.
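Johansen's trace and maximum eigenvalue statistics, as reported in Tables 1 and 2 below, can be computed with statsmodels as sketched here; the file, column names, and lag settings are assumptions.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Assumed: log-transformed prices in a DataFrame with four columns.
df = pd.read_csv("prices.csv")[["crude_oil", "diesel",
                                "gasoline", "natural_gas"]]

# det_order=0: constant term; one lagged difference (assumed settings).
res = coint_johansen(df, det_order=0, k_ar_diff=1)
print(res.lr1)   # trace statistics, compared row-wise with res.cvt
print(res.lr2)   # maximum eigenvalue statistics, compared with res.cvm
```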

Fig. 2 Error correction model plots


Table 1 Summary of trace test

  Hypothesis   Test statistic   10pct   5pct    1pct    Decision
  r ≤ 3        4.89             7.52    9.24    12.97   Accept H0
  r ≤ 2        13.36            17.85   19.96   24.60   Accept H0
  r ≤ 1        32.13            32.00   34.91   41.07   Accept H0
  r = 0        89.31            49.65   53.12   60.16   Reject H0

Table 2 Summary of maximum eigenvalue test

  Hypothesis   Test statistic   10pct   5pct    1pct    Decision
  r ≤ 3        4.89             7.52    9.24    12.97   Accept H0
  r ≤ 2        8.48             13.75   15.67   20.20   Accept H0
  r ≤ 1        18.76            19.77   22.00   26.81   Accept H0
  r = 0        57.18            25.56   28.14   33.24   Reject H0

Granger Causality Test

Using the cointegration approach, we have found that there exists a causal relationship between the economic variables. The direction of the causal relationship can now be examined using the Granger causality test, which is used to see whether one time series can be used to forecast another. The null and alternative hypotheses are given below.

H0: The series X does not cause the series Y.
H1: The series X causes the series Y.

From Table 3, it is observed that diesel prices cause crude oil prices and gasoline prices, and gasoline prices cause diesel prices. That is, diesel prices and gasoline prices have bidirectional causality.

VAR and VEC Models

VAR is a useful multivariate time series approach for capturing the relationship between multiple time series, and hence it is useful in predicting one series based on the others. We performed VAR to predict the future values of each variable with respect to the other variables. In addition, the forecast error variance decomposition (FEVD) shows how important a shock is in explaining the fluctuations in the model's variables. From Fig. 3, it can be seen that the in-sample forecasts for crude oil and diesel prices have a bidirectional influence. Gasoline prices have an influence on crude oil prices, and natural gas prices have an influence on both. In the VAR approach, all the variables are treated symmetrically by incorporating an equation for each variable that explains its own past values as well as the past values of all the other variables in the model. The optimal lag length is found, and the VAR model is then diagnosed. The model diagnosis of the VAR model is performed by examining the residual series, and all the assumptions are satisfied; a sketch of these steps is given below.
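The VAR/VECM estimation and the pairwise Granger causality tests reported in Table 3 can be reproduced in outline with statsmodels as follows; the file, column names, lag order, and cointegrating rank are illustrative assumptions.

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Assumed: log-transformed prices in a DataFrame with four columns.
df = pd.read_csv("prices.csv")[["crude_oil", "diesel",
                                "gasoline", "natural_gas"]]

# Does diesel Granger-cause crude oil? (second column tested against first)
grangercausalitytests(df[["crude_oil", "diesel"]].diff().dropna(), maxlag=2)

# VAR on the differenced (stationary) series; lag order chosen by AIC.
var_res = VAR(df.diff().dropna()).fit(maxlags=12, ic="aic")

# VECM on the levels; one cointegrating relation and one lagged
# difference are assumed here, not taken from the paper.
vecm_res = VECM(df, k_ar_diff=1, coint_rank=1).fit()
print(vecm_res.predict(steps=6))    # six-step-ahead forecast
```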

Table 3 Results of Granger causality

  Variables                               Granger causality test statistic   Decision
  Crude oil price–diesel price            0.3811                             Accept H0
  Diesel price–crude oil price            0.001011                           Reject H0
  Crude oil price–gasoline price          0.4099                             Accept H0
  Gasoline price–crude oil price          0.1864                             Accept H0
  Crude oil price–natural gas price       0.7254                             Accept H0
  Natural gas price–crude oil price       0.6858                             Accept H0
  Diesel price–gasoline price             0.001611                           Reject H0
  Gasoline price–diesel price             0.03476                            Reject H0
  Diesel price–natural gas price          0.4249                             Accept H0
  Natural gas price–diesel price          0.7021                             Accept H0
  Gasoline price–natural gas price        0.4931                             Accept H0
  Natural gas price–gasoline price        0.718                              Accept H0

The normality assumption is examined using the Kolmogorov–Smirnov test, and the p value for the residuals is obtained as 0.9088, which confirms that the residual series are normally distributed. Figure 4 shows the histograms of the residuals based on the VAR and VECM models, and it is evident that the residual series are normally distributed. After the VAR and VECM models are built, the in-sample forecasts for three months and their accuracy measures are computed. Table 4 gives the in-sample forecasted values of the multivariate analysis, and Table 5 shows the accuracy measures of the in-sample forecasted values of the VAR and VECM models. From Table 5, the VECM model is the best model, as it has accuracy measures closer to 0.


Fig. 3 Plot of forecast error variance decomposition

Fig. 4 Normality plot of VAR and VECM model

Table 4 In-sample forecasted values of VAR and VECM model

  Variables         Actual values   Predicted (VECM)   Actual differenced value   Predicted (VAR)
  Crude oil price   4646.36         4503.103           246.95                     95.82534
                    4684.64         4348.858           38.28                      36.11893
                    4879.21         4132.594           194.57                     120.28569
  Diesel price      134.89          131.2794           4.78                       1.2162878
                    138.49          127.8296           3.60                       1.0338173
                    148.73          123.0307           10.24                      0.1468237

Table 5 Accuracy measures of in-sample forecasted values of VAR and VECM model

  Variables         Accuracy measure   VECM         VAR
  Crude oil price   RMSLE              0.1065822    1.645131
                    MAPE               0.08517639   0.535092
                    MASE               0.50914      0.6129805
  Diesel price      RMSLE              0.11903      1.504955
                    MAPE               0.0921782    0.8146789
                    MASE               1.25352      1.383041

4.2 Univariate Approach

ARIMA Model

For crude oil prices, the model obtained is ARIMA(3, 1, 0). After the model is built, residual analysis is done. The Kolmogorov–Smirnov test was used to test the normality assumption. Since the p value obtained for the residual series is 0.9551, the residuals are normally distributed. From Fig. 5, it is observed that the residuals are uncorrelated, and the histogram confirms the normality of the residual series. Since all the assumptions on the residual series are satisfied, we can use this model for prediction purposes; the accuracy measures are reported in Table 6, and a fitting sketch is given below.
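A minimal statsmodels sketch of fitting the ARIMA(3, 1, 0) model named above; the file and column names are assumptions.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Assumed: monthly crude oil price series (hypothetical file/column).
prices = pd.read_csv("crude_oil.csv")["price"]

# ARIMA(3, 1, 0), the order reported above.
fit = ARIMA(prices, order=(3, 1, 0)).fit()
print(fit.summary())
print(fit.forecast(steps=6))   # six-month-ahead forecast, as in Table 8
```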

Fig. 5 Time series plot, ACF plot and normal histogram of residuals using ARIMA


Table 6 In-sample forecasts of the ARIMA, Holt's exponential smoothing, and GARCH models

  Variable          Actual values   ARIMA      Holt winters   GARCH model
  Crude oil price   4646.36         4531.360   4435.782       4522
                    4684.64         4509.545   4472.155       4467
                    4879.21         4428.226   4508.527       4415

GARCH

In order to examine the presence of a GARCH effect, we tested using the ARCH LM test. The null and alternative hypotheses are given below.

H0: There is no heteroskedasticity present in the data.
H1: There is heteroskedasticity present in the data.

The null hypothesis is rejected because the p value is less than 0.05, implying that there is an ARCH effect in the crude oil price. Normality has been examined using the Kolmogorov–Smirnov test, and the p value obtained is 0.9101, which implies that the residuals are normal. Figure 6 shows the time series plot, ACF plot, and normal histogram for the residual series of the crude oil price using GARCH. From Fig. 6, it is observed that the residuals are uncorrelated, and the histogram shows that the residuals are normally distributed.

Fig. 6 Time series plot, ACF plot, and normal histogram of residuals using GARCH


Holt's Exponential Smoothing

Since the crude oil price contains only a trend component, we also performed Holt's exponential smoothing to predict its future behavior. Normality has been examined using the Kolmogorov–Smirnov test, and the p value obtained is 0.8409, which implies that the residuals are normal. Figure 7 shows the residual diagnostics for the crude oil price using Holt's exponential smoothing. From the plot, it is observed that the residuals are uncorrelated and follow a normal distribution. Tables 6 and 7 show the in-sample forecasts of the ARIMA, Holt's exponential smoothing, and GARCH models, and the accuracy measures of these models, respectively. Table 8 shows the predicted values of crude oil prices from June 2021 to November 2021. Based on the accuracy values obtained, the ARIMA model has the smallest accuracy measures compared with the Holt exponential and GARCH models, so the ARIMA model is used for forecasting future values of crude oil prices. A sketch of the Holt fit is given below.
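A minimal statsmodels sketch of Holt's (double) exponential smoothing for the trend-only series described above; the file and column names are assumptions.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import Holt

# Assumed: the same monthly crude oil price series as above.
prices = pd.read_csv("crude_oil.csv")["price"]

# Holt's exponential smoothing; the smoothing parameters are
# estimated automatically by the fit.
fit = Holt(prices).fit()
print(fit.forecast(6))   # six-month-ahead forecast
```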

Fig. 7 Time series plot, ACF plot and normal histogram of residuals using Holt’s exponential smoothing

Table 7 Accuracy measures of ARIMA, Holt's exponential smoothing, and GARCH models

  Accuracy measure (crude oil price)   ARIMA        Holt exponential   GARCH model
  RMSLE                                0.04186049   0.05928545         0.06579919
  MAPE                                 0.05151889   0.05555027         0.05612122
  MASE                                 2.121764     2.272553           2.308238


Table 8 Forecasted values of crude oil price

  Month            Forecast using ARIMA   Forecast using VECM
  June 2021        4927.100               4826.424
  July 2021        4898.754               4680.303
  August 2021      4820.115               4548.067
  September 2021   4749.022               4471.314
  October 2021     4684.751               4449.559
  November 2021    4626.68                4465.819

5 Results and Discussion

From the multivariate analysis, the best model obtained is the VECM, and from the univariate analysis, the best model obtained is the ARIMA model. Using the VECM and ARIMA models, crude oil prices are forecast for the next six months of 2021.

6 Conclusion

The study investigates the causal relationship between the prices of crude oil and other petroleum products in India. Both multivariate and univariate time series approaches were employed. We found a bidirectional causality between diesel prices and gasoline prices, as well as a unidirectional causality between diesel prices and crude oil prices. We also examined different multivariate time series models, and the best model obtained was the VECM; among the class of univariate models, the ARIMA model was found to be the best. Based on the best fitted model from each class, the out-of-sample forecast for the next six months was performed and reported.

References 1. Brew L, Ettih BK, Wiah EN (2020) Cointegration analysis of the relationship between the prices of crude oil and its petroleum products in Ghana. J Math Fin 10(4):717–727. Scientific Research Publishing 2. Aduda J, Weke P, Ngare P (2018) A co-integration analysis of the interdependencies between crude oil and distillate fuel prices. J Math Fin 8(2):478–496. Scientific Research Publishing 3. Minimol MC (2018) Relationship between spot and future prices of crude oil: a cointegration analysis. Theor Econ Lett 8(3):330–339. Scientific Research Publishing 4. Modak KC, Mukherjee P (2015) A study on impact of crude oil price fluctuation on Indian economy, vol 2. International Bulletin of Management and Economics, Unnayan 5. Musakwa MT, Odhiambo NM (2019) The causal relationship between remittance and poverty in South Africa: a multivariate approach. In: UNISA economic research working paper series, international journal of social science


6. Engle RF, Granger CWJ (1987) Cointegration and error correction: representation, estimation and testing. Econometrica 55:251–276
7. Fei WC, Bai L (2009) Time-varying parameter auto-regressive models for autocovariance nonstationary time series. Sci China Ser A Math 52:577–584
8. Cavicchioli M (2021) Statistical inference for mixture GARCH models with financial application. Comput Stat 36:2615–2642
9. Kelly GE, Thin A, Daly L, McLoughlin P (2002) Estimation of the gas exchange threshold in humans: a time series approach. Eur J Appl Physiol 87:588
10. Hughes MP, Winters DB, Rawls JS (2005) What is the source of different levels of time-series return volatility? The intraday U-shaped pattern or time-series persistence. J Econ Finance 29:300–312
11. Dickey DA, Fuller WA (1979) Distribution of the estimators for autoregressive time series with a unit root. J Am Stat Assoc 74(366a):427–431
12. Johansen S (1995) Likelihood-based inference in cointegrated vector autoregressive models. Oxford University Press, Oxford

Deep Learning-based Gender Recognition Using Fusion of Texture Features from Gait Silhouettes K. T. Thomas and K. P. Pushpalatha

Abstract The gait of a person is the manner in which he or she walks. The human gait can be considered a useful behavioral biometric that could be utilized for identifying people. Gait can also be used to identify a person's gender and age group. Recent breakthroughs in image processing and artificial intelligence have made it feasible to extract data from photographs and videos for various classification purposes. Gender can be regarded as a soft biometric that could be useful in video captured by surveillance cameras, particularly in uncontrolled environments with erratic camera placements. Gender recognition in security, particularly in surveillance systems, is becoming increasingly popular. Convolutional neural networks, popular deep learning algorithms for images, have proven to be a good mechanism for gender recognition. Still, there are drawbacks to convolutional neural network approaches, such as a very complex network model, comparatively long training times, high computational cost, slow convergence, overfitting of the network, and accuracy that may need improvement. As a result, this paper proposes a texture-based, deep learning gender recognition system. The gait energy image, created by averaging the silhouettes obtained from the portion of a video that portrays an entire gait cycle, is the most often utilized feature in gait-based categorization. In the proposed work, further texture features, such as the histogram of oriented gradients (HOG) and entropy, have been examined for gender identification. The accuracy of gender classification using whole-body, upper-body, and lower-body images is compared in this research. Combining texture features is more accurate than using each texture feature separately, according to our studies. Furthermore, full-body gait images are more accurate than partial-body gait images.

K. T. Thomas (B) · K. P. Pushpalatha School of Computer Sciences, Mahatma Gandhi University, Kottayam, India e-mail: [email protected] K. T. Thomas Christ University, Pune, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_13


Keywords Biometrics · Behavioral biometrics · Gender recognition · Gait silhouettes · Gait energy image (GEI) · Histogram of oriented gradient (HOG) · Convolutional neural network (CNN)

1 Introduction

Gait analysis is a passive behavioral biometric technique that examines a person's walking style. A person's manner of walking is referred to as their gait [1]. Walking is a locomotion technique that involves moving and lowering one foot at a time, rather than lifting both feet off the ground at the same time. Some of the most widely used image-based biometric systems are based on the front face, eye, ear, and fingerprint. Gait recognition is a relatively new technology when compared to older technologies like fingerprint or facial recognition. When compared with traditional biometric techniques, gait has certain distinct attributes. The most notable characteristic of gait, when used for biometrics, is that many other forms of biometrics require the subject's consent, whereas gait can be captured effortlessly even at a distance, without requiring the subject's cooperation. Gender classification is critical in today's society for surveillance and even as an aid in criminal investigation. Software that can classify between male and female accurately can be considered extremely beneficial. A market security camera, for example, could be quite useful in determining the gender of customers so that a suitable plan can be established. A recorded voice or a face image can be used to classify gender, which is an active and promising area of research. SexNet is a ground-breaking gender classification system based on facial features. Its gender classifier was trained with a neural network's backpropagation technique, and the system had an error rate of 8.1%. Based on this optimistic outcome, the approach suggests that a computer-assisted recognition system is viable. However, using voice and facial attributes for classifying gender when the subjects are far from the sensor has limitations, because it is hard to obtain high-quality audio speech or face images from such a distance. The most prevalent feature used as input to deep learning models has been the gait energy image. The goal of this research is to determine and analyze the contribution of texture features when used in conjunction with deep learning methods. The tests were carried out on two different datasets to assess performance. Gait energy images (GEI), histogram of oriented gradients (HOG), and gait entropy were utilized to assess performance in this study. A CNN is used to determine which component contributes most to gender identification. The following is a list of the proposed paper's sections. The related works are presented in Sect. 2. Section 3 delves into the methodology used for the proposed computer vision-based gender recognition using gait analysis. Section 4 covers the

Deep Learning-based Gender Recognition Using …

155

gender recognition experiments and findings utilizing the CNN algorithm, then by the conclusion in Sect. 5.

2 Related Works

Gait characteristics could be a promising biometric for gender recognition, and there are numerous works that use gait features to identify gender. The majority of the research was done with gait energy images, and deep learning techniques such as convolutional neural networks are also utilized. The following section provides an overview of current gender recognition techniques based on gait features.

Zhang et al. [2] give a joint CNN-based technique for a thorough examination of gait biometrics. Gait is claimed to be a behavioral biometric trait with distinct benefits, such as working at a large distance and across views without consent from the subject. The study discusses the problems of using gait as a predictor of gender and age, and the researchers investigate which regions of the human body contribute more to gender recognition when using a convolutional neural network. The experiments were conducted using the CASIA B, OU-ISIR, and USF datasets. For gender categorization, the article advocated employing a very deep convolutional neural network with a two-dimensional fully connected layer. To retain more temporal information, the authors employed raw silhouettes as input instead of the most widely used characteristic, the GEI, fusing temporal information at the image level and combining it at the final fully connected layer. The accuracies for gait identification when deep or shallow networks are combined with AveImage or AveFeature were 82.10% for the DNN with AveImage, 97.33% for the DNN with AveFeature, 79.22% for the shallow NN with AveImage, and 96.91% for the shallow NN with AveFeature.

A study and analysis of gender classification based on human gait was undertaken by Shiqi et al. [3]. Psychological experiments were carried out, revealing that humans can deduce gender from gait data and that the contributions of the various body components vary. Appearance-based gait characteristics were chosen over model-based ones because they are easier to obtain and have a lower computational cost. In order to discover the discriminative body components, the authors investigated the effects of several human body parts. The gait energy image was employed as the feature in their testing, and each GEI was split into five sections: head and hair, chest area, back, waist and buttocks, and legs. A support vector machine (SVM) was employed, with a dataset of silhouettes containing 31 males and 31 females, giving a 95.97% correct classification rate.

A study employing inertial measurement units (IMU) to predict gender was conducted by Van Hamme et al. [4]. A number of standard deep learning algorithms applied to forecast age and gender were discussed and contrasted, and the approach was evaluated using the OU-ISIR gait activity dataset. A total of 495 participants were included in the data collection, with a roughly balanced gender ratio; the participants range in age from 8 to 78 years. The participants wore a belt with three IMU sensors, one on each hip and one on the back, and each participant completed two different walking routines, one on each level.

A CNN-based solution for identifying clothing-invariant gait [5] was presented by Yeoh et al. in their publication Convolutional Neural Networks for Clothing-Invariant Gait Recognition. From gait energy snapshots, the algorithm learns to isolate the most discriminative changes in gait parameters. When tested on the challenging clothing-invariant OU-ISIR Treadmill B dataset, which is an indoor dataset, the new strategy outperforms existing common methods.

In their study [6], the researchers replaced the softmax function in VGGNet-16 with a linear SVM. Experiments show that, on the gait-based gender detection task, the linear SVM outperforms the softmax function. In the VGGNet-SVM model, the authors discovered that the FC6 and FC8 layers play a key role in feature extraction and correction. The accuracy of the VGGNet-SVM model, tested on the CASIA B dataset, was 89.62%.

Li et al. provide a thorough examination of several gait components for use in gender recognition applications [7]. The experimental photographs were obtained from the HumanID outdoor gait database of the University of South Florida (USF). The authors provide a comprehensive picture of the numerous parameters that influence the use of gait for gender detection: camera parameters such as view position, the time elapsed, carrying state, and clothing state, and kinematic elements such as walking speed, bounciness, and rhythm are all considered. It was identified that the accuracy of gender detection using gait can be affected by factors like injury, disguise, picture quality, and even illumination conditions. The averaged gait picture is segmented by the authors into seven components: the head, the arm, the trunk, the thigh, the front leg, the back leg, and the feet. For classification, a linear SVM classifier was employed.

Barra et al. published [8] a study in forensics in which the team tackled the gender classification problem by employing a geometric method based on postures inferred from human stride. The experiments employed random forest and KNN classifiers; according to their findings, random forest provided superior accuracy.

Guo et al. [9] investigated the advantages of determining a person's gender from body structure, and conducted a systematic study of some key issues in body-based gender classification (GC), such as how informative each body part is, how many body parts are necessary, what good representations of body parts are, and how precise a GC system can be on challenging, unconstrained, real-world pictures.

3 Proposed Methodology

A gait-based gender classification strategy has been proposed, which includes a feature extraction method and deep learning-based image recognition. Figure 1 depicts the architecture of the proposed system. To boost performance, preprocessing steps and feature-vector extraction from the image are performed in this work. The result is obtained by passing the feature image through a classifier. The classes in the proposed work are two: female and male. The gender of the image is predicted by the model. Gait energy, histogram of oriented gradients, and entropy of the images are the primary features explored in this paper.

Fig. 1 Proposed architecture of gender recognition using gait texture features

Gait Energy Images (GEI) The gait sequences of a full gait cycle are fitted to the binary silhouettes. The GEI is a grayscale image made up of the average of the black-and-white silhouettes derived from a clip of a whole gait cycle [10]; it can thus illustrate a full gait sequence cycle in a single image. The gait energy image can be calculated using the formula below:

$$\mathrm{GEI}(x, y) = \frac{1}{n} \sum_{t=1}^{n} S_t(x, y) \tag{1}$$

where

• $S_t(x, y)$ is the preprocessed black-and-white gait silhouette at time t in the sequence,
• n is the count of frames in a silhouette sequence's full cycle(s),
• x and y are the 2D image coordinate values,
• $\mathrm{GEI}(x, y)$ is the GEI obtained.
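As a rough illustration of Eq. (1), the following minimal NumPy sketch averages a cycle of binarized silhouettes into a GEI; the function name, frame size, and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a GEI from one full gait cycle of silhouettes.

    silhouettes: iterable of n binary (H, W) arrays S_t(x, y),
    assumed already cropped and aligned to a common frame size.
    Returns a grayscale (H, W) array: the pixel-wise mean of Eq. (1).
    """
    frames = np.stack([np.asarray(s, dtype=np.float32) for s in silhouettes])
    return frames.mean(axis=0)  # (1/n) * sum over t of S_t(x, y)

# Example with synthetic data: 30 random binary frames of size 128x88.
cycle = np.random.randint(0, 2, size=(30, 128, 88))
gei = gait_energy_image(cycle)
```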

Histogram of Oriented Gradients (HOG) HOG is a feature used for object recognition in the computer vision field. The HOG descriptor technique counts the occurrences of gradient orientations in a certain location inside an image detection window, often known as a region of interest (ROI). The GEI is divided into small connected cells, and within each cell the edge orientation or HOG direction of the pixels is calculated [11].
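The HOG descriptor described above can be extracted, for instance, with scikit-image; the cell and block sizes below are illustrative choices, not the values used in the paper.

```python
import numpy as np
from skimage.feature import hog

gei = np.random.rand(128, 88)     # stand-in for a real grayscale GEI

features, hog_image = hog(
    gei,
    orientations=9,               # gradient-orientation bins per cell
    pixels_per_cell=(8, 8),       # the small connected cells described above
    cells_per_block=(2, 2),       # blocks used for local normalisation
    visualize=True,               # hog_image can itself be fed to a CNN
)
```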


Entropy Silhouettes can be used to calculate Shannon's entropy [12]. Over the course of a gait cycle, it encodes the randomness of pixel values in the silhouette images. Shannon entropy calculates the uncertainty associated with a random variable. By treating the intensity value of the silhouettes at a fixed pixel point as a discrete random variable, we can calculate the entropy of this variable over the course of a gait cycle, as shown in the equation below:

$$E(x, y) = -\sum_{k=1}^{K} p_k(x, y) \log_2 p_k(x, y) \tag{2}$$

where x and y are the pixel coordinates and $p_k(x, y)$ is the probability that the pixel takes on the kth value.
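For binary silhouettes, the pixel intensity takes only K = 2 values, so Eq. (2) reduces to the binary Shannon entropy of the pixel-wise foreground frequency. A minimal sketch under that assumption (the function name is illustrative):

```python
import numpy as np

def gait_entropy_image(silhouettes, eps=1e-12):
    """Per-pixel Shannon entropy over a gait cycle, Eq. (2), for K = 2.

    p1 is estimated as the pixel-wise mean of the binary silhouettes;
    eps guards against log2(0) at pixels that are always 0 or always 1.
    """
    frames = np.stack([np.asarray(s, dtype=np.float32) for s in silhouettes])
    p1 = frames.mean(axis=0)            # P(pixel is foreground)
    p0 = 1.0 - p1                       # P(pixel is background)
    return -(p0 * np.log2(p0 + eps) + p1 * np.log2(p1 + eps))

entropy_img = gait_entropy_image(np.random.randint(0, 2, size=(30, 128, 88)))
```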

4 Experiments and Results

The gait images are used to perform gender recognition using convolutional neural networks. The textural features, gait energy, entropy, and HOG, are retrieved from the silhouettes for the experiment. After the textural data have been retrieved, a basic CNN is used to recognize the gender (a minimal sketch of such a pipeline is given after Figs. 2 and 3). The first experiment contrasted whole body and partial body gait images, with the partial body measured from head to hip and from hip to toe; this was accomplished using the texture features and their fusion separately. Another experiment examined the efficacy of whole body texture features on two widely used datasets.

The Dataset Gender recognition studies were conducted on two datasets, whose details are as follows.

Dataset 1: The CASIA B Gait Database The CASIA B gait dataset is a large multiview indoor gait database made up of the gait data of 124 people taken from 11 different perspectives. Three different conditions are employed in the dataset: normal walking (nm), clothing [6] (cl), and carrying (bg). There are 11 views for each individual ("0", "18", "36", "54", "72", "90", "108", "126", "144", "162", "180" degrees), with six normal walking sequences per subject [13, 14]. The image of each participant is described by his or her individual ID, as well as the participant's walking status, which might range from normal to bulky clothing to carrying a bag. The photographs are considered from left to right, starting with the frontal perspective and ending with the farthest opposite rear view of the participating human. Three distinct variations are taken into account: changing the viewing angle, changing the apparel, and changing the carrying state. Silhouettes are extracted from the video files, and the GEI pictures of a subject are created by combining the subject's silhouettes. The proposed work takes into account GEIs of 1635 females and 1750 males. Figure 2 shows some sample GEIs.

Dataset 2 The OU-ISIR gait database is a multiview large population dataset (OU-MVLP) which includes gait video and data that can be used for cross-view gait detection. The database also includes the gait energy images (GEI) developed from the silhouettes of the subjects. Gait energy images of 650 females and 656 males were used in the proposed work. Figure 3 shows some sample GEIs.

Fig. 2 Sample GEIs from CASIA B dataset (male and female) [15]

Fig. 3 Sample GEIs (male and female) from OU-ISIR gait database [15]
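As a rough sketch of the kind of basic binary-classification CNN described above (not the authors' exact architecture), a two-class Keras model could look like the following; the layer sizes and input shape are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gender_cnn(input_shape=(128, 88, 1)):
    """A basic CNN for two-class (female/male) recognition from
    feature images (GEI, HOG, or entropy); sizes are illustrative."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of one class
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with prepared arrays of feature images and labels:
# model = build_gender_cnn()
# model.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=10)
```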


Fig. 4 Sample head to hip GEIs (male and female)

Fig. 5 Sample hip to toe GEIs (male and female)

Experiment 1 In the first experiment of the work, the whole body gait images and the partial body images were considered. Partial body gait energy images were created from the full body gait energy images; Figures 4 and 5 show sample head to hip and hip to toe partial body GEIs extracted from the whole body GEIs. Python version 3 was used to carry out the experiments. Texture features are extracted from the GEIs: apart from the energy features, the histogram of oriented gradients (HOG) and entropy were also considered. Experiments were carried out to see how well full body gait images and partial body gait images performed. The main outcomes of the trial are shown in Table 1. The graph (Fig. 6) shows the performance of the gait texture features when considering full and partial gait images. The findings revealed that full body gait images contribute more to gender detection than partial body images. However, it is worth noting that the half body, particularly head to hip, can also help with gender recognition. Another takeaway from the findings is that the results achieved by combining texture features are comparable to the results obtained by considering each component separately. Figure 7 depicts a graph of the experiment's accuracy and loss.

Table 1 Performance of gait texture images using full body, head to hip, and hip to toe images

Parameters: Training images: 2030 | Validation images: 678 | Test images: 677 | No. of epochs: 10

Accuracy (%)

Image feature | Full body (Training / Validation / Testing) | Head to hip gait images (Training / Validation / Testing) | Hip to toe gait images (Training / Validation / Testing)
GEI | 96.4 / 91.1 / 96.3 | 89.9 / 95.02 / 69.8 | 80.38 / 77.96 / 31.1
GEI + HOG | 99.75 / 84.5 / 94.1 | 98.9 / 97.2 / 54.8 | 95.67 / 92.4 / 62.7
GEI + HOG + entropy | 100 / 96.7 / 98.2 | 95.7 / 95.5 / 69.3 | 96.2 / 94.3 / 59.3


Fig. 6 Graph showing the performance of gender recognition using gait images

Experiment 2 The performance of the combination of the texture features was demonstrated in the experiments using the CASIA B dataset. The same features were then tested on Dataset 2, the OU-ISIR gait database. The results are shown in Table 2, and Fig. 8 shows the performance of the texture features on the two gait datasets.

5 Conclusion

This paper presents the findings of research into the effectiveness of gait-based gender detection using three separate features (GEI, HOG, and entropy images) fed to a basic CNN. The fusion of the GEI, HOG, and entropy features was also obtained and used with the CNN. To accomplish this, the silhouettes from the CASIA B dataset were used, and three sets of feature images were developed: the first covering the entire body, the second head to hip, and the third hip to toe. As indicated by the test findings, the proposed technique of leveraging the feature obtained by merging GEI, HOG, and entropy images enhanced accuracy. The comprehensive empirical tests showed that whole body images are more useful for gender identification based on human gait. Two of the most extensively used datasets in gait research, CASIA B and OU-ISIR, were employed in the experiments. In the future, a more sophisticated convolutional neural network or a transfer learning technique can be used to improve gender detection accuracy.


[Fig. 7 panels, whole body gait images: GEI, GEI+HOG, GEI+HOG+entropy; partial body gait images: GEI (head to hip), GEI+HOG (head to hip), GEI+HOG+entropy (head to hip), GEI (hip to toe), GEI+HOG (hip to toe), GEI+HOG+entropy (hip to toe)]

Fig. 7 Training and validation accuracy of the whole body and partial gait images


Table 2 Performance of gait texture images on the CASIA B and OU-ISIR datasets

Parameters | CASIA B | OU-ISIR
Images used for training | 2030 | 980
Images used for validation | 678 | 163
Images used for testing | 677 | 163
No. of epochs | 10 | 10

Accuracy (%)

Image feature | CASIA B (Training / Validation / Testing) | OU-ISIR gait database (Training / Validation / Testing)
GEI | 96.4 / 91.1 / 96.3 | 79.6 / 81.6 / 61.5
GEI + HOG | 99.75 / 84.5 / 94.1 | 92.7 / 87.7 / 58.8
GEI + HOG + entropy | 100 / 96.7 / 98.2 | 93.9 / 82.21 / 62.5

Fig. 8 Accuracies obtained for CASIA B and OU-ISIR datasets

References 1. Kharb A, Saini V, Jain YK (2011) A review of gait cycle and its parameters. IJCEM Int J Comput Eng Manage 13, July 2011 2. Zhang Y, Huang Y, Wang L, Yu S (2019) A comprehensive study on gait biometrics using a joint CNN-based method. Pattern Recogn 93:228–236 3. Shiqi Y, Tan T, Huang K, Jia K, Xinyu W (2009) A study on gait-based gender classification. IEEE Trans Image Process 18(8):1905–1910 4. Van Hamme T, Garofalo G, Argones Rúa E, Preuveneers D, Joosen W (2019) A systematic comparison of age and gender prediction on IMU sensor-based gait traces. Sensors 19(13):2945 5. Yeoh TW, Aguirre HE, Tanaka K (2016) Clothing-invariant gait recognition using convolutional neural network. In: 2016 International symposium on intelligent signal processing and communication systems (ISPACS) 6. Liu T, Ye X, Sun B (2018) Combining convolutional neural network and support vector machine for gait-based gender recognition. In: 2018 Chinese automation congress (CAC) 7. Li X, Maybank SJ, Yan S, Tao D, Xu D (2008) Gait components and their application to gender recognition. IEEE Trans Systems, Man, Cybern, Part C (Appl Rev) 38(2):145–155


8. Barra P, Bisogni C, Nappi M, Freire-Obregón D, Castrillón-Santana M (2019) Gait analysis for gender classification in forensics. In: Communications in computer and information science, pp 180–190 9. Wu Q, Guo G (2014) Gender recognition from unconstrained and articulated human body. Sci World J 2014:1–12 10. Yao L, Kusakunniran W, Wu Q, Zhang J, Tang Z (2018) Robust CNN-based gait verification and identification using skeleton gait energy image. In: 2018 Digital image computing: techniques and applications (DICTA) 11. Monisha SJ, Sheeba GM (2018) Gait based authentication with hog feature extraction. In: 2018 Second international conference on inventive communication and computational technologies (ICICCT) 12. Thomas KT, Pushpalatha KP (2021) A comparative study of the performance of gait recognition using gait energy image and Shannon’s entropy image with CNN. In: Data science and security, pp 191–202 13. Zheng S, Zhang J, Huang K, He R, Tan T (2011) Robust view transformation model for gait recognition. In: 2011 18th IEEE international conference on image processing 14. Yu S, Tan D, Tan T (2006) A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. In: 18th International conference on pattern recognition (ICPR’06) 15. Xu C, Makihara Y, Liao R, Niitsuma H, Li X, Yagi Y, Lu J (2021) Real-time gait-based age estimation and gender classification from a single image. In: 2021 IEEE winter conference on applications of computer vision (WACV)

Forest Protection by Fire Detection, Alarming, Messaging Through IoT, Blockchain, and Digital Technologies in Thailand Chiang Mai Forest Range Siva Shankar Ramasamy, Naret Suyaroj, and Nopasit Chakpitak

Abstract The tropical forests are the important and unique resource of Chiang Mai province and Northern Thailand. The Chiang Mai forest range and Northern Thailand are filled with natural resources such as trees, rivers, animals, reptiles, mushrooms, and herbs. The forests of Northern Thailand have different types of geographical features and soils, and every forest range has a unique and diverse culture, with people whose livelihoods depend on the forests. The forests hold plenty of minerals and timber. However, fire occurrence in the forests causes high-level damage to resources such as trees, air, animals, birds, and herbs; the soil is also loosened without trees, which slowly leads to a high level of landslides in the future. Forest fires arise from causes such as human activity, trees crashing against each other, and overheating in the forest. The firefighting teams and local people face challenges in tough terrain: finding an alternate path, knowing the temperature at a given spot, and the wind range at that time. We propose a method to identify fire through IoT devices such as heat sensors, a temperature sensor (BME280), a light sensor (LDR photoresistor sensor module), smoke sensors (MQ2 module), and a Global Positioning System (GPS) position-sharing unit, which share the information through blockchain with selected units to spot the exact place. By analyzing the data, we can predict the path of the forest fire and mark fire spots; in the future, the spread of fire can be stopped or slowed by natural and artificial methods. This proposed method will be implemented in the Doi Suthep forest range in Chiang Mai province, Thailand. This process gives us the possibility to save human lives and cattle, to block roads, and so on.

Keywords Forest fire · Forest protection · IoT · Blockchain · Chiang Mai forest

1 Introduction

Forests and mountains are a retreat for nature and the human species. Forest fire has been Thailand's worst fear for decades, ravaging the northern half of the country. From April 1993 to 2020, 20 percent of the forested land was affected and destroyed. The forest fire in Thailand's northern region affects the province of Chiang Mai, located in northern Thailand, with "serious pandemonium and air pollution." At the same time, the entire world is witnessing a growing number of fire alerts across the globe: fires increased 13% in 2021 compared to 2020, with the Amazon and Australian forest fires entering the record. Forest fire is directly associated with human actions: climate change, as well as other human actions such as setting fire to one's own land or conversion to agriculture, are all major contributors. A secondary cause of the increase in forest fires is communication gaps in forest management and monitoring. Innovative techniques, understanding of the forests, and responsibility toward fires must be adopted in rural places, forest-cum-village regions, provinces, and the northern regional forest ranges. Preventing fires before they occur is even better; knowing the location of a fire spot is paramount, because that information can save lives. A forest fire study takes the types of forests, fires, geography, and technology into account to set out a clear path for addressing the key causes [1]. Any project or research starts with the background of the study, which motivates the research. Detection and prevention of forest fire are the main objectives of this research, and they are explained in this paper. Existing research on forest fire was reviewed thoroughly, and the merits and demerits of those works are clearly stated. The paper further proceeds with the anatomy of forest land, the impact of forest fire, and related policies and acts; significant activities for preventing forest fire are explained as well. The paper concludes with a summary of this research work and future implementations.

S. S. Ramasamy (B) · N. Suyaroj · N. Chakpitak International College of Digital Innovation-Chiang Mai University, Chiang Mai 50200, Thailand e-mail: [email protected]; [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_14

1.1 Background of the Study

Forest areas contain huge amounts of minerals. However, fire occurrence in the forests causes high-level damage to resources such as trees, air, animals, birds, and herbs; the soil is also loosened without trees, which slowly leads to a great level of landslides in forthcoming periods. Forest fires are caused by human activity, trees crashing against each other, and excessive heat in the forest. The firefighting crews face challenges such as the lack of a trail and not knowing the temperature in an area or the wind speed and direction at the time. We propose a method to identify fire through IoT devices such as heat sensors, a temperature sensor (BME280), a light sensor (LDR photoresistor sensor module), and smoke sensors (MQ2 module), together with a Global Positioning System (GPS) location-sharing unit, which share the information through blockchain with selected units to spot the exact place. This process gives us a chance to save human lives and cattle, to block roads, and so on.


1.2 Research Objective

• To analyze the present state and causes of forest fire in Thailand.
• To analyze the preventive activities for forest fire in the Chiang Mai forest in Thailand.
• To set up an automatic fire detection and alert system for fire spots in the Chiang Mai forest range.
• To propose a method to detect fire, raise an alarm and light, and share GPS information with the firefighting and fire-monitoring units.

1.3 Relevant Review of Forest Fire and Technology Framework

From the Thailand Fire Monitoring System [2], we obtained reports on the number of forest fire occurrences, fire-prone areas, dates, and other details; Chiang Mai province and its forests lie in a highly fire-prone zone. Gubbi et al. [3] described the techniques and uses of IoT, emphasizing how many fields are changing modern-day living. Buckley [4] discussed the possibilities and changes arising from the Internet and the devices connected to it, which gave us the idea of applying comparable processes in forest areas: the same SIM cards or connecting interfaces can be used in the forest ranges as well. Evans [5] explains the use of the Internet in real-time applications for the welfare of human lives, from technology to basic amenities. As a result, the use of the Internet and connected devices is part of the recommended strategy for saving the forests from fire. Basu et al. [6] recommended an IoT technique for recognizing fire and giving alert signals in Indian regions, using a heat sensor and an Arduino to communicate heat information to nearby units. Divya et al. [7] suggested a scheme that depends on several sensors connected by wireless communication; it requires satellite support to dispatch the sensor data to a ground fire-monitoring station, where the monitoring or server unit scrutinizes the data and detects forest fire spots based on the received readings. Sinha [8] reviewed forest fire detection using blockchain-based methodology, including the security and privacy of the data; the review provides a complete summary of blockchain methodology. She et al. [9] introduced a blockchain trust model (BTM) for malicious node detection in wireless sensor networks deployed in 3D space, using blockchain smart contracts and WSN quadrilateral measurement for the localization of malicious nodes. Rashid et al. [10] investigated fire presence using an adaptive neuro-fuzzy inference system (ANFIS); this work also advises the concerned individuals of serious fire risks in the event of an emergency or a critical fire-spread circumstance. Sharma et al. [11] assembled fire-detection equipment using an Arduino interfaced with temperature and smoke sensors and a buzzer; on a fire event, the framework automatically senses it and cautions the forest authorities by sending an alarm. Yu et al. [12] proposed a wireless sensor network technique with a neural network approach that can detect and estimate forest fire more quickly than conventional satellite-based recognition systems. Forest fire detection was also proposed by Dubey et al. [13]: a feed-forward fully connected neural network approach run on a Raspberry Pi microcontroller with sensors, in which a centralized server stores and analyzes the fire-recognition information. Herutomo et al. [14] introduced fire detection through a Zigbee WSN, which relies on the deployed sensors to cover the fire-prone forest environment. Shinde et al. [15] presented a comprehensive work justifying the need for early detection of forest fire. Khalaf et al. [16] proposed a customizable wireless fire detection design that is safer and more cost-effective than existing fire detection devices. Cui [17] presented a two-stage IoT methodology to detect forest fire: the first stage is a sensor-based risk-identification process, and the observed data are then evaluated to determine the severity of the fire spread. Reports from Malaysia [18] discussed forest fire and possible ways to detect it through drones; drones can be used, but they are expensive and have real-time issues because each drone must be controlled by an individual person, although machine learning and artificial intelligence concepts can be applied with high-end systems. Forest fire cannot be dismissed as one country's problem: the chaos caused by the forest fires in Brazil, Australia, Thailand, and India created a tremendous wave in the environment and is also raising the sea level. All of Asia is ready to stand and fight such fires, with Thailand as the pilot for automating this process. ASEAN highlighted the significance of forest fire and deforestation [19] in Asian political and business strategies in 2020. The status of forest fire and the methodologies are presented in the upcoming sections.

1.4 State of Forest Fire in Thailand

"Forest" has a general meaning in Thai regulation that is unrelated to tree cover, while "reserved forest" has a more specific meaning, though again not actually connected to tree cover. Under both the 1941 Forest Act and the 1964 National Reserved Forests Act, forest essentially refers to all open land, whether or not it is forested. Meanwhile, various government recommendations aim at growing forest cover, with a current target of 40% of the land area. Figure 1 outlines the various land classifications, or the anatomy of the forests, in Thailand [20].


Fig. 1 Anatomy of forest and understanding the land classification in Thailand

Forest fire is described by the Royal Forest Department (RFD) (1996) as a "critical risk" because, although many tree species of the deciduous forests can survive fire, seedlings and saplings are easily destroyed, and wildlife is also affected. In addition, the loss of soil productivity due to large-scale fires is seen as a hazard [21]. Thailand's forest fires occur during a dry period, peaking in frequency in February and March. Practically all of these blazes are surface fires. These surface fires occur mostly in dry dipterocarp forest, mixed dipterocarp forest, and forest plantations, and to some extent in dry evergreen forest, hill evergreen forests, and certain parts of tropical evergreen forest. Every year in Thailand, enormous areas burn during the season: in 2020, fires consumed 120,020.9 rai of land in the north of Thailand, an increase from 2019, when 102,363.39 rai were burned. With a total of 159,490 rai burned in all of Thailand, amounting to 14,312,230,685.20 baht worth of damages in 2020 so far, the fires in the North make up over 75% of all fires in the country. This clearly illustrates the scale and size of the fires the north is facing [22]. The forest fires in Northern Thailand have brought about several effects: a decrease in water quality, loss of natural vegetation, reduction of forest cover, loss of wildlife habitats and extinction of rare animals, loss of livelihood for the tribal and rural people, abnormal vegetation succession, breakdown in the agrarian ecosystem and demand-supply, global and carbon-cycle changes, loss of timber assets, loss of insects and reptiles, haze contamination, health problems, a decrease in tourism, migration of people, and border issues as well. The forest fire in Northern Thailand is creating worldwide effects, for example, destruction of biodiversity, environmental change, global warming, and ozone-layer depletion. This demonstrates the significance of the work to be done in the proposed study.

1.5 Forest Fire-Related Policy and Acts

In 1976, Thailand's Forest Fire Control Section was established under the Forest Management Division. A few years later, this section was upgraded to the Forest Fire Control Subdivision. A cabinet resolution on November 24, 1981, gave this subdivision its public mandate to undertake forest fire control activities. In 1993, the office was raised to a full Forest Fire Control Division and, finally, to a Forest Fire Control Office in 1999. Since the mid-1990s, the fire control program has been extending its coverage over fire-prone districts. In 2000, the Forest Fire Control Office was made up of four regional fire control divisions, 15 forest fire control centers, 64 provincial forest fire control stations (PFCSs), and 272 forest fire control units supervising an "intensified fire control program over 2.8 million hectares or 21% of the total forest land." There is no specific forest fire control act in Thailand, but there are four existing forest-related acts that contain sections stating punishments for setting fires. The National Forest Policy No. 18 (1985) stated that a major plan should be made, with a series of activities, for handling the deforestation problems in Thailand. Apart from the firefighting teams and local communities, forest fire monitoring, information sharing, halting the spread of the fire, and putting out the fire have all required military assistance. However, forest fires also disrupt military operations and actions: military personnel are involved in modifying military installations, equipment, and communication towers, as well as using military helicopters to monitor fire sites. The division has also prepared a series of firefighting teams, including the Fire Tigers, which a helicopter can drop into very remote regions. Nevertheless, most fire suppression in Thailand is performed by stopping the spread with hand tools and water [23].

1.6 Prevention Activities for Forest Fire

Forest fire is a hazardous and common problem in the forest. It not only wrecks the wildlife but also changes the environment in which animals and plants live. During the summer, dry leaves are deposited in the forest, and in that situation the dry leaves catch fire due to the heat. High wind speeds cause bamboo trees to rub against each other and stones to roll over each other, which can ignite a fire. Fires are caused not only by nature but also by humans. Taking precautionary measures to avoid forest fires, giving early warning to people nearby and to firefighters, and responding immediately to the information are the ways to avoid loss of human life and cattle and damage to the environment and cultural heritage.

(a) Community-based forest fire and traditional practices

In Thailand, rice is the main cultivation of the northeastern region; this rain-fed lowland rice is produced on a large scale. The farmers burn the rice residues in the open, generating air pollution. Sugarcane is highly cultivated in the lower northern, central, and northeastern regions, and in addition to the rice fields, sugarcane residues are also burned in the open [24]. Farmers usually burn their fields when the wind is low and light; it may seem to reduce the risk of fires spreading, but it does not. Farmers burn grass in piles in their orchards to increase fruit growth, but they must understand that fire in agricultural fields affects agriculture [25].

2 Preventing Fire with Technology

(a) Smoke sensor (MQ2), buzzer, and Arduino

The smoke sensors can play a vital role in detecting fire in the forest. A heat sensor and a smoke sensor (MQ2 module, Fig. 2) are used to detect the possibility of fire, and every sensor has a threshold value. An Arduino-based MQ2 smoke sensor is used in this fire detection process: the MQ2, an electronic sensor for sensing the gas concentration in the air, is linked to the Arduino module.

Fig. 2 Buzzer and MQ2 sensor with Arduino


Fig. 3 ESP8266 via Arduino

The trees in the forests release flammable methane gas, and this can be sensed by the MQ2 sensor. The observed data are sent to the monitoring unit using a NodeMCU. The sensor and module work on a 5 V DC supply, so either a solar panel or a standalone 9 V battery is used to run the setup. The sensor can detect concentrations of up to 10,000 ppm [26]. The buzzer is connected to the output of the module, and the buzzer volume can be varied; different buzzer variations can be associated with different detected gases [27]. The buzzer rings when the threshold values are exceeded, so that people in the neighboring area are informed of the impending danger, emergency response centers become active, and analysts at the base station start recording the temperature changes and the variations in smoke concentration [28]. Figure 3 displays the ESP8266-01 module, which is used to record and share sensor data and is connected to an Arduino controller. This module can also be utilized for long-range observations [29].
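As an illustration of the threshold logic described above, the following is a minimal MicroPython sketch for a NodeMCU (ESP8266); the pin wiring and threshold value are assumptions and would need calibration against clean-air readings, and the GPS and blockchain reporting used in the actual system are only indicated as a comment.

```python
# MicroPython sketch (assumed NodeMCU/ESP8266 firmware); the pin
# numbers and smoke threshold below are illustrative assumptions.
from machine import ADC, Pin
import time

mq2 = ADC(0)               # MQ2 analog output wired to A0 (reads 0-1023)
buzzer = Pin(5, Pin.OUT)   # buzzer wired to GPIO5
SMOKE_THRESHOLD = 400      # calibrate against clean-air readings

while True:
    level = mq2.read()     # proxy for gas/smoke concentration
    if level > SMOKE_THRESHOLD:
        buzzer.on()        # warn people in the neighboring area
        # here the node would also push (level, GPS position, time)
        # to the monitoring unit and the blockchain ledger
    else:
        buzzer.off()
    time.sleep(1)
```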

2.1 The System Architecture and the Flowchart

Let us consider every "sensor" as a network node in a completely connected graph (Fig. 4). We set up the nodes in a forest and connect the observatory unit as another node of the completely connected network. Each sensor sends data to its node when a reading reaches the threshold limit, and the node then transmits a signal to the base station to indicate that a fire has started to spread. The fire may destroy a node; such a node is considered a dead node, and that information is also updated to the station as well as to the blockchain.

[Fig. 4 diagram: sensor nodes (light, smoke, temperature), Node 1 to Node 6, fully connected to a base station]

Fig. 4 System architecture for forest fire detection using IoT

It is assumed that a failed node has already been destroyed by the fire. At that point, the firefighters can locate exactly where the fire was initiated and the direction in which it has started to spread. Even as fire spotting proceeds, volunteering teams may adopt other innovative devices that can support humankind and wildlife in the Chiang Mai forest ranges; the proposed model will be a pioneer for such models. The flowchart in Fig. 5 explains the intended working of the planned system.
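A minimal Python sketch of the base-station logic implied by this architecture, under the stated assumption that a node that falls silent is a dead node; the names, timeout, and transport are illustrative, not the deployed system.

```python
import time

REPORT_TIMEOUT = 60.0     # seconds without a report => dead node (assumed)
last_seen = {}            # node_id -> time of the node's last report
alerts = []               # nodes whose sensors exceeded their thresholds

def on_report(node_id, exceeded_threshold):
    """Called whenever a node transmits a reading to the base station."""
    last_seen[node_id] = time.time()
    if exceeded_threshold:
        alerts.append(node_id)   # fire has started to spread near this node

def dead_nodes():
    """Nodes assumed destroyed by the fire: silent past the timeout."""
    now = time.time()
    return [n for n, t in last_seen.items() if now - t > REPORT_TIMEOUT]

# The fire front can be estimated from the order in which nodes first
# alerted (alerts) and then went silent (dead_nodes()).
```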

2.2 The Proposed Method in the Chiang Mai Case Study

This proposed method is to be implemented in the forest range called Doi Suthep. Figure 6 illustrates the important temple named Wat Doi Suthep, which is in the Doi Suthep forest range with its many small streams. The water from the streams is collected in a nearby reservoir called Angkaew in Chiang Mai University, Thailand. Using these water sources, water reservoirs can be built around Doi Suthep to save Wat Doi Suthep, the most sacred temple in Chiang Mai, Thailand. We are able to locate the forest fire because the deployed sensors send messages to the fire monitoring unit. The sensors will use individual solar panels so that each unit is powered independently; forest officials and the PM 2.5 resolving committee advised us to avoid power cables inside the forest ranges to prevent fire accidents. The fire monitoring unit has control of the reservoirs' water flow and can open the water pipes to spray water continuously.


Fig. 5 Flowchart of the proposed work

[Flowchart steps: IoT-enabled sensors measure light, temperature, and smoke; if a sensor value exceeds the threshold limit, the node raises an alarm and lights to indicate fire spread; the sensor/node sends the information to the observatory units and the blockchain stores the information; firefighters act along a fire-free path and no public is allowed inside the forest range.]

Fig. 6 Application in the Wat Doi Suthep, Chiang Mai, Thailand

Trees, plants, or gaps shall be created in the fire path; hence, the spreading of the fire may stop, or it may at least be delayed. If a sensor is still alive, we can understand that the fire has not destroyed it and that the place is safe.

3 Conclusion

The forest ranges, the important resource of Northern Thailand, shall be saved from fire in the future by the proposed method. Chiang Mai province and the Chiang Dao tropical forests are known for their multivariate trees. These forest ranges consist of numerous natural resources, such as trees, rivers, valuable animals such as tigers and elephants, multiple living beings, and herbs. The forest range is the heart of agriculture for the people around Northern Thailand. Stopping or preventing fire, raising the alarm, and identifying and predicting the path of the fire may reduce the calamities. The proposed method identifies fire by IoT devices such as heat sensors, a temperature sensor (BME280), a light sensor (LDR photoresistor sensor module), and smoke sensors (MQ2 module), and uses the Global Positioning System (GPS) through NodeMCU devices. This information shall be saved in a blockchain for digital traces. Later, the data shall be analyzed over months and years; those analyses will help to predict the fire path, and those paths can then be secured with more natural obstacles to control or stop fire in the future. This proposed method will be implemented in the Doi Suthep forest range in Chiang Mai province, Thailand.

Acknowledgements This proposed work is funded and sanctioned by Chiang Mai University Junior Research Grant (Number 2564_069), dated April 01, 2021. The authors would like to thank Mr. Vichit Tuntisak, Advisor, PM 2.5 resolving committee, Chiang Mai Governor Office, and Dr. Kittiphan Chalom, MD, Assistant Director of Chiang Mai Public Health Office (Epidemiology), for their support and suggestions.

References 1. WWF (2020) Fires, forests, and the future: a crisis raging out of control. Worldwide Fund for Nature. www.panda.org/forests 2. http://fire.gistda.or.th/ 3. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of Things (IoT): a vision architectural elements and future directions. Future Gener Comput Syst 29(7):1645–1660 4. Buckley J (2006) From RFID to the Internet of Things pervasive networked systems, Conference Centre Albert Borschette (CCAB) Brussels Belgium, May. 2006, [online] Available: ftp://ftp.cordis.europa.eu/pub/ist/docs/ka4/au_conf670306_buckley_en.pdf 5. Evans D (2011) The Internet of things: How the next evolution of the Internet is changing everything, Cisco IBSG San Francisco CA USA, Apr. 2011, [online] Available: http://www. cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf 6. Basu MT, Karthik R, Mahitha J, Reddy VL (2018) IoT based forest fire detection system. Int J Eng Technol 7(2.7):124–126 7. Divya A, Kavithanjali T, Dharshini P (2019) IoT enabled forest fire detection and early warning system. In: 2019 IEEE International conference on system, computation, automation and networking (ICSCAN), Pondicherry, India, pp 1–5. https://doi.org/10.1109/ICSCAN.2019.887 8808 8. Sinha D (2020, Feb) Authentication and privacy preservation in IoT based forest fire detection by using blockchain–a review. In: 4th International conference on Internet of Things and connected technologies (ICIoTCT), 2019: Internet of Things and connected technologies, vol 1122, p 133. Springer Nature 9. She W, Liu Q, Tian Z, Chen J-S, Wang B, Liu W (2019) Blockchain trust model for malicious node detection in wireless sensor networks. IEEE Access 7:38947–38956. https://doi.org/10. 1109/ACCESS.2019.290281


10. Rashid MM, Rashid MM, Sarwar F, Ghosh D (2014) An adaptive neuro-fuzzy inference system based algorithm for long term demand forecasting of natural gas consumption. In: Fourth international conference on industrial engineering and operations management, Bali, Indonesia 11. Sharma K, Anand D, Sabharwal M, Tiwari PK, Cheikhrouhou O, Frikha T (2021) A disaster management framework using Internet of Things-based interconnected devices. Math Probl Eng 2021, Article ID 9916440, 21p. https://doi.org/10.1155/2021/9916440 12. Yu L, Wang N, Meng X (2005) Real-time Forest fire detection with wireless sensor networks. In: Proceedings. 2005 international conference on wireless communications, networking and mobile computing, pp 1214–1217. https://doi.org/10.1109/WCNM.2005.1544272 13. Dubey V, Kumar P, Chauhan N (2019) Forest fire detection system using IoT and artificial neural network. In: Bhattacharyya S, Hassanien A, Gupta D, Khanna A, Pan I (eds) International conference on innovative computing and communications. Lecture notes in networks and systems, vol 55. Springer, Singapore. https://doi.org/10.1007/978-981-13-2324-9_33 14. Herutomo A, Abdurohman M, Suwastika NA, Prabowo S, Wijiutomo CW (2015) Forest fire detection system reliability test using wireless sensor network and OpenMTC communication platform. In: 3rd International conference on information and communication technology (ICoICT), pp 87–91 15. Shinde R, Pardeshi R, Vishwakarma A, Barhate N (2017) Need for wireless fire detection systems using IOT. Semantic scholar articles 16. Khalaf OI, Abdulsaheb GM (2019) IOT fire detection system using sensor with Arduino_http, AUS 26(1):74–78 17. Cui F (2020) Deployment and integration of smart sensors with IoT devices detecting fire disasters in huge forest environment. Comput Commun 150:818–827. https://doi.org/10.1016/ j.comcom.2019.11.051 18. Pradhan B, Dini Hairi Bin Suliman M, Arshad Bin Awang M (2007) Forest fire susceptibility and risk mapping using remote sensing and geographical information systems (GIS). Disaster Prevent Manage 16(3):344–352. https://doi.org/10.1108/09653560710758297 19. Smith W, Dressler WH (2020) Forged in flames: indigeneity, forest fire and geographies of blame in the Philippines. Postcolonial Stud 23(4):527–545. https://doi.org/10.1080/13688790. 2020.1745620 20. Forest policy and administration. https://thailand.opendevelopmentmekong.net/topics/for estry-policy-and-administration/ 21. Nalampoon A (2003) National Forest Policy Overview Thailand. FAO 2003. Bangkok, pp 295–311 22. 2020 Northern Thailand Forest Fires Snapshot. https://www.wwf.or.th/en/?362337/2020-Nor thern-Thailand-forest-fires-snapshot 23. Ganz D (2002) Framing fires: a country-by-country analysis of forest and land fires in the ASEAN nations. The Worldwide Fund for Nature (WWF). Project Fire Fight Southeast Asia. Indonesia 24. Phairuang W, Hata M, Furuuchi M (2017) Influence of agricultural activities, forest fires and agro-industries on air quality in Thailand. J Environ Sci.https://doi.org/10.1016/j.jes.2016. 02.007 25. Rakyutidharm (2002) Forest fire in the context of territorial rights in northern Thailand. Moore P, Ganz D, Tan LC, Enters T, Durst PB (eds) Communities in flames: proceedings of an international conference on community involvement in fire management. Food and Agriculture Organization of the United Nations, Regional Office for Asia and the Pacific, Bangkok, Thailand 26. Kanakaraja, Vaishnavi, Pradeep, Khan (2019). An Iot based forest fire detection using Raspberry Pi. 
Int J Recent Technol Eng (IJRTE) 8(4). https://doi.org/10.35940/ijrte.D8862. 118419 27. Naik P, Dhopte P, Wanode R, Kantode R, Nagre S (2018) Gas sensor using Arduino UNO & MQ2 sensor. Int J Adv Res Comp Commun Eng (IJARCCE) 7(3). https://doi.org/10.17148/ IJARCCE.2018.73104548. ISO 3297:2007


28. Mahajan R, Yadav A, Baghel DP, Chauhan N, Sharma K, Sharma A (2019) Forest fire detection system using GSM module. Int Res J Eng Technol (IRJET) 6(8). e-ISSN: 2395-0056 29. Sarobin M, Singh S, Khera A, Suri L, Gupta C, Sharma A (2018) Forest fire detection using IoT enabled drone. Int J Pure Appl Math 119(12):2469–2479. ISSN: 1314-3395

On Circulant Completion of Graphs Toby B Antony and Sudev Naduvath

Abstract A graph G with vertex set $\{v_0, v_1, v_2, \ldots, v_{n-1}\}$ corresponding to the elements of $\mathbb{Z}_n$, the group of integers under addition modulo n, is said to be a circulant graph if the edge set of G consists of all edges of the form $v_i v_j$ where $(i - j) \pmod n \in S \subseteq \{1, 2, \ldots, n-1\}$, a set which is closed under inverses. The set S is known as the connection set. In this paper, we present some techniques and characterisations which enable us to obtain a circulant completion graph of a given graph and thereby evaluate the circulant completion number. The obtained results provide the basic eligibility conditions for a graph to have a particular circulant completion graph.

Keywords $\mathcal{P}$-completion · Circulant completion · Circulant completion graph · Circulant completion number · Circulant span · Circulant labelling

MSC 2020 05C62 · 05C75

1 Introduction

For terms and definitions in graph theory, we refer to [1, 2]. For further topics on circulant graphs, see [3–6]. If not mentioned otherwise, every graph we consider in this paper is simple, finite, connected and undirected. A graph G with vertex set $\{v_0, v_1, v_2, \ldots, v_{n-1}\}$ corresponding to the elements of $\mathbb{Z}_n$, the group of integers under addition modulo n, is said to be a circulant graph if the edge set of G consists of all edges of the form $v_i v_j$ where $(i - j) \pmod n \in S \subseteq \{1, 2, \ldots, n-1\}$, a set which is closed under inverses. The set S is known as the connection set. Since the graph depends on the order and the set S, it is denoted by G(n, S) in [4].

T. B Antony (B) · S. Naduvath Department of Mathematics, CHRIST (Deemed to be University), Bangalore, India e-mail: [email protected] S. Naduvath e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_15


The idea of the chromatic completion graph and the parameter chromatic completion number of a graph were put forward in [7]. The graph obtained by the addition of the maximum possible number of good edges to a coloured graph G is referred to as the chromatic completion graph of G. The number of newly added good edges needed to build the chromatic completion graph is the chromatic completion number of the graph G. Motivated by the above study, in this paper, we introduce the concept of the circulant completion of graphs and their circulant completion number. We have obtained certain characterisations for circulant completion graphs and also the circulant completion number of certain graph classes. Circulant graphs are well known for their applications in computer networks. The adjacency matrix of a circulant graph belongs to the class of Toeplitz matrices, which have a wide range of applications in engineering (see [8]). With the help of circulant topology, we can design efficient routing algorithms that help in reducing the cost of hardware in developing networks on chip (NoCs) (see [9]). Circulant completion enables us to extend an ordinary network to a circulant network and gain all the benefits of circulant networks.
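As a small illustration (not from the paper), the adjacency matrix of G(n, S) can be built directly from the connection set; since S is closed under inverses, the result is a symmetric circulant, hence Toeplitz, matrix. The function name is chosen here for illustration.

```python
import numpy as np

def circulant_adjacency(n, S):
    """Adjacency matrix of the circulant graph G(n, S).

    S is a subset of {1, ..., n-1} closed under inverses mod n,
    so the resulting matrix is a symmetric circulant matrix.
    """
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for s in S:
            A[i][(i + s) % n] = 1
    return A

print(circulant_adjacency(5, {1, 4}))   # the cycle C_5: S = {1, n-1}
```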

2 Circulant Completion of Graphs

By a $\mathcal{P}$-completion of a graph G, we mean the process of extending G by adding a minimum number of new edges to G so that the resultant graph satisfies a specific structural property $\mathcal{P}$. Using the above-mentioned idea of the completion of graphs, we define the circulant completion of graphs as follows: the circulant completion of a graph G is the process of extending G by adding a minimum number of new edges to G so that the resultant graph is a circulant graph. A circulant graph thus obtained is called a circulant completion graph of G and is denoted by $G^{\zeta}$. The circulant completion number of a graph G, denoted by $\zeta(G)$, is the number of edges that can be added to G to get a circulant completion graph of G. In other words, $\zeta(G) = |E(G^{\zeta} - G)| = |E(G^{\zeta})| - |E(G)|$. It is to be noted that the circulant completion graph need not be unique; but the size of any two circulant completion graphs of a given graph is the same. For example, in Fig. 1, the second and third graphs are two non-isomorphic circulant completion graphs of the given graph.

Let $\mathbb{Z}_n$ be the group of integers under addition modulo n and let $V(G) = \{v_0, v_1, v_2, \ldots, v_{n-1}\}$. The circulant span between two vertices $v_i$ and $v_j$ ($i \neq j$), denoted by $cs(v_i, v_j)$, is defined as

$$cs(v_i, v_j) = \begin{cases} |i - j|, & \text{if } |i - j| \le \lfloor \frac{n}{2} \rfloor;\\ (|i - j|)^{-1}, & \text{if } |i - j| > \lfloor \frac{n}{2} \rfloor; \end{cases}$$


Fig. 1 Two non-isomorphic circulant completion graphs of the given graph

where $x^{-1}$ is the inverse of $x \in \mathbb{Z}_n$. It is clear from the definition that $1 \le cs(v_i, v_j) \le \lfloor \frac{n}{2} \rfloor$. The circulant span of a graph G, denoted by $C_s(G)$, is the set of all distinct circulant spans over all pairs of adjacent vertices of G. The number of edges in a circulant completion graph corresponding to a circulant span $cs(v_i, v_j)$ is denoted by $\varepsilon_{cs(v_i, v_j)}$.

Theorem 2.1 Let G be a graph of order n. Then,

$$\varepsilon_{cs(v_i, v_j)} = \begin{cases} \frac{n}{2}, & \text{if } cs = \frac{n}{2};\\ n, & \text{otherwise.} \end{cases}$$

Proof Let $\{v_0, v_1, v_2, \ldots, v_{n-1}\}$ be the set of vertices of G. If a circulant span, say i, satisfies the condition $\gcd(n, i) = 1$, then there is a cycle of order n of the form $\{v_0, v_i\}, \{v_i, v_{2i}\}, \ldots, \{v_{-i \pmod n}, v_0\}$ in $G^{\zeta}$. If $cs = i$ satisfies $\gcd(n, i) = i$, then there are i cycles of order $\frac{n}{i}$ and hence altogether n edges. Corresponding to $cs = \frac{n}{2}$, the edges are of the form $\{v_j, v_{j + \frac{n}{2} \pmod n}\}$ and cannot form any cycles; thus, there are $\frac{n}{2}$ such edges in $G^{\zeta}$. □

Definition 1 A circulant labelling of a graph G of order n is a labelling of the vertices of G such that

(i) $|C_s|$ is minimum;
(ii) $\frac{n}{2} \in C_s$ for even n.
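The circulant span and the span set of a labelled graph can be computed directly from the definition; a minimal sketch, with illustrative names:

```python
def circulant_span(i, j, n):
    """cs(v_i, v_j) for vertices labelled by Z_n: the smaller of
    |i - j| and its additive inverse n - |i - j|, so 1 <= cs <= n // 2."""
    d = abs(i - j) % n
    return min(d, n - d)

def span_set(edges, n):
    """C_s(G): distinct circulant spans over the adjacent pairs of G."""
    return {circulant_span(i, j, n) for i, j in edges}

# The path P_5 labelled consecutively has the single span 1, so by
# Theorem 2.2 its circulant completion is the cycle C_5 (zeta = 1).
print(span_set([(0, 1), (1, 2), (2, 3), (3, 4)], 5))   # {1}
```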

Theorem 2.2 The circulant completion graph of a graph G of order n is the cycle $C_n$ if and only if there is only one circulant span, of value k where $\gcd(n, k) = 1$, in a circulant labelling of G.

Proof Let G be a graph of order n with $C_s(G) = \{k\}$, where $\gcd(n, k) = 1$. Since $\gcd(n, k) = 1$, by Theorem 2.1, there is one cycle of order n corresponding to $cs = k$ in $G^{\zeta}$. There are no more edges because there is no other circulant span. Hence, $G^{\zeta}$ is $C_n$. Conversely, assume that $G^{\zeta}$ is $C_n$. Then it is obvious that there is only one circulant span, and it must be relatively prime to n by Theorem 2.1. □

Corollary 2.1 The circulant completion number of a path graph is 1.

Proof A path graph can be labelled in such a way that $C_s(G) = \{1\}$. Since $\gcd(n, 1) = 1$, by Theorem 2.2, we have $P_n^{\zeta} = C_n$. □


Corollary 2.2 The circulant completion graph of a graph G of order n with $\Delta(G) > 2$ is not the cycle $C_n$.

Proof Let $V(G) = \{v_0, v_1, v_2, \ldots, v_{n-1}\}$ and let $v_i$ be a vertex of degree at least 3. Then $v_i$ generates at least two circulant spans, because a circulant span, say k, can contribute only two edges, $\{v_i, v_{i+k \pmod n}\}$ and $\{v_i, v_{i-k \pmod n}\}$, incident on $v_i$ in $G^{\zeta}$. Therefore, the remaining edge incident on $v_i$ generates another circulant span, and hence $C_s(G)$ is not a singleton set. Thus, by Theorem 2.2, $G^{\zeta}$ cannot be the cycle $C_n$. □

Theorem 2.3 The circulant completion graph of a graph G is $C_n^k$ if there are circulant spans from 1 to k in a circulant labelling of G.

Proof Since $\gcd(n, 1) = 1$, by Theorem 2.1, $cs = 1$ generates the cycle $C_n$ in $G^{\zeta}$. Adding the edges corresponding to $cs = 2$ to the previously obtained $C_n$, we get $C_n^2$. If k = 2, we stop here; otherwise, we proceed in the same manner up to stage k to get $G^{\zeta}$ as $C_n^k$. □

Theorem 2.4 Let G be a graph of even order n. The circulant completion graph of G is the Möbius ladder $M_{\frac{n}{2}}$ if there are only two circulant spans, of values $\frac{n}{2}$ and k where $\gcd(n, k) = 1$, in a circulant labelling of G.

Proof By Theorem 2.1, $cs = k$ generates the cycle $C_n$ in $G^{\zeta}$. Also by Theorem 2.1, $cs = \frac{n}{2}$ generates $\frac{n}{2}$ edges in $G^{\zeta}$, which are of the form $\{v_j, v_{j + \frac{n}{2} \pmod n}\}$ where $j \in \mathbb{Z}_n$. The number k is odd, being relatively prime to the even number n, and hence $k = 2l + 1$ for some $l \in \mathbb{N}$. We have $\frac{n}{2}k \pmod n = \frac{n(2l+1)}{2} \pmod n = \frac{n}{2}$. Therefore, the edges generated by $cs = \frac{n}{2}$ are diagonals of the $C_n$ generated by $cs = k$. Thus, $G^{\zeta}$ is the Möbius ladder $M_{\frac{n}{2}}$. □

Definition 2 The k-sieve graph (see [10]) of a cycle $C_n$ is the graph obtained by joining the non-adjacent vertices of $C_n$ which are at a distance k. It is denoted by $C_n^{(k)}$.

Theorem 2.5 Let G be a graph of order n. The circulant completion graph of G is the k-sieve graph of the cycle $C_n$, i.e. $C_n^{(k)}$, if there are only two circulant spans, of values 1 and k, in a circulant labelling of G.

Proof As in the previous proofs, $cs = 1$ generates the cycle $C_n$ in $G^{\zeta}$, whereas the non-adjacent vertices of $C_n$ at a distance k are connected by $cs = k$. Thus, we obtain $G^{\zeta}$ as $C_n^{(k)}$. □

Theorem 2.6 The circulant completion graph of a graph G is the complete graph $K_n$ if there are circulant spans from 1 to $\lfloor \frac{n}{2} \rfloor$ in a circulant labelling of G.

Proof Since $C_n^{\lfloor \frac{n}{2} \rfloor}$ is the complete graph $K_n$, the proof is direct by Theorem 2.3. □

Theorem 2.7 The circulant completion graph of a graph G of order n with $\Delta(G) = n - 1$ is the complete graph $K_n$.

Table 1 Circulant completion number for graphs with universal vertex

G | ζ(G)
Star graph, K_{1,n} | n(n-1)/2
Wheel graph, W_{1,n} | n(n-3)/2
Double-wheel graph, DW_{1,n} | n(2n-3)
Flower graph, F_{1,n} | n(2n-3)
Blossom graph, Bl_{1,n} | n(2n-5)
Djembe graph, Dj_{1,n} | n(2n-4)
Proof Let v be a vertex of G with degree n - 1. There are at least ⌈(n-1)/2⌉ distinct circulant spans obtained from the edges incident on v. But, ⌈(n-1)/2⌉ = ⌊n/2⌋. Hence, by Theorem 2.6, we obtain G^ζ as K_n.

In view of the above result, the circulant completion number, ζ(G), of some graph classes is given in Table 1. The circulant completion graph, G^ζ, of the above listed graphs is the complete graph of order n + 1.

Theorem 2.8 The circulant completion number for a helm graph H_{1,n} for n > 3 is (2n + 1)⌊n/2⌋ - n + 1.

Proof Since H_{1,n} is a graph of odd order, by Theorem 2.1, there is no circulant labelling as mentioned in the second condition of Definition 1. Therefore, it is enough to minimise the number of distinct circulant spans in order to get a corresponding circulant completion graph. Figures 2 and 3 illustrate that at least ⌊n/2⌋ distinct circulant spans, ranging from 1 to ⌊n/2⌋, are obtained from the edges incident on the vertex v_0. We have two edges that connect vertices of the form v_{-i (mod 2n+1)} and v_{j (mod 2n+1)}, where 1 ≤ i, j ≤ ⌊n/2⌋, to complete the rim. These are the edges {v_{⌊n/2⌋}, v_{2n}} and {v_1, v_{n+⌊n/2⌋+1}}, and at least one among them will generate a circulant span ⌊n/2⌋ + 1. The pendant edges are of the forms either {v_i, v_{i+⌊n/2⌋}} or {v_i, v_{i+⌈n/2⌉}}, where i ∈ Z_n and addition is done modulo n. Therefore, the largest circulant span obtained from pendant edges is ⌈n/2⌉. Thus, altogether, circulant spans vary from 1 to ⌊n/2⌋ + 1. As a result, one of the circulant completion graphs for H_{1,n} is C_{2n+1}^{⌊n/2⌋+1}. Hence, the circulant completion number is the difference in sizes of C_{2n+1}^{⌊n/2⌋+1} and H_{1,n}, and it is given by (2n + 1)(⌊n/2⌋ + 1) - 3n = (2n + 1)⌊n/2⌋ - n + 1.

Fig. 2 Circulant labelling of H_{1,4} and the corresponding H_{1,4}^ζ

Fig. 3 Circulant labelling of H_{1,5} and the corresponding H_{1,5}^ζ

Theorem 2.9 The circulant completion number for a closed helm graph CH_{1,n} for n > 3 is (2n + 1)⌊n/2⌋ - 2n + 1.

Proof Using the same argument as in the previous proof, we cannot have a circulant labelling as mentioned in the second condition of Definition 1. Therefore, we minimise the number of distinct circulant spans to get the corresponding CH_{1,n}^ζ. There are n edges incident on the central vertex. These edges generate at least ⌊n/2⌋ circulant spans. In Figs. 4 and 5, the circulant spans corresponding to this are 1, 2, ..., ⌊n/2⌋. The circulant span of edges in the inner rim is 1, with the exception of two edges that are incident to vertices of the form v_{-i (mod 2n+1)} and v_{j (mod 2n+1)}, where 1 ≤ i, j ≤ ⌊n/2⌋. At least one of these edges generates a circulant span ⌊n/2⌋ + 1. The circulant span of edges joining the inner rim and the outer rim is either ⌊n/2⌋ or ⌈n/2⌉, as in the case of pendant edges of helm graphs. Edges in the outer rim have circulant span 1, with the exception of two edges of the form either {v_i, v_{i+⌊n/2⌋ (mod n)}} or {v_i, v_{i+⌈n/2⌉ (mod n)}}, where i ∈ Z_n. The edges coming as an exception generate a circulant span of value either ⌊n/2⌋ or ⌈n/2⌉. Thus, altogether, Cs(CH_{1,n}) = {1, 2, ..., ⌊n/2⌋ + 1}. Thus, by Theorem 2.3, CH_{1,n}^ζ is obtained as C_{2n+1}^{⌊n/2⌋+1}. We have ζ(G) as the difference in sizes of C_{2n+1}^{⌊n/2⌋+1} and CH_{1,n}. That is, ζ(G) = (2n + 1)(⌊n/2⌋ + 1) - 4n = (2n + 1)⌊n/2⌋ - 2n + 1.

Fig. 4 Circulant labelling of CH_{1,4} and the corresponding CH_{1,4}^ζ

Fig. 5 Circulant labelling of CH_{1,5} and the corresponding CH_{1,5}^ζ

3 Conclusion

Motivated by the concept of chromatic completion of graphs, we developed a general notion of completion of graphs and introduced circulant completion as one of its classifications. We provided characterisations for certain circulant completion graphs and determined the circulant completion number of certain graph classes. Our next objective is to find a general formula for the circulant completion number. Determining why different graphs share the same circulant completion number would also be worthwhile. Other branches of graph completion can be obtained and studied in the future.


Acknowledgements The authors of the article would like to acknowledge the suggestions, comments, and the support of their co-researchers Ms. Sabitha Jose, Ms. Phebe Sarah George, and Ms. Sneha K R Nair during the preparation of this article.

References

1. Harary F (2001) Graph theory. Narosa Publishing House, New Delhi
2. West DB (2001) Introduction to graph theory, 2nd edn. Prentice Hall of India, New Delhi
3. Elspas B, Turner J (1970) Graphs with circulant adjacency matrices. J Combinatorial Theory 9(3):297–307
4. Alspach B, Parsons TD (1979) Isomorphism of circulant graphs and digraphs. Discrete Math 25(2):97–108
5. Alspach B, Morris J, Vilfred V (1999) Self-complementary circulant graphs. Ars Combin 53:187–192
6. Monakhova EA (2012) A survey on undirected circulant graphs. Discrete Math Alg Appl 4(1):1250002
7. Mphako-Banda E, Kok J (2020) Chromatic completion number. J Math Comput Sci 10(6):2971–2983
8. Nguyen VK (2010) Family of circulant graphs and its expander properties. PhD Thesis, San Jose State University
9. Monakhova EA, Romanov AY, Lezhnev EV (2020) Shortest path search algorithm in optimal two-dimensional circulant networks: implementation for networks-on-chip. IEEE Access 8:215010–215019
10. Naduvath S, Augustine G (2015) A note on the sparing number of the sieve graphs of certain graphs. Appl Math E-Notes 15:29–37

Analysis of Challenges Experienced by Students with Online Classes During the COVID-19 Pandemic D. Elsheba, Nirmalya Sarkar, S. Sandeep Jabez, Arun Antony Chully, and Samiksha Shukla

Abstract In the current context of the COVID-19 pandemic, due to restrictions in mobility and the closure of schools, people had to shift to work from home. India has the world’s second-largest pool of internet users, yet half its population lacks internet access or knowledge to use digital services. The shift to online mediums for education has exposed the stark digital divide in the education system. The digitization of education proved to be a significant challenge for students who lacked the devices, internet facility, and infrastructure to support the online mode of education or lacked the training to use these devices. These challenges raise concerns about the effectiveness of the future of education, as teachers and students find it challenging to communicate, connect, and assess meaningful learning. This study was conducted at one of the universities in India using a purposive sampling method to understand the challenges faced by the students during the online study and their satisfaction level. This paper aims to draw insight from the survey into the concerns raised by students from different backgrounds while learning from their homes and the decline in the effectiveness of education. Keywords Online education · Digital divide · COVID-19 · Education sector · Work from home · Teaching · Quality education · Student experience

1 Introduction

The world made an abrupt shift to online mode to reduce the mobility of people in order to tackle the spread of the pandemic. Lockdown and social distancing measures imposed due to the COVID-19 pandemic have closed educational institutions all across the country. With the closure of educational institutions across the country, the education of 300 million students in India was disrupted as they had to move on to a digital platform for learning. However, the education system in India was not equipped to deal with the abrupt shift to online education.

D. Elsheba (B) · N. Sarkar · S. S. Jabez · A. A. Chully · S. Shukla
Christ University, Bangalore, India
e-mail: [email protected]


India, as a developing country, still lacks access to digital technology and Internet service for all its citizens. People must adapt with little or no alternative available, as they are forced to adopt a system they are not prepared for. Students who were not able to adapt due to the inaccessibility of technology or lack of digital literacy faced challenges in their learning. Teachers and students had to learn new ways to approach education as they adapted to digital learning. Classes were delivered through various online platforms like Zoom, Cisco Webex, Google Meet, etc. Pedagogical changes were implemented in the teaching material, such as audio and video content. The transition from the traditional classroom setting with face-to-face learning to online learning was an entirely different experience for both the learners and the teachers [1]. While education is still accessible, these efforts are not likely to provide the same quality of education and student satisfaction as lectures delivered in classrooms. This work was initiated to understand the challenges faced by Indian teachers and students with regard to the quality of education and student satisfaction in the context of the COVID-19 pandemic. There were students and teachers from middle-class backgrounds who were also not able to arrange the infrastructure for online classes. During the pandemic, it was observed that many school and college students dropped out due to the unavailability of resources. The demand for Internet connectivity increased in the last two years. The pandemic had a major impact on the learning of students, and as the young are the backbone of the country, it may have a long-term effect on the country. Considering this, the study is relevant in the current scenario. This work presents insight into the impact of the pandemic on the quality of education due to technological barriers. For this research work, a survey was conducted among students of the age group 18–29. The study has given various helpful results: people faced challenges with Internet connections, such as the lack of an Internet connection at home or slow network speed due to the entire workforce working from home. Other challenges were the scarcity of appropriate places from which to connect to class, lack of knowledge about the usage of electronic devices, and the concentration of children during class. The rest of the chapter is organized as follows: Sect. 2 presents the discussion of the literature on the offline and online modes of education and the impact of the pandemic on the quality of education. Section 3 presents the problem definition, the research challenges, and the dataset description. Section 4 presents the methodology used for data collection and data analysis. The results and discussion are presented in Sect. 5. Section 6 presents the conclusions and future directions.

2 Literature Review Online education in India has a long history as learning content has been broadcasted through All India Radio and the Doordarshan as recorded material for both higher


education and school-going children as early as the 1960s [2]. Interestingly, after 60 years, they had to broadcast virtual classes through regional channels during the pandemic, as physical classes had been closed [3]. Though huge investments have been made by the concerned ministries on educational content broadcasting, conclusive evidence on the positive impact of those initiatives by AIR and Doordarshan is limited [4]. The last few decades have witnessed the gradual evolution of technology from one-way telecast to teleconferencing, a dedicated national education channel, the launch of the Edusat satellite, and now two-way video communication and a plethora of technological alternatives. A significant milestone was the implementation of the right to education in 2010 with the aim of universalization of primary education. It was followed by a greater focus on technological upgradation in private and public schools, with the distribution of laptops and tablets and the introduction of whiteboards and smartboards [5]. The last decade has also seen an array of edtech start-ups gaining a stronghold in online education in India. Between January 2014 and September 2019, more than 4450 edtech start-ups were launched in India [6]. At the same time, these technological innovations in education cater to less than 30% of the population, constituting the middle or upper classes of Indian society. The adoption of digital technology has risen significantly in the past few years, but COVID-19 accelerated this growth to exponential levels as educational institutions across the globe were forced to shut down physical classes. There has been a huge surge in the usage of digital tools during the pandemic, whether language apps, virtual tutoring, video conferencing tools, or online learning software [7]. At the same time, many educators struggled to move away from traditional teacher-centered education, as most of them had limited exposure to integrating technology into their pedagogy. When COVID-19 struck, most teachers had to change their methods overnight in relation to teaching, assessment, research, student support, and other administrative chores [8]. Although institutions were left with no choice but to adopt online education amidst the pandemic, evidence of the challenges of effective implementation continues to build up [9]. For developing countries, with limited infrastructure for the majority of institutions, the transition to online education was a huge challenge [10]. An extensive survey conducted by QS I-Gauge revealed that the technological infrastructure in India has not developed to the level required for providing efficient and effective online education to students across the country [11]. Technical challenges like inadequate power supply and insufficient data connectivity are very much the norm in most parts of the Indian subcontinent. An added challenge is the stark urban–rural contrast and digital divide in terms of access and affordability of the necessary resources that facilitate a smooth transition to online education [5]. Studies have reported an array of different concerns during the migration to a new learning space relating to policy, pedagogy, logistics, socioeconomic factors, technology, and psychosocial factors [12]. Students had to encounter various difficulties, such as concerns about new learning and evaluation methods, overwhelming task load, technical difficulties, and confinement [13].


3 Problem Definition, Research Challenges, and Dataset Description

3.1 Problem Definition

The problem is to analyze and measure the views of students amidst the pandemic. This paper aims at a detailed understanding of the dataset. The exploratory data analysis has been conducted using Python, which supports visualization and drawing conclusions.

3.2 Challenges

Since the dataset required for the study was not readily available, a survey was conducted to collect the data. This data was used to measure the students' views on satisfaction during online classes and the challenges they faced in meeting the immediate need of switching to online classes due to the pandemic.

3.3 Dataset Description

The primary dataset was created with the required features for the analysis. The major features considered for the study are 'age', 'Gender', 'Education_Level', 'screen_Time', 'difficulties', 'net', 'device_comfort', 'Preference', 'study_matt', 'new_device', 'o_attendance', 'o_satisfaction'.

4 Methodology

A survey of a population-based sample of college students was conducted to measure their views on online education and collect relevant data. The aim was to survey students from universities whose education was impacted by online studies and to understand the factors contributing to the hindrance of learning online. Participants were asked to fill in the survey anonymously. The major challenge of remote learning is the disparity in access to electricity and Internet connections, the availability and affordability of devices, etc. The pandemic has kept many bright minds away from school due to various factors like the availability of resources to attend online classes, financial inability to procure devices, house infrastructure to get a peaceful environment, etc. In order to address the issue, domain knowledge was first gained by going through previous literature in the area of online education, the impact of the pandemic on education,


Fig. 1 Methodology

and understanding the kind of work done. The data was collected through a survey conducted among college-going students, the exploratory data analysis was carried out, and the inferences are represented as graphs. The Python programming language is used to code in a Jupyter notebook. The following steps, shown in Fig. 1, were followed in the methodology:

Step 1: Data Collection: Since the data required to conduct the research was unavailable, a survey was conducted through which the primary data was collected. This survey was filled in directly by several students of the institution.

Step 2: Data Cleaning: The data collected from the survey was checked for missing values and for bias, if any. In the first round, the data collected was male-dominated, so a second round of the survey was conducted targeting female participants so that the bias could be removed. Then, using Python, missing and null values were addressed and replaced with mean values for research purposes.

Step 3: Exploratory Data Analysis (EDA): This process was broken down into three parts: univariate, bivariate, and multivariate analysis. Each of them offered unique insights into the data at hand. The attributes were compared against each other to see if there were any relations.

Step 4: Data Visualization: Using the visualization tools offered by the Python programming language and its libraries, graphs were plotted based on the EDA.

Step 5: Building Inferences: Based on the graphs, the determination of the subject, the establishment of what each category and subcategory represents, the understanding of the relationship between axes and the diagonal line, the relative percentages that each bar represents, and other inferences were made.
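As an illustration of Steps 2–4, the following is a minimal pandas sketch, not taken from the paper: the file name survey.csv and the exact handling are assumptions, the column names are those listed in Sect. 3.3, and mean imputation is applied only to the numeric columns, as described in Step 2.

import pandas as pd
import matplotlib.pyplot as plt

# Step 2: load the survey data and impute missing numeric values with the mean
df = pd.read_csv("survey.csv")  # hypothetical export of the survey responses
numeric_cols = df.select_dtypes("number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())

# Step 3: simple univariate and bivariate views of the data
print(df["Preference"].value_counts())                 # online vs offline preference
print(df.groupby("Gender")["Preference"].value_counts())

# Step 4: plot preference counts by gender (cf. Fig. 2)
df.groupby(["Gender", "Preference"]).size().unstack().plot(kind="bar")
plt.ylabel("Number of students")
plt.show()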

5 Results and Discussion

The primary data collected from the survey and the secondary data received from the national survey deliver some insight into the impact of the pandemic on the quality of education. Factors that impacted the quality of education included the age of the child, the availability of an Internet connection and a device to attend school, and comfort with device usage. This section demonstrates the results of the study to give a better understanding of the challenges faced by the students. The choice of offline classes rather than online classes is gender-neutral. As shown in Fig. 2, the number of females willing to attend in-person classes is greater than the number opting for online classes, and the trend is similar for males. This choice is largely based on the need for a better understanding of the subject and on getting exposure to the resources available for exploring current trends in the technology area. Figure 3 shows that more than 60% of the respondents considered in the study invested in new technological devices to facilitate online studies. The need for such purchases was highly influenced by the predicted change in learning culture with education moving online due to the pandemic. Figure 4 shows some degree of positive correlation between online class satisfaction and device comfort. In general, the trend has been that, over time, as the comfort level with the device increased, the satisfaction level of the students and teachers

Fig. 2 Students' preference for online versus offline classes

Fig. 3 Percentage of people who purchased new devices for online classes


Fig. 4 Relationship between the device comfort and satisfaction in online classes

Fig. 5 Correlation between various features of the study

also increased. The drop mapped against ratings of 7 and 8 on device comfort signifies that even when device comfort is satisfactory, individual commitment and concentration during the lecture also play a significant role in online class satisfaction. As presented in Fig. 5, there exists a strong positive correlation between device comfort and online class satisfaction. This correlation is well explained by the fact that a better device facilitates faster, smoother multitasking and better visual and audio options for the students. Device comfort and screen time also have a high correlation. The only negative correlation discovered is that between age and class satisfaction.
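A correlation matrix like the one in Fig. 5 can be produced along the following lines; this is a sketch under the same assumptions as before (the survey.csv file and the column names from Sect. 3.3), with seaborn used only for the heatmap rendering.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")  # hypothetical survey data, as above
features = ["age", "screen_Time", "device_comfort", "o_attendance", "o_satisfaction"]

# Pairwise Pearson correlations between the numeric survey features
corr = df[features].corr()
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between survey features")
plt.show()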

6 Conclusion and Future Work

The COVID-19 pandemic has impacted people's lives in many ways: economically, physically, educationally, and career-wise. The long duration of study from home has kept many children away from their regular interactive learning. Digitization came as a blessing in many cases, such as online payment, food delivery, and online education, but a major portion of underprivileged people suffered greatly during the pandemic. It has been observed that daily wage workers were the ones left starving for food due to the sudden lockdown in the country. The government came up with various policies for them, but children's education was largely left out. This is the condition not only for the poor but also for the middle class and others in society. Quiet space, Internet connectivity, and the availability of electronic devices were some of the main concerns reported during the analysis. The pandemic calls for policies to address the inequality in the education sector. The scope of online education is huge, provided the required infrastructure is available. The results of the study show that most students prefer offline learning over online, and that device comfort plays an essential role in online class satisfaction. In the future, the government should come up with policies that enable learners from diverse backgrounds to adapt to hybrid modes of learning. It should emphasize policies that can bridge the digital divide. This helps not only individuals but also the country as a whole. In the case of any such pandemic, when an expert resource is required, it would be easily available in the virtual environment to learn, grow, and achieve the sustainable development goals.

References

1. Pokhrel S, Chhetri R (2021) A literature review on impact of COVID-19 pandemic on teaching and learning. Higher Educ Future 8(1):133–141
2. The history and usefulness of online teaching in India. Times of India Blog, 27-May-2020. [Online]. Available: https://timesofindia.indiatimes.com/readersblog/mridul-mazumdar/the-history-and-usefulness-of-online-teaching-in-india-20481/
3. Education in time of covid-19: DD, AIR will broadcast virtual classes through regional channels. The Economic Times. [Online]. Available: https://economictimes.indiatimes.com/magazines/panache/education-in-time-of-covid-19-dd-air-will-broadcast-virtual-classes-through-regional-channels/articleshow/75200617.cms
4. Dua MR, Menon KSR, Nagpal BB (2017) Unit-5 educational media. eGyanKosh, 07-Apr-2017. [Online]. Available: https://egyankosh.ac.in/handle/123456789/7305
5. A historical review of educational technology in schools in India: past, present and the future. researchgate.net. [Online]. Available: https://www.researchgate.net/publication/336591365_A_Historical_Review_of_Educational_Technology_in_Schools_in_India_Past_Present_and_the_Future
6. Shanthi S. The past, present and future of Edtech startups. tscfm.org. [Online]. Available: https://tscfm.org/media/the-past-present-and-future-of-edtech-startups/
7. Li C. The COVID-19 pandemic has changed education forever. This is how. World Economic Forum. [Online]. Available: https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/
8. Sangster A, Stoner G, Flood B (2020) Insights into accounting education in a COVID-19 world. Acc Educ 29(5):431–562. https://doi.org/10.1080/09639284.2020.1808487
9. Barrot JS, Llenares II, del Rosario LS (2021) Students' online learning challenges during the pandemic and how they cope with them: the case of the Philippines. Educ Inf Technol 26(6):7321–7338
10. Simbulan N (2020) The Philippines—COVID-19 and its impact on higher education in the Philippines. The HEAD Foundation, 04-Jun-2020. [Online]. Available: https://headfoundation.org/2020/06/04/covid-19-and-its-impact-on-higher-education-in-the-philippines/


11. QS-I Gauge Indian college and university ratings distributed to JSS institutions. JSS Mahavidyapeetha, 15-Oct-2020. [Online]. Available: https://jssonline.org/qs-i-gauge-indian-college-and-university-ratings-distributed-to-jss-institutions/
12. Donitsa-Schmidt S, Ramot R (2020) Opportunities and challenges: teacher education in Israel in the Covid-19 pandemic. J Educ Teach 46(4):586–595. https://doi.org/10.1080/02607476.2020.1799708
13. Fawaz M, Al Nakhal M, Itani M (2021) COVID-19 quarantine stressors and management among Lebanese students: a qualitative study. Curr Psychol, Jan 2021. https://doi.org/10.1007/s12144-020-01307-w

Conceptualization, Modeling, Visualization, and Evaluation of Specialized Domain Ontologies for Nano-energy as a Domain Palvannan and Gerard Deepak

Abstract The growing demand for data to be defined in a way that is understandable by both computers and humans has encouraged researchers to look at ontologies. Nano-energy is an area for which no ontologies currently exist; as a result, an ontology in this domain was thought to be essential. This paper provides a nano-energy ontology model from the energy-domain viewpoint, focusing on sustainable energy. It is the outcome of a detailed domain analysis. WebVOWL is used to visualize the ontology, and WebProtege is used to convert it to OWL format. Finally, a semiotic approach was used to evaluate the domain ontology in terms of both quantity and quality, and a best-in-class reuse ratio of 0.012 is obtained.

Keywords Domain ontology · Energy perspective · Nano-energy ontology · Ontology modeling

1 Introduction

The volume of data available on the Internet is massive, and humans without machines are unable to handle it. In today's world, people have access to more data in a single day than they did in previous decades. The underlying issue with data access is that raw data on the Internet can take numerous different forms, such as media, files, and links. As a result, since the data are in various formats, it is difficult to interpret the relationships present. The semantic Web's intention is to make all information on the Internet computer-processable and useful. Ontology's role is to model data by representing it as grouped notions in a domain and capturing the relationships among these notions.

Palvannan
Department of Metallurgical and Materials Engineering, National Institute of Technology, Tiruchirappalli, India

G. Deepak (B)
Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
e-mail: [email protected]


Within the next five decades, energy will be in tremendous demand across the globe. Nanotechnology provides a possible way to accelerate the production of energy. The materials used in nanotechnology measure less than 100 nm in length. The concept of nano-energy refers to the use of nanomaterials and nanodevices for research and engineering in all forms of the energy sector. This work contributes to nanotechnology in the energy sector and provides an outline of nano-energy.

Motivation: The knowledge of the domain is conceptualized as concepts and individuals, which are further structured as an ontology model. Concepts, sub-concepts, and individuals have unique relationships. Ontologies have already been implemented in energy informatics to a certain extent. As energy is in pressing demand, nanotechnology would help alleviate the problem by increasing production efficiency and reducing energy waste. An ontology has been developed for the nano-energy domain to enhance the structural data and to improve the richness of energy informatics.

Contribution: This paper is an attempt to model an ontology in the nano-energy domain. Many tools were used to explore the nano-energy domain, and essential relationships and principles were identified. For this domain, knowledge modeling is required, as it acts as a guideline for the development and understanding of the subject. A total of 100 concepts are considered in the proposed approach. The class hierarchy was created with WebProtege. The modeled ontology is then analyzed using a semiotic approach, in terms of both quantity and quality, and a reuse ratio of 0.012 is obtained.

Organization: The article is structured as follows: In Sect. 2, the relevant work is illustrated; ontology modeling and knowledge representation for nano-energy are presented in Sect. 3; ontology visualization is seen in Sect. 4; Sect. 5 depicts the ontology evaluation, and Sect. 6 concludes the article.

2 Related Work

Nasution [1] has discussed the idea of ontology as it already exists and takes a mathematically based approach to produce a domain model that facilitates its existence. Zi and Wang [2] have written about nanogenerators, which harvest electrical energy from minor physical changes caused by mechanical or thermal energy. These nanogenerators contribute to the advancement of smart technology in a variety of fields. Serrano et al. [3] have discussed how nanotechnology has become widely popular with modern innovations, improving human life. As a result, energy-related developments continue to improve, industry becomes more productive, and innovations in sustainable energy harvesting, storage, and usage in the areas of solar, hydrogen generation, batteries, supercapacitors, etc., emerge one after the other. Sequeira [4] has elaborated on the advent of nanotechnology in the 1980s,


which made life easier, and on how nanotechnology is used in green energy production. Wang et al. [5] have described photovoltaic cells, geothermal electricity, piezoelectric devices, and other energy harvesting technologies that are used in a number of applications. Fromer and Diallo [6] have proposed two approaches that can be used to solve issues concerning the sustainable usage and procurement of critical materials in power applications. Cotterell et al. [7] proposed the ontology for energy investigations (OEI), which benefits the domain of energy informatics (EI). Raina et al. [8] note that non-renewable energy sources, such as fossil fuels, play a significant role in global warming and climatic change. Nanotechnology at a scale of less than 100 nm has the ability to lessen the inputs to energy generation, storage, and usage and is regarded as a new prospect for clean and sustainable energy applications, especially in the renewable energy domain. Caetano et al. [9] discussed life cycle analysis as a means to help in the procurement and deployment of more efficient energy systems. Sui et al. [10] have written about nanofluids, which are liquid solutions in which nanomaterials are suspended in a base fluid, altering the properties of the base fluid depending on the application. Mallikarjuna et al. [11] have discussed the conversion of solar energy, where the use of nanofluids and nanocoatings produced superior performance. Zhang et al. [12] have listed nanomaterials that have been widely studied for energy-related applications due to their large surface-to-volume ratios, altered physical properties, etc. Luther et al. [13] have written a comprehensive description of nanotechnology's use in the energy field. In [14–19], semantic applications based on ontologies have been explored.

3 Ontology Modeling and Knowledge Representation for Nano-energy

An ontology on the domain of nano-energy has been constructed. It has 100 concepts and includes relationships such as "divides into," "is the," "based on," "for," and "consists of". The top class, "nano-energy," is classified into five subclasses: energy source, conversion, distribution, storage, and utilization. The energy source class has two subclasses, while conversion, distribution, storage, and utilization have 3, 5, 3, and 3 subclasses, respectively. Figure 1 depicts the proposed nano-energy ontology in detail. The description of some semantic concepts of the proposed nano-energy ontology is given in Table 1. A sketch of how this hierarchy can be declared programmatically is given below.

Nano-energy: Nano-energy is the study and usage of nanomaterials and nanodevices for confined-space, extremely effective power collection, storage, and utilization. This advancement allows for higher-yielding energy harvesting and energy storage devices in tinier and more compact applications. Supercapacitors, nanogenerators, and fuel cells, for example, can be made more miniature but more powerful with this technology.
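The following rdflib sketch is our illustration, not the authors' workflow (they used WebProtégé); it declares the top class and its five subclasses as OWL classes and serializes them to an OWL/XML file. The namespace URI is a hypothetical placeholder.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

NE = Namespace("http://example.org/nano-energy#")  # hypothetical namespace
g = Graph()
g.bind("ne", NE)

# Top class and its five subclasses, as in Fig. 1
g.add((NE.NanoEnergy, RDF.type, OWL.Class))
for sub in ["EnergySource", "EnergyConversion", "EnergyDistribution",
            "EnergyStorage", "EnergyUtilization"]:
    g.add((NE[sub], RDF.type, OWL.Class))
    g.add((NE[sub], RDFS.subClassOf, NE.NanoEnergy))

# Second-level example: the two subclasses of EnergySource
for sub in ["RenewableEnergy", "NonRenewableEnergy"]:
    g.add((NE[sub], RDF.type, OWL.Class))
    g.add((NE[sub], RDFS.subClassOf, NE.EnergySource))

# RDF/XML output, loadable in WebProtege or WebVOWL
g.serialize("nano_energy.owl", format="xml")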


Fig. 1 Proposed ontology for nano-energy

Energy Source: Nanotechnology plays a vital role in the energy source sector, as it provides better performance than before. The anti-reflective panel in a solar plant is glazed with a nanocoated material that controls the reflection of incident solar energy, thus reducing energy wastage. As this example shows, nanotechnology offers unique momentum to the energy sector for subsequent advancements.

Energy Conversion: Nanotechnologies such as nanogenerators have advanced energy conversion processes. Significant external factors such as pressure, temperature, heat, electromagnetism, and friction are used to generate electrical energy in these nanogenerators. With the aid of a nanocatalyst, the hydrogen generation process can also be completed quickly. Fuel cells have also progressed due to the use of nano-optimized electrodes and membranes, which make them more efficient.


Table 1 Semantic description of some important concepts

Keyword | Semantic implication
Renewable energy | Renewable energy is a form of nonconventional energy that is produced by nature and does not deplete over time, e.g., sunlight, water, wind, tides, geothermal heat, and biomass
Non-renewable energy | Non-renewable energy is a form of conventional energy that drains and cannot be replenished in our lifetime, e.g., fossil fuels and nuclear power
Nanogenerator | A nanogenerator is a device that converts mechanical or thermal energy, as produced by minuscule physical changes, into electricity
Fuel cell | A fuel cell is an electrochemical cell that uses chemicals as fuel to generate electricity through a redox reaction inside of it
Hydrogen generation | Hydrogen generation is the process of transforming a variety of chemical fuels, including fossil fuels, biomass, and so on, into hydrogen
Phase change material | A material that delivers/consumes adequate energy to produce valuable heating/cooling at a phase transition state
Fuel tank | A fuel tank is a container to store combustible chemical fuels such as natural gas and petroleum
Battery | A battery is a storage device composed of one or more electrochemical cells that store chemical energy and generate electricity as required
Heat transfer fluid | A heat transfer fluid acts as a medium for transferring and storing thermal energy while minimizing transmission losses
Adsorptive storage | Adsorptive storage is the method of adsorbing chemical fuel on the adsorptive material's surface
Wireless power transmission | Wireless power transmission is the transportation of electrical energy without the use of cables

Energy Distribution: In the area of energy distribution as well, nanomaterials make a greater contribution by consuming less energy and doing so efficiently. Nanomaterials used for distribution include carbon nanotubes (CNTs) as a base component in power transmission and heat transfer, nanosuperconductors as superconducting material, and nanosensors for flexible grid management in the smart grid.

Energy Storage: Whether there is a huge or a small amount of energy, there should be a way to store it for later use. Nanotechnology has proven to be efficient in the conservation of all forms of energy, including electrical, chemical, and thermal energy.

Energy Utilization: Nanotechnology is also beneficial in the energy utilization process, as wastage is minimized and energy usage for specific tasks is reduced. It is used in areas such as air conditioning, lighting, lightweight construction, thermal insulation, and industrial processes, among many others.


Fig. 2 Class hierarchy of ontology

4 Visualization

Visualization of the modeled ontology is the next step in the modeling process. With visualization techniques, the ontology is represented using graphical elements, which cuts down on the amount of time spent looking for information. The ontology was modeled using WebProtégé, which entails the creation of classes and subclasses as well as the declaration of their objects. A ".owl" file was then created with WebProtégé, after which visualization was carried out. The ontological structure was visualized using WebVOWL in this research. Figure 2 depicts the ontology class hierarchy, Fig. 3 depicts the object relationships, and Fig. 4 depicts the entity graph.

5 Ontology Evaluation

Evaluation is the next step in ontological modeling. An ontology can be assessed using a variety of approaches. The semiotic methodology is used in this research, so the ontology can be evaluated in terms of quantity and quality. The numbers of classes, subclasses, attributes, and leaf classes are all noted and evaluated for the quantitative assessment. The reuse ratio, as shown in Eq. (1), is a potential parameter, along with the reference ratio, as shown in Eq. (2), for assessing the ontology.

Reuse Ratio = No. of Reused Elements / Total Elements in Domain    (1)


Fig. 3 Relationship between the objects

Fig. 4 Entity graph


Table 2 Quantitative ontology properties

Class | Sub-class | Attributes | Leaf class | Reuse ratio | Reference ratio
99 | 94 | 6 | 27 | 0.012 | 0.145

Fig. 5 Evaluation results represented qualitatively using the semiotic approach

Reference Ratio = No. of Referenced Elements / Total Elements Reused    (2)

The reuse ratio and reference ratio are calculated by the formulas listed above: Eq. (1) gives the reuse ratio, and Eq. (2) gives the reference ratio. The reference ratio is 0.145, or 14.5%, because this ontology was generated using a few existing ontologies as a guide. The magnitude of the reuse ratio is 0.012. Table 2 depicts the quantitative ontology properties. Figure 5 is a graphical depiction of the structured source data for qualitative evaluation by the semiotic method. A community of 97 users of this knowledge base evaluated the nano-energy ontology. Accuracy, clarity, comprehensiveness, consistency, interpretability, lawfulness, relevance, and richness are among the criteria used for assessment. From Fig. 5, the votes received from the participating students strongly indicate that the qualitative metrics have very high scores.
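As a worked form of Eqs. (1) and (2), the two ratios reduce to simple divisions; the helper below is illustrative only, since the underlying element counts behind the reported 0.012 and 0.145 are not broken out in Table 2.

def reuse_ratio(num_reused, total_domain_elements):
    # Eq. (1): reused elements over total elements in the domain
    return num_reused / total_domain_elements

def reference_ratio(num_referenced, total_reused):
    # Eq. (2): referenced elements over total elements reused
    return num_referenced / total_reused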

6 Conclusion

This is the first work of its kind to build an ontology based on the knowledge of nano-energy. The approach implements an information modeling technique in the form of classified XML, converted into its corresponding RDF structure. The


modeled computational ontology for nano-energy was built in WebProtégé and visualized using WebVOWL. The evaluation's results suggest that the ontology can be used for its intended purposes. The modeled ontology is qualitatively and quantitatively analyzed using a semiotic method, yielding a 0.012 reuse ratio and a 0.145 reference ratio. As a result, this ontology is the first and best for describing nano-energy as a separate domain and for fostering energy informatics.

References

1. Nasution MKM (2018) Ontology. J Phys: Conf Ser 1116(2):022030. IOP Publishing
2. Zi Y, Wang ZL (2017) Nanogenerators: an emerging technology towards nanoenergy. APL Mater 5(7):074103
3. Serrano E, Rus G, Garcia-Martinez J (2009) Nanotechnology for sustainable energy. Renew Sustain Energy Rev 13(9):2373–2384
4. Sequeira S (2015) Applications of nanotechnology in renewable energy sources exploitation
5. Wang H, Jasim A, Chen X (2018) Energy harvesting technologies in roadway and bridge for different applications—a comprehensive review. Appl Energy 212:1083–1094
6. Fromer NA, Diallo MS (2013) Nanotechnology and clean energy: sustainable utilization and supply of critical materials. In: Nanotechnology for sustainable development. Springer, Cham, pp 289–303
7. Cotterell M, Zheng J, Sun Q, Wu Z, Champlin C, Beach A (2012) Facilitating knowledge sharing and analysis in energy informatics with the ontology for energy investigations (OEI)
8. Raina N, Sharma P, Slathia PS, Bhagat D, Pathak AK (2020) Efficiency enhancement of renewable energy systems using nanotechnology. In: Nanomaterials and environmental biotechnology. Springer, Cham, pp 271–297
9. Caetano NS, Mata TM, Martins AA, Felgueiras MC (2017) New trends in energy production and utilization. Energy Procedia 107:7–14
10. Sui D, Langåker VH, Yu Z (2017) Investigation of thermophysical properties of nanofluids for application in geothermal energy. Energy Procedia 105:5055–5060
11. Mallikarjuna K, Reddy YS, Reddy KH, Kumar PS (2021) Nanofluids and nanocoatings used for solar energy harvesting and heat transfer applications: a retrospective review analysis. Mater Today: Proc 37:823–834
12. Zhang Q, Uchaker E, Candelaria SL, Cao G (2013) Nanomaterials for energy conversion and storage. Chem Soc Rev 42(7):3127–3171
13. Luther W, Eickenbusch H, Kaiser OS, Brand L (2015) Application of nanotechnologies in the energy sector. Hessen Trade & Invest GmbH
14. Deepak G, Santhanavijayan A (2020) OntoBestFit: a best-fit occurrence estimation strategy for RDF driven faceted semantic search. Comput Commun 160:284–298
15. Kumar A, Deepak G, Santhanavijayan A (2020) HeTOnto: a novel approach for conceptualization, modeling, visualization, and formalization of domain centric ontologies for heat transfer. In: 2020 IEEE international conference on electronics, computing and communication technologies (CONECCT). IEEE, pp 1–6
16. Deepak G, Teja V, Santhanavijayan A (2020) A novel firefly driven scheme for resume parsing and matching based on entity linking paradigm. J Discr Math Sci Cryptogr 23(1):157–165
17. Deepak G, Kumar N, Bharadwaj GVSY, Santhanavijayan A (2019) OntoQuest: an ontological strategy for automatic question generation for e-assessment using static and dynamic knowledge. In: 2019 Fifteenth international conference on information processing (ICINPRO). IEEE, pp 1–6


18. Deepak G, Priyadarshini JS (2018) Personalized and enhanced hybridized semantic algorithm for web image retrieval incorporating ontology classification, strategic query expansion, and content-based analysis. Comput Electr Eng 72:14–25
19. Varghese L, Deepak G, Santhanavijayan A (2019) An IoT analytics approach for weather forecasting using Raspberry Pi 3 model B+. In: 2019 Fifteenth international conference on information processing (ICINPRO). IEEE, pp 1–5

An AI-Based Forensic Model for Online Social Networks Varsha Pawar and Deepa V. Jose

Abstract With the growth of social media usage, social media crimes are also rising rapidly. Investigation of such crimes involves the thorough examination of data such as users, activities, networks, and content. Although investigating social media looks like a straightforward process, it is always challenging for investigators due to the complexity involved. Due to the immense growth of social media content, manual processing of data for investigation is not possible. Most of the works in this area provide an automatic or semi-automated model, and many of the contributions lack logical reasoning and explainability for the evidence extracted. Searching techniques like entity-based search and explainable AI add value through quick retrieval within an appropriate scope and by explaining the results to the court of law. This paper provides a model that adds these new techniques to the basic forensic process.

Keywords Cybercrime · AI · Social media · Forensics

V. Pawar (B)
Department of Computer Applications, CMR Institute of Technology, Bengaluru, India
e-mail: [email protected]

V. Pawar · D. V. Jose
Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India
e-mail: [email protected]


1 Introduction

Social media has become a hub for sharing information among people. Social media can provide huge amounts of information about human behavior and relationships, offering insight into users' psychological behavior. This information gives an intuition into several aspects of processing a case legally. Social media data comprise user, activity, network, and content data. A user's data can be extracted from the user profile. Activity refers to the time and location of an action performed by the user on the social media platform. Network data provide the number of contacts a user has on the platform, revealing their relationships. Content refers to tweets, likes, and re-tweets. Different types of people are involved, such as suspects, victims, and witnesses.

1.1 Importance of Crime Data Analysis

Due to the fragility and volatility of forensic evidence, certain procedures need to be observed to ensure that the record is not altered in the course of its acquisition, packaging, transfer, and storage. These processes define the standards for data handling and the protocols to be followed in the course of data acquisition. The requirements for the collection of forensic data from social media are normally stated as:

• Collecting the relevant records or content material from one or more online social networks.
• Accumulating metadata associated with the actual data of social media Websites.
• Making sure the authenticity and coherence of records are maintained throughout the forensic process.

1.2 Social Media Crimes—Types

See Fig. 1.

Profile Hacking: Taking over a person's profile to gather their personal information so as to misuse the data for the attacker's own benefit.

Cyber Bullying: The act of sharing abusive messages intended to harm the target mentally.

Information Theft: Obtaining the confidential information of a person without their knowledge in order to perform fraudulent activities.

Link Baiting: A mechanism of sending deceptive links that lure users into clicking on them.

Photo Morphing: A mechanism of altering the image of an individual and misusing it.

Romance and Dating Scams: These occur when a person sends messages under a fake identity to misguide victims, leading to illegal activities.


Fig. 1 Types of social media crimes

Offer and Shopping Scams: These occur when a person advertises attractive shopping offers and steals the victim's information.

1.3 Evidence Acquisition and Provenance Management

Acquisition in social media is the process of copying data from social networks. The primary aim of the forensic procedure is to preserve evidence without changing its originality while performing a standard investigation, by extracting, identifying, and verifying the digital proof for the purpose of reconstructing past events. Provenance is information about the entities, activities, and people involved in creating a data element or thing. This information can be used to assess its quality and reliability. Provenance management sketches the source of the evidence, and who created the data and when. It also defines how the data have been distributed and traces them. As social media data contain personal information, it is difficult to extract the data directly. Most online social networks provide an API to extract the data, and data are mostly extracted through Web crawlers and APIs. Keyword-based search is commonly used to extract evidence, but with this type of search we are not able to differentiate the entities. Customized queries must be supported on the data. The sequence of events needs to be traced so as to reconstruct the actions between victim and suspect. A minimal provenance-record sketch is given after this paragraph.
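The following Python sketch captures the entity-activity-agent description of provenance above as a simple record; the field names are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    # Who created the data, which activity produced it, and where it came from
    entity: str                      # e.g. a post or message identifier
    activity: str                    # e.g. "api_acquisition" or "crawl"
    agent: str                       # investigator or tool that acquired it
    source: str                      # originating social network / API endpoint
    acquired_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = ProvenanceRecord(
    entity="post:12345", activity="api_acquisition",
    agent="investigator-01", source="example-osn/api/v2")
print(record)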


2 Literature Review

Ahvanooey et al. [1] proposed a text watermarking technique to verify data integrity. It also performs ownership verification on Latin content of online social networks. With this model, only common friends in social media can be found and analyzed. They evaluated the model using parameters like IR, EC, and DR. Weightage of the relationships could also have been identified.

Gon [2] implemented data mining approaches and performed conceptual and relational analysis on Instagram data using the Leximancer software. Correlation of data and exploratory analysis still need to be performed.

Balakrishnan et al. [3] worked with machine learning algorithms, namely RF and NB, to detect cyberbullying content in social media. They used wrapper feature selection to extract the best features. Fine-grained cyberbullying classification remains to be performed, and accuracy can be increased by implementing feature extraction methods.

Ngejane et al. [4] implemented machine learning methods and multi-layer perceptrons to detect bullying behavior in social media. They found key phrases that provoke sexual predatory activities. The data comprise features that do not contribute significantly to all the classes and hence overlap them. They used the What-If Tool to examine the model and obtained a probability close to 0.5.

Sun et al. [5] proposed a digital forensic investigation model with NLP-based techniques for online communication. In this model, they used LDA models for crime identification. It should be enhanced for a dynamically growing number of topics.

Arshad et al. [6] implemented data mining approaches for the collection and analysis of data. There is no explicit methodology for provenance management, and machine learning methods could be implemented to improve the performance of the model.

Amato et al. [7] used a semantic-based methodology and NLP techniques to analyze digital evidence. They correlated individuals' data and their relationships using SWRL, and information is stored as RDF assertions. The sequence of events in the incident is not traced.

Arshad et al. [8] implemented a multi-layered semantic framework by building a hybrid ontology for interpreting and governing heterogeneous data into well-defined structured data. AI and machine learning algorithms could be implemented to enhance the performance of the model.

Lu et al. [9] proposed a framework for evidence extraction with hot-word frequency analysis. Physical location acquisition is possible with this model, but it does not provide a mechanism to explicitly store social media content along with metadata.


Arshad et al. [10] introduced a theoretical model for building and interpreting the evidence of incidents. By correlating the events that occurred, the most relevant information can be filtered; however, such correlations cannot describe events with durations and overlapping slots.

Arshad et al. [11] proposed a semi-automated forensic investigation model which automatically analyzes data, but there is no logical reasoning explaining the evidence extracted.

3 Proposed Model See Fig. 2.

3.1 Model Description

3.1.1 Initialization

Incident initialization is done once a cybercrime complaint is raised. Proper authorization of the incident should be obtained, and before initialization of the process, the model's infrastructure should also be ready.

3.1.2 Identify Social Media

The victim's participation in social media should be identified, along with the platform where the crime happened. A record of the identified social media, along with the relevant duration, should be maintained.

Fig. 2 AI-based analysis engine

3.1.3 Evidence Collection

Semantic-based or entity-based search should be implemented based on the entities identified. Parameters like the victim's name, location, and the identified topic are used in the search mechanism. The conversations between the victim and suspects should be fetched and recorded. A simplified sketch of such an entity-based filter follows.
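This is a minimal illustration of entity-based filtering, under the assumption that acquired messages arrive as dicts with 'author', 'location', 'text', and 'timestamp' keys; it is not the paper's concrete implementation.

def entity_search(messages, victim=None, location=None, topic=None):
    # Keep only messages matching all supplied entity parameters.
    results = []
    for m in messages:
        if victim and victim.lower() not in m["author"].lower():
            continue
        if location and location.lower() != m.get("location", "").lower():
            continue
        if topic and topic.lower() not in m["text"].lower():
            continue
        results.append(m)
    # Sort chronologically so the sequence of events is preserved
    return sorted(results, key=lambda m: m["timestamp"])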

3.1.4 Evidence Analysis

The evidence collected with respect to the identified entities should be filtered and sorted. The data are analyzed by constructing assertions and testing them as hypotheses.

3.1.5 Evidence Interpretation

The evidence fetched is fed to an explainable AI method in order to obtain logical reasoning for the evidence and thus interpret the evidence found. It can show the steps by which the model predicted the victim and his messages as evidence.

3.1.6 Evidence Preservation

The evidence found should be preserved without altering any data, and it should be stored along with its metadata. The data should be stored with computed hash values, as sketched below.
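A sketch of the hash-based preservation step, pairing each evidence item with its SHA-256 digest so that later tampering can be detected; Python's standard hashlib and json modules suffice, though the record layout is our assumption.

import hashlib
import json

def preserve(evidence_items):
    # Store each item together with a SHA-256 hash of its canonical form;
    # recomputing the hash later verifies that nothing was altered.
    preserved = []
    for item in evidence_items:
        canonical = json.dumps(item, sort_keys=True).encode("utf-8")
        preserved.append({"item": item,
                          "sha256": hashlib.sha256(canonical).hexdigest()})
    return preserved

def verify(preserved_item):
    canonical = json.dumps(preserved_item["item"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest() == preserved_item["sha256"]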

3.1.7 Evidence Presentation

The collected evidence, together with its logical reasoning, should be submitted to investigators in a properly authenticated format.

3.2 Formal Description of the Model

The steps involved in the proposed model are as follows (a control-flow sketch in code is given after the list):

1. Identify the incident and initialize it.
2. If (incident = crime and location = social media and jurisdiction = TRUE), invoke the investigation process; else, end the process and specify the termination.
3. Identify the particular social network.
4. Perform entity-based search to extract the data of the primary incident extraction zone.
5. Formulate the other parameters.
6. Preserve the primary incident data by creating a forensic copy.
7. The model identifies and analyses the events related to the incident.
8. Maintain the sequence of events by sorting the data based on time.
9. Extract the evidence.
10. Explain the logical reasoning behind the extracted evidence and its events using an explainable AI method.
11. If (explainability = TRUE), fetch the evidence and jump to step 14; else, jump to step 12.
12. Identify the other entities and perform entity-based search to formulate the secondary extraction zone.
13. Repeat steps 5 to 10 until explainability is true.
14. Associate metadata with the evidence.
15. Preserve the evidence.
16. Submit the evidence.

5 Comparison with Other Models The phases of the AIMOSN model are compared with other models so as to ensure the forensic process is incorporated effectively (Table 2).

216

V. Pawar and D. V. Jose

Table 1 Verifying the model with ISO standards

ISO 27050-4:2021

AIMOSN

Possibility of electronic discovery

Yes

Identification of systems

Yes

Review

Yes

Integrity of data

Yes

Provenance

Yes

Management and storage of electronic discovery

Yes

Table 2 Comparing the model with other models

Phases of the model                        AIMOSN   FIMOSN (Arshad et al. [10])   Digital forensic investigation model (Montasari [12])
Automatic incidence zone identification    Yes      No                            No
Drafting                                   Yes      Yes                           Yes
Evidence detection                         Yes      Yes                           No
Evidence collection                        Yes      Yes                           Yes
Evidence analysis                          Yes      Yes                           Yes
Evidence provenance                        Yes      Yes                           No
Evidence storage                           Yes      Yes                           No
Evidence presentation                      Yes      Yes                           Yes

Compared with the other models, an incident zone is identified by performing an entity-based search, which refines the social media records to be searched for evidence (Table 3).

Table 3 Comparing features of AIMOSN with the other models

Feature                               AIMOSN   FIMOSN (Arshad et al. [10])   Forensic model (Montasari [12])
Automated incident zone collections   Yes      No                            No
Automated exploration                 Yes      Yes                           Yes
Evidence extraction explainability    Yes      No                            No
Iteration of process                  Yes      Yes                           Yes


6 Conclusion

The model has provided solutions for a few of the problems in social media forensics, such as automated incident zone identification, planning, analysis, and maintaining the evidence. However, the model has not specified any explicit methodology for provenance management. There are not many sufficient datasets that can be applied to forensic science, and this lack of expected datasets restricts researchers and limits their illustrations. Moreover, the data of social media are personal to the user, and hence extracting personal information is not allowed. In this paper, explainability of the evidence is proposed, which shows how the model is able to find the evidence. The model has not specified a particular explainability method to illustrate the logical reasoning. In future, we want to implement the model with particular explainable AI methods to trace the evidence and for decision-making. Evidence with proper reasoning and explainability can be directly submitted to a court of law.

References

1. Ahvanooey MT, Li Q, Zhu X, Alazab M, Zhang J (2020) ANiTW: a novel intelligent text watermarking technique for forensic identification of spurious information on social media. Comp Secur 90:101702. https://doi.org/10.1016/j.cose.2019.101702
2. Gon M (2021) Local experiences on Instagram: social media data as source of evidence for experience design. J Dest Mark Manage 19:100435. https://doi.org/10.1016/j.jdmm.2020.100435
3. Balakrishnan V, Khan S, Arabnia HR (2020) Improving cyberbullying detection using Twitter users' psychological features and machine learning. Comp Secur 90:101710. https://doi.org/10.1016/j.cose.2019.101710
4. Ngejane CH, Eloff JHP, Sefara TJ, Marivate VN (2021) Digital forensics supported by machine learning for the detection of online sexual predatory chats. In: Forensic science international: digital investigation, vol 36, p 301109. https://doi.org/10.1016/j.fsidi.2021.301109
5. Sun D, Zhang X, Choo KKR, Hu L, Wang F (2021) NLP-based digital forensic investigation platform for online communications. Comp Secur 104:102210. https://doi.org/10.1016/j.cose.2021.102210
6. Arshad H, Jantan A, Omolara E (2019) Evidence collection and forensics on social networks: research challenges and directions. Digit Investig 28:126–138. https://doi.org/10.1016/j.diin.2019.02.001
7. Amato F, Cozzolino G, Moscato V, Moscato F (2019) Analyse digital forensic evidences through a semantic-based methodology and NLP techniques. Futur Gener Comput Syst 98:297–307. https://doi.org/10.1016/j.future.2019.02.040
8. Arshad H, Jantan A, Hoon GK, Butt AS (2019) A multilayered semantic framework for integrated forensic acquisition on social media. Digit Investig 29:147–158. https://doi.org/10.1016/j.diin.2019.04.002
9. Lu R, Li L (2019) Research on forensic model of online social network. In: 2019 IEEE 4th international conference on cloud computing and big data analysis (ICCCBDA). https://doi.org/10.1109/icccbda.2019.8725746
10. Arshad H, Jantan A, Hoon GK, Abiodun IO (2020) Formal knowledge model for online social network forensics. Comp Secur 89:101675. https://doi.org/10.1016/j.cose.2019.101675


11. Arshad H, Omlara E, Abiodun IO, Aminu A (2020) A semi-automated forensic investigation model for online social networks. Comp Secur 97:101946. https://doi.org/10.1016/j.cose.2020.101946
12. Montasari R (2016) A comprehensive digital forensic investigation process model. Int J Electron Secur Digit Forensics 8(4):285. https://doi.org/10.1504/ijesdf.2016.079430

Protection Against SIM Swap Attacks on OTP System

Ebin Varghese and R. M. Pramila

Abstract One-time password-based authentication stands out as the most effective in the cluster of password-less authentication systems. It can be used as an authentication factor for login rather than only as an account recovery mechanism. Recent studies show that attacks like SIM swap and device theft pose a significant threat to the system. In this paper, a new security system is proposed to prevent attacks like SIM swap on OTP systems. The system contains a risk engine made up of supervised and unsupervised machine learning model blocks trained using the genuine user data space, and the final decision of the system is subject to a decision block that works on the principles of voting and the logic of an AND gate. The proposed system performed well in detecting fraud users, proving the system's significance in solving the problems faced by an OTP system.

Keywords Password-less authentication · Authentication · Security · OTP · Biometrics · Keystroke dynamics · Password · Machine learning models · Supervised learning · Unsupervised learning

1 Introduction

There is always scope for adding new ideas that can be nourished into services that aid community members, which comes with a responsibility to ensure community and service sustainability. With the increasing number of Web sites and applications that can aid us, users become more vulnerable if their data is not protected, so it is evident that security is one of the essential features of a sustainable application. Passwords are difficult to keep track of: security practice recommends that passwords be long, difficult to guess, different across apps, and changed regularly.

E. Varghese (B) · R. M. Pramila Christ University, Bangalore, India e-mail: [email protected] R. M. Pramila e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_19


Even the most attentive user may be tempted to take shortcuts when keeping an ever-changing array of complex passwords for an average of 90 accounts. It is necessary to create an alternative system that is password-independent. The OTP system is one such alternative that comes close to an ideal authentication system; however, studies show that OTP systems can also be vulnerable to specific attacks such as SIM swap. To prevent attacks like SIM swap, researchers have used user parameters and user-specific unique patterns to detect such attacks. This raises the question of how well a system that combines the functionalities of a risk engine and biometric authentication works to prevent attacks on OTP authentication systems. This research paper proposes an additional layer of protection combining a biometric and risk-based authentication framework to solve SIM swap attacks. The rest of the paper is organized as follows: Sect. 2 introduces related works done on the research topic, Sect. 3 describes the whole architecture of the proposed model that can detect and prevent a fraud entry, and Sect. 4 presents experimental results, compares the models, and evaluates the performance of the proposed model. Finally, Sect. 5 summarizes and concludes the research paper.

2 Related Works

Every user is a unique entity, and the job of the authentication system is to uniquely identify the user by verifying the data shared by the user. Any deviation from the expected parameter set can be classified as an anomaly or an outlier. This is the fundamental idea behind many risk-based authentication systems [1, 2]. The user data used to capture unique patterns is not limited to static data; researchers have experimented with risk models using dynamic features alongside static user features [3], which increased model performance. The risk models, in most cases, are powered by machine learning algorithms [4] that detect anomalous records by comparing them with patterns observed within the genuine user class. A risk estimate is used to determine whether an access request should be allowed or rejected, considering a variety of parameters. Such a function's output is based on a risk threshold, and access is permitted if the risk threshold is met [5]. It is good practice to combine the functionalities of a risk engine with biometric features to obtain a better-performing model for user filtering [6]. Negligence, blunders, illness, mortality, insider threats, and susceptibility to social engineering are elements of an organization's human threat surface [7]. Passwords have various problems such as reuse, phishing, and leakage. Many papers have addressed the loopholes in existing password authentication [8]. The survey paper [9] also addresses the problem of using the same password for various sites and found alarming results: the researchers could crack about one-third of the passwords the students had created on higher-level sites. All these vulnerabilities in password authentication raised the necessity of a password-less authentication system [9].


Faced with the difficulty of attracting users, the creators of new social networks have begun to investigate alternate techniques for authenticating users that do not require any form of login or registration process [10]. When a person enters the social network's Web site, an ideal system would recognize them. The need for such a system is not limited to the concept of a user-friendly system [11]; many domains work under the value of time, such as the medical industry, where some of the instruments in use are password-protected and need fast access. There are different password-less authentication methods now, such as browser fingerprinting, cryptography, external device-dependent FIDO, location-based authentication, zero factor, or email-based authentication. Password-less authentication forced developers to add more layers of security. Efforts have been made to build an app that stores the credentials in a database, where the camera would queue up and snap the user's photo while doing a face recognition check, but the problem with multifactor authentication is that the framework is not compatible with all devices [12]. Also, studies show that it is easy to mislead users when it comes to multifactor authentication like fingerprint authentication [13]. Among the password-less methods, OTP or one-time password stands out, and it is found that OTP can be considered the best system to prevent phishing attacks; man-in-the-middle attacks, malware, Trojans, instant messaging, and social networking sites are some of the most common vectors of phishing. To guarantee the quality of this system, various frameworks have also been introduced; one framework may be used on standard cellphones yet does not rely on a cellular or WiFi network. The framework's goal is to make present password-authenticated Web services more secure [14]. The most frequent method for distributing OTPs is the short messaging service available on mobile phones [15]. They proposed a new schema in which the OTP is sent as a code, which proved to be an effective security advancement for securing OTP distribution. An OTP protocol to evaluate the quality of transmission and the OTP system was created by Ma et al. [16]. They found that among the 544 assessed apps, 536 failed to follow the protocol entirely. An attacker can get the OTP in various ways, including wireless interception, mobile phone Trojans, and SIM swap attacks [17]. SIM swapping is the second part of the phishing scam and is one of the most recent scams. To begin, a criminal uses phishing to obtain the victim's basic personal information, after which he can intercept calls, texts, and other secret information [18]. So it is essential to add multiple layers of security that can deny access to an imposter even if he obtains the OTP. Keystroke dynamics and risk-based authentication are two main ideas tested by many researchers in pursuit of an ideal system. There are experimental proofs that verify the success of a system that combines both ideas; the experimental evaluation of this method gave a low equal error rate of 8.21% [6]. One of the OTP vulnerabilities, the SIM swap, can be solved by combining both of these ideas. Frequency-domain solutions more accurately capture habitual patterns and disclose unique biometric characteristics without the risk of convergence [19]. They transformed behavior into storable barcodes, and when running SVM models, the result was satisfactory. The key advantages are that it does not require any additional hardware, is cost-effective, and typing is what the user already does; thus, there is no need to exert extra effort [20]. Keystroke dynamics are improving through the discovery of rich and sensitive feature sets; by focusing the feature space and enhancing its quality, the system can obtain a low error rate [21, 22]. Risk-based authentication systems built upon an ML algorithm are fed with user parameters such as Internet Protocol address, location, number of failed attempts, and other features that help the model distinguish between a genuine user and an imposter based on the calculated risk score [23].

3 Proposed Model

In a traditional OTP system, the user is granted access if both the username and OTP are valid, and it is often used as an account recovery method. It is good practice to use OTP for authentication in the first place rather than as a recovery measure. Users find it satisfactory when authentication systems are replaced with the 'forgot password'-style recovery method [24]. The proposed system uses OTP authentication in the login phase. Its mechanisms can be grouped into three main components: data extraction, the risk engine, and the decision block, as depicted in Fig. 1.

Fig. 1 Proposed model architecture


3.1 Data Extraction

The system's effective functioning relies on user parameters and user keystroke data. A model trained on the proposed data space can easily differentiate users. In attacks like SIM swap or device theft, a fraudulent user can easily obtain a valid OTP. However, the proposed system can easily detect this kind of fraud entry. The proposed model collects user keystroke data and compares it with the keystroke data of the genuine account owner; therefore, having a valid OTP does not guarantee access. Along with keystroke data collection, the system collects user parameters at the time of login, such as date, login time, IP address, country, city, region, browser, browser version, OS, OS version, device identifier-1, device identifier-2, device identifier-3, and service provider.

3.2 Risk Engine

The risk engine is programmed to produce six class outputs as zeros and ones, representing the fraud and genuine user, respectively. The block is subdivided into two learning blocks, supervised and unsupervised. The supervised learning block contains support vector machine (SVM), K-nearest neighbors (KNN), and Naïve Bayes supervised machine learning algorithms trained using historical data containing user parameters, keystroke data, and a class label column. The SVM classifier is one of the best supervised learning classifiers. It can solve both linear and nonlinear problems and is helpful for a wide range of applications. The working principle of SVM is simple: the algorithm creates a line or a hyper-plane which separates the data into classes. Here, the model studies the underlying patterns in the data space and creates a hyper-plane that divides genuine user data points from fraudulent user data points, such that when an unseen data point arrives, the model can correctly choose the side of the hyper-plane the point belongs to; in our case, SVM classifies the new data point as either a genuine or a fraudulent user. The KNN model is a distance-dependent model relying on a pre-assigned parameter. It studies underlying patterns from the training data, assumes similarity between new cases and existing cases, and categorizes a new case into the most similar of the available classes. Unlike SVM and KNN, Naïve Bayes uses Bayes' theorem to decide the class that a parameter set belongs to. It compares the probability of the genuine class given the parameter set with the probability of the fraudulent class given the parameter set and labels the parameter set with the class of greater posterior probability. That is, P(Genuine | Parameter set) > P(Fraudulent | Parameter set) implies the given parameter set will be labeled as a genuine case. Thus, the supervised learning block produces three outputs from the three models used, and each output falls into one of the entries in the output class (genuine, fraud). Initially, there would not be any data of the genuine user for training the models; therefore, the proposed model operates under the assumption that records of the genuine user already exist and uses the same to train the models before it starts to classify users.
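A minimal scikit-learn sketch of this supervised block is given below; the feature matrix X, the labels y, and all hyperparameters are illustrative assumptions, not values from the paper (only the 70/30 split follows Sect. 4).

```python
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# A sketch of the supervised learning block. X holds combined user-parameter
# and keystroke features; y holds genuine(1)/fraud(0) labels.

def train_supervised_block(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=42)
    models = {
        "svm": SVC().fit(X_tr, y_tr),
        "knn": KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr),
        "naive_bayes": GaussianNB().fit(X_tr, y_tr),
    }
    return models, (X_te, y_te)

def supervised_votes(models, sample):
    # One 0/1 output per model; these three votes feed the decision block.
    return [int(m.predict([sample])[0]) for m in models.values()]
```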


Table 1 Precision, recall, and F1-score tell how well each model performed in detecting the fraud inputs

Model name                       Precision (%)   Recall (%)   F1-score (%)
SVM                              95              92           93
Naïve Bayes                      86              86           86
KNN                              94              92           93
One-class SVM                    90              90           90
Isolation forest                 93              88           89
Minimum covariance determinant   94              83           87

The unsupervised learning block is powered by three unsupervised learning models: one-class support vector machine, isolation forest, and minimum covariance determinant. The primary objective of this block is to identify outliers in the given data, that is, fraud entries. To make the models treat fraud cases as outliers, they should be trained on a dataset containing only genuine cases. The output of an outlier detection model is 1 or −1, representing an inlier or an outlier, respectively. The system considers an inlier a genuine user and an outlier a fraudulent user. Thus, the unsupervised learning block also produces three outputs from the output class (genuine, fraud). The F1-score is the harmonic mean of recall and precision, and it provides a more accurate picture of mistakenly classified cases than the accuracy metric. Even though most models do not give an ideal F1-score, the system performs well for the task at hand. The final system prediction depends on the predictions made by the models of both the supervised and unsupervised learning blocks. Table 1 shows model performance based on F1-score.
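A corresponding sketch of the unsupervised block follows. EllipticEnvelope is scikit-learn's minimum covariance determinant estimator; the nu and contamination values are illustrative assumptions.

```python
from sklearn.covariance import EllipticEnvelope
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# A sketch of the unsupervised block, trained only on genuine-user records
# as described above. Each model outputs +1 (inlier) or -1 (outlier).

def train_unsupervised_block(X_genuine):
    return {
        "one_class_svm": OneClassSVM(nu=0.05).fit(X_genuine),
        "isolation_forest": IsolationForest(random_state=42).fit(X_genuine),
        "mcd": EllipticEnvelope(contamination=0.05).fit(X_genuine),
    }

def unsupervised_votes(models, sample):
    # Map +1/-1 to 1 (genuine) / 0 (fraud) to match the decision block.
    return [1 if m.predict([sample])[0] == 1 else 0 for m in models.values()]
```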

3.3 Decision Block

The decision block evaluates the six outputs from the risk engine through voting and AND-gate logic and decides whether or not to grant access to the user. In voting, the final class label is the class label predicted most frequently by the classification models. This method is applied to the three outputs from the supervised block and the three outputs from the unsupervised block separately, creating two voted classes at the end of the process. The final class of the parameter set is decided by the AND-gate logic, where 1 and 0 represent the genuine and fraud user, respectively. So the system considers the user genuine only if the voted classes of both the supervised and unsupervised blocks predict the user as genuine, as sketched below.
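The following sketch shows one possible reading of this voting-plus-AND logic; the vote lists in the example are illustrative.

```python
from collections import Counter

# A sketch of the decision block: majority voting within each block,
# then an AND over the two voted outputs (1 = genuine, 0 = fraud).

def majority_vote(predictions):
    return Counter(predictions).most_common(1)[0][0]

def grant_access(supervised_preds, unsupervised_preds):
    voted_supervised = majority_vote(supervised_preds)      # e.g. [1, 1, 0] -> 1
    voted_unsupervised = majority_vote(unsupervised_preds)  # e.g. [1, 0, 0] -> 0
    return bool(voted_supervised and voted_unsupervised)    # AND gate

print(grant_access([1, 1, 0], [1, 1, 1]))  # True: both blocks vote genuine
print(grant_access([1, 1, 0], [1, 0, 0]))  # False: unsupervised votes fraud
```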


4 Results and Discussions

The dataset used for the experiments combines parameters from primary and secondary datasets, as shown in Table 2 [25]. With the help of IP-logger programming, the primary dataset of user parameters, such as date and time of login, location of the user, IP address, country, city, region, the user's browser details, and the user's device details, is collected. For the keystroke parameters, a secondary benchmark dataset is used [25]. The subject is presented with a password that must be typed. The correctness of the password is tested as the person types it. The application encourages the subject to retype the password if the subject makes a typographical error. In this way, timestamps for 50 correctly typed passwords are recorded in each session. The software application captures the event (i.e., keydown or keyup), the name of the key involved, and a timestamp for the keystroke event whenever the subject pushes or releases a key. To create highly accurate timestamps, an external reference clock was used. Each row of data is the timing information for a single subject's repetition of the password. The subject column is a unique identifier for each subject (e.g., s002 or s057). For the study, subject 1 is chosen as genuine and subject 2 as fraud. The F1-score is the harmonic mean of recall and precision. It provides a more accurate picture of mistakenly classified cases than the accuracy metric; Fig. 2 shows the performance of the six models in classifying genuine and fraud users. Among the supervised models, KNN performed well, and for the unsupervised models, one-class SVM produced an F1-score of 90%. For the task at hand, a model with a decent F1-score is enough, since the final output depends directly on each model's performance in an ensemble fashion. For training the supervised models, Sklearn's train-test split is used with 70% of the total data space for training and 30% for model evaluation. In the case of unsupervised learning, a separate dataset is made from the parent dataset to train the outlier detection models. The models are trained using only genuine cases, and for evaluation purposes, parameter sets are extracted randomly; five random parameter sets are selected. The output from each model is recorded to quantify the performance of the proposed model. The system predicts a user as genuine if and only if both the voted classes are genuine, and it is found that the system's class prediction is always the same as the actual class. The proposed model achieved an average F1-score of 90%.

Table 2 Sample from the parent dataset

Date         Login time   IP address      Country   City    Region   Browser   Version        OS    Key.P1   Key.P2   Key.P3
26.09.2021   10:02        27.7.104.25     India     Kochi   Kerala   Mozilla   96.0           Win   0.2312   0.0172   0.1633
06.09.2021   11:22        157.46.138.88   India     Kochi   Kerala   Chrome    94.0.4606.61   Win   0.2534   0.1142   0.1496
16.07.2021   01:25        157.46.138.88   India     Kochi   Kerala   Chrome    94.0.4606.61   Win   0.1484   0.0932   0.2583

Fig. 2 Performance comparison of models used

5 Conclusion

The study discussed the advantages and disadvantages of a password system and the need for a system that is independent of passwords. The paper reviewed previous works and contributions on password-less authentication security systems, and


it proposed that OTP systems are the most prominent method in the cluster of password-less authentication systems; however, they are prone to device theft and SIM swap attacks. Through these kinds of attacks, a fraudulent user can obtain a valid OTP and thus get access to user accounts. Combining the functionalities of a risk engine and biometric authentication offers a significant addition to the security and defense against SIM swap attacks. The proposed system has two functionalities: one is to detect a fraud entry and the other is to prevent it. Results and experiments show significant proof that the system succeeds in achieving both.

References

1. Wu Z (2015) A novel behavior-based user authentication scheme. In: 2015 IPFW student research and creative endeavor symposium, Book 72
2. Rocha CC, Lima JCD, Dantas MAR, Augustin I (2011, June) A2BeST: an adaptive authentication service based on mobile user's behavior and spatio-temporal context. In: 2011 IEEE symposium on computers and communications (ISCC). IEEE, pp 771–774
3. Spooren J, Preuveneers D, Joosen W (2015, April) Mobile device fingerprinting considered harmful for risk-based authentication. In: Proceedings of the eighth European workshop on system security, pp 1–6
4. Djosic N, Nokovic B, Sharieh S (2020, June) Machine learning in action: securing IAM API by risk authentication decision engine. In: 2020 IEEE conference on communications and network security (CNS). IEEE, pp 1–4
5. dos Santos DR, Marinho R, Schmitt GR, Westphall CM, Westphall CB (2016) A framework and risk assessment approaches for risk-based access control in the cloud. J Netw Comput Appl 74:86–97
6. Traore I, Woungang I, Obaidat MS, Nakkabi Y, Lai I (2014) Online risk-based authentication using behavioral biometrics. Multi Tools Appl 71(2):575–605
7. Cuchta T, Blackwood B, Devine TR, Niichel RJ, Daniels KM, Lutjens CH, …, Stephenson RJ (2019, September) Human risk factors in cybersecurity. In: Proceedings of the 20th annual SIG conference on information technology education, pp 87–92
8. Morii M, Tanioka H, Ohira K, Sano M, Seki Y, Matsuura K, Ueta T (2017, July) Research on integrated authentication using passwordless authentication method. In: 2017 IEEE 41st annual computer software and applications conference (COMPSAC), vol 1. IEEE, pp 682–685
9. Haque ST, Wright M, Scielzo S (2013, February) A study of user password strategy for multiple accounts. In: Proceedings of the third ACM conference on data and application security and privacy, pp 173–176


10. Ozan E (2017, January) Password-free authentication for social networks. In: 2017 IEEE 7th annual computing and communication workshop and conference (CCWC). IEEE, pp 1–5
11. El-Mahi E (2020) Password-less authentication in medical lab devices. University of Limerick
12. Kennedy W, Olmsted A (2017, December) Three factor authentication. In: 2017 12th International conference for internet technology and secured transactions (ICITST). IEEE, pp 212–213
13. Oogami W, Gomi H, Yamaguchi S, Yamanaka S, Higurashi T. Observation study on usability challenges for fingerprint authentication using webAuthn-enabled android smartphones. Age, 20, 29
14. Zhao S, Hu W (2018) Improvement on OTP authentication and a possession-based authentication framework. Int J Multi Intell Secur 3(2):187–203
15. Tandon A, Sharma R, Sodhiya S, Vincent PM (2013) QR Code based secure OTP distribution scheme for authentication in net-banking. Int J Eng Technol 5(3):0975–4024
16. Ma S, Feng R, Li J, Liu Y, Nepal S, Bertino E, …, Jha S (2019, December) An empirical study of sms one-time password authentication in android apps. In: Proceedings of the 35th annual computer security applications conference, pp 339–354
17. Lee K, Kaiser B, Mayer J, Narayanan A (2020) An empirical study of wireless carrier authentication for {SIM} swaps. In: Sixteenth symposium on usable privacy and security ({SOUPS} 2020), pp 61–79
18. Karia MAR, Patankar DAB, Tawde P (2014) SMS-based one time password vulnerabilities and safeguarding OTP over network. Int J Eng Res Technol 3(5):1339–1343
19. Alpar O (2021) Biometric keystroke barcoding: a next-gen authentication framework. Expert Syst Appl 177:114980
20. Miya J, Bhatt M, Gupta M, Anas M (2017) A two factor authentication system for touchscreen mobile devices using static keystroke dynamics and password
21. Wang Y, Wu C, Zheng K, Wang X (2019) Improving reliability: user authentication on smartphones using keystroke biometrics. IEEE Access 7:26218–26228
22. Kim J, Kang P (2020) Freely typed keystroke dynamics-based user authentication for mobile devices based on heterogeneous features. Pattern Recogn 108:107556
23. Misbahuddin M, Bindhumadhava BS, Dheeptha B (2017, August) Design of a risk based authentication system using machine learning techniques. In: 2017 IEEE smartworld, ubiquitous intelligence and computing, advanced and trusted computed, scalable computing and communications, cloud and big data computing, internet of people and smart city innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). IEEE, pp 1–6
24. Andrés S (2015) Zero factor authentication: a four-year study of simple password-less website security via one-time emailed tokens
25. Killourhy KS, Maxion RA (2009, June) Comparing anomaly-detection algorithms for keystroke dynamics. In: 2009 IEEE/IFIP international conference on dependable systems and networks. IEEE, pp 125–134

One Time Password-Based Two Channel Authentication Mechanism Using Blockchain

H. P. Asha and I. Diana Jeba Jingle

Abstract The use of fog nodes serving IoT devices is increasing every day with more and more home automation, industry automation, automobile automation, etc. Security threats to these devices are also increasing. One of the threats is impersonating a fog node, stealing data, and taking control of the network, which is known as the Sybil attack. To provide security, most fog devices use one-step or two-step authentication and sometimes encryption. With static passwords, there is a chance of compromise through password sharing and leaking. Some weak encryption algorithms in use have also been compromised. Data about fog nodes in the network is stored in weak databases and can be tampered with. An OTP-based Two Channel Authentication Mechanism (OTPTAM) that authenticates the fog nodes with metadata stored in a blockchain database and communicates over channels encrypted with elliptical ciphers can solve the majority of these problems. Metadata of the nodes, such as the Bluetooth MAC address, network MAC address, and telephone number, is stored in the blockchain, and the OTP is exchanged via these channels to ensure the authenticity of the fog nodes.

Keywords OTP · Blockchain · Hashing · Bluetooth · Fog nodes

H. P. Asha (B) · I. D. J. Jingle
Department of Computer Science and Engineering, School of Engineering and Technology, Christ (Deemed to be) University, Bangalore, India
e-mail: [email protected]
I. D. J. Jingle
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_20

1 Introduction

With the increase of IoT devices in day-to-day applications, ranging from the simple electric bulb to complex self-driving cars, the need to apply Big Data to compute this data grows day by day. Non-existent or low computing power on IoT devices, and the latency of forwarding data to the Cloud for computation, leave us with fog computing: closer to the IoT devices, with good computing power and decision-making capability at lower latency. Increased computing needs that can afford latency are forwarded

to the Cloud. The Cloud is a widely distributed, network-based service that processes and stores data hosted at remote locations, with delivery of services, on-demand storage, and computing services over the Internet on a pay-as-you-go basis. Fog computing greatly reduces the amount of data being sent to and from the cloud, reducing latency as a result of local computation while minimizing security risks. However, it still inherits a few security challenges from the cloud computing architecture. The flexibility to move within the network makes it easier for an attacker to roam around the network and cause damage if he gains access. Network communication attracts various attacks, possibly due to an insecure environment, insecure communication channels, energy restrictions, and many other networking circumstances, or due to the lack of computing power and hardware cost affordability. This opens the door to many threats. The majority of threats have been addressed and are being addressed, but more robust and faster algorithms are still needed for faster detection and prevention of network attacks.

1.1 Motivation

One Time Password or OTP technology is not a new concept. A random number or string is generated and sent to the client via email or SMS during registration, to verify that the person logging in is the same one who registered [1, 2]. OTP is used in many applications, like banking applications [3], forgotten-password flows, etc. This has proved to be a highly secure mode of authentication compared to static passwords, which might be stolen. Blockchain technology was introduced in 2008 and has become very popular over the years, mainly because of cryptocurrency, or bitcoin. Bitcoin usage is becoming popular, and it uses blockchain technology for storing data. A blockchain is an append-only immutable database, and data is stored in multiple databases in multiple places [4]. Data is committed only once all the participating databases are able to store it. The hash of every data item, added to the previous hash, is calculated; any change in data would result in a hash-data mismatch. It is almost impossible to tamper with a blockchain database. Multi-channel authentication, primarily two channel authentication, has been used without our knowledge for many years [5]. Consider the OTP that we receive from an application via smartphone messaging/email: here, smartphone messaging or email is one channel, while the password keyed in goes via the primary data channel over the Internet via Wi-Fi or mobile data.

1.2 Contribution

In the proposed method, a blockchain-based immutable database is used to store the metadata of fog nodes, and a two channel OTP-based authentication method is used to


detect and prevent security attacks. Two channel authentication is a mode of authentication that ensures the validity of the client metadata by sending the OTP to the application over one channel and receiving it back over another channel. This mode of authentication is already used extensively in mobile applications to ensure the phone number is valid: mobile applications send the OTP over SMS and receive it via mobile data or Wi-Fi. The same technique is proposed here for fog clients or IoT devices. IoT devices use various technologies for communication, like Wi-Fi, Bluetooth, ZigBee, NFC, etc. In the proposed model, the OTP is sent via one channel using one technology, like Bluetooth, and received via another technology, like Wi-Fi/NFC. This communication takes place via the metadata of the communication channels, such as the Bluetooth MAC address and network MAC address, saved in a blockchain database during the registration process and validated. This ensures the client is valid, as both channels are validated. The scheme can be further extended to multiple channels.

1.3 Organization

The rest of this paper is organized as follows. Section 2 describes the literature survey of existing models on OTP and multi-channel blockchain-based system models and the security requirements. The proposed method and the concrete system building blocks are described in Sect. 3. Section 4 describes the implementation and presents the performance evaluation findings and analysis. The final part is the conclusion, presented in Sect. 5.

2 Literature Review

Many researchers have addressed security threats, but more robust, faster, and cost-effective algorithms are still needed for faster detection and prevention of network attacks. Manzoor et al. [6] made a survey of state-of-the-art multi-tier authentication techniques, security threats, vulnerabilities, and their solutions, presenting a concise view of the authentication model adopted by each approach. An OTP-SMS scheme for a 2FA framework was proposed by Alharbi and Alghazzawi [1] based on a blockchain smart contract technique. The framework performs two stages of authentication. In stage 1, the client sends a request by providing a username and password to the desired application/website; a hash of the username and password is used to generate the OTP. In stage 2, the OTP is decrypted and sent via a smart contract, which authorizes the user. Abdellaoui et al. [7] propose an image-based OTP (imOTP) that requires an out-of-band channel for sending the image OTP to a smartphone which is pre-registered with a cloud server, but the authors have not addressed confidentiality and integrity.


Buccafurri and Romolo [2] propose an OTP authentication scheme for MQTT. It uses the Ethereum blockchain as an independent channel for the second factor of authentication to secure the privacy of the clients. In [4], Wu et al. proposed a 2FA framework for out-of-band IoT devices based on blockchain technology. The devices are registered with the blockchain and are mutually connected to other devices registered in the blockchain. When a request is sent by client 1 to client 2, client 1 checks in the blockchain and authenticates. Here, the blockchain stores the relationship between devices and provides access to the related device only. This scheme is able to prevent external attacks, where an attacker can attack or modify the initial token itself. In [8], an improvement to one-time password algorithms with a general-purpose possession-based authentication framework was proposed, addressing some of the limitations and weaknesses of the existing multifactor authentication methods. The framework can be implemented on popularly used smartphones but does not rely on a cellular or Wi-Fi network. Its purpose is for current password-authenticated online services to adopt multifactor authentication easily. Park et al. [9] proposed a 2FA framework to solve the problem of the private blockchain. It is a framework based on a time-based OTP: from the current time information and a secret key shared by the TOTP server and client, it generates a password which is then sent to the application rather than to the user, in order to ensure a high level of security. Lin et al. [10] proposed a framework for smart factories which uses blockchain to authenticate. An Ethereum smart contract is used to request transactions, encrypted using the employee's private key; the transaction is validated by decrypting it using the employee's public key. It is observed that the existing approaches used the two factor authentication (2FA) mechanism over one communication channel. This does not ensure the client's genuineness, as the client may spoof the metadata of the channel. Spoofing multiple channels is difficult, so a 2FA scheme over two channels can be used to further enhance security. A two channel authentication method is proposed to address these limitations.

3 Proposed Method

In the proposed method, a Node Registration Phase and an Authentication Phase are used. The Node Registration Phase occurs once per fog node and consists of extracting metadata from the fog node and storing it in the blockchain after initial validation. The Authentication Phase occurs when one fog node wants to communicate with another after authentication.

3.1 Node Registration Phase

Figure 1 shows the flow of fog node registration to the server, which includes the following steps.


Fig. 1 Node registration phase

Step 1: The fog node sends a registration request to the server.
Step 2: Extraction of metadata: after a node requests registration, the server extracts the node's metadata: (i) getmac(host name) is used to extract the MAC address, (ii) the IP address of the IoT client is extracted, (iii) the time stamp, (iv) the Bluetooth address, and (v) the telephone number (for mobile devices).
Step 3: The authenticity of the channels is verified using OTP authentication across the various channels.
Step 4: Once OTP authentication is successful, the metadata of the client is hashed and stored in the blockchain. A standard-library sketch of step 2 follows.
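The sketch below collects a node's metadata using only the Python standard library. The paper mentions getmac; uuid.getnode() is used here instead to keep the example self-contained, and the Bluetooth address and telephone number need platform-specific APIs, so they are left as placeholders.

```python
import socket
import time
import uuid

# A standard-library sketch of step 2 (metadata extraction).

def extract_metadata():
    node = uuid.getnode()  # 48-bit hardware address as an integer
    mac = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
    return {
        "mac_address": mac,
        "ip_address": socket.gethostbyname(socket.gethostname()),
        "timestamp": time.time(),
        "bluetooth_address": None,  # placeholder: platform-specific API needed
        "telephone_number": None,   # placeholder: mobile devices only
    }

print(extract_metadata())
```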

3.2 Authentication Phase

Figure 2 explains how the fog node is authenticated. During this phase, the following steps are carried out:
Step 1: The fog node requests authentication from the fog server to communicate with another fog node, say Node X.
Step 2: The fog server extracts the metadata of the fog node, hashes it, and compares it with the hash value stored in the blockchain database.


Fig. 2 Node authentication phase

Step 3: Upon verification, the server generates a 128-character string OTP and sends it via one of the channels of the fog node, then waits for receipt of the OTP via another channel within a fixed timeout that can be defined by the administrator at the fog server.
Step 4: Upon verification of the OTP, the target fog node is also verified with the same OTP mechanism.
Step 5: Once verified, both fog nodes are provided with one more OTP for mutual authentication and are allowed to communicate with each other for a specified period of time, after which they have to be re-authenticated.
The concepts used to construct the OTP are metadata extraction using WebSocket, hashing, and OTP validation. A sketch of the OTP generation in step 3 follows.
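A minimal sketch of step 3's OTP generation is shown below, using a cryptographically secure generator; the alphanumeric alphabet is an assumption, as the paper only specifies the 128-character length.

```python
import secrets
import string

# A sketch of generating and verifying the 128-character OTP string.

ALPHABET = string.ascii_letters + string.digits  # assumed character set

def generate_otp(length: int = 128) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def verify_otp(sent: str, received: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return secrets.compare_digest(sent, received)

otp = generate_otp()
print(len(otp), verify_otp(otp, otp))  # 128 True
```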

3.3 Extracting Metadata

Metadata is a foundational element or the essential details about any asset or network; here, the metadata of a node comprises the IP address, MAC address, and the date and time of node creation (time stamp).
MAC address: The Media Access Control (MAC) address is the physical address of a computer, a six-byte hexadecimal address (a 48-bit address containing 6 groups of 2 hexadecimal digits, separated by either hyphens (-) or colons (:)) provided by the NIC card's manufacturer and used to identify the device. MAC addresses cannot be shared: each is unique, cannot be changed with time and environment, and cannot be spoofed easily. MAC address authentication is not only the most secure and scalable method [11]; MAC-based authentication also implicitly provides an additional layer of security for authenticating devices.


Fig. 3 Hashing

IP address: The Internet Protocol (IP) [12] address is a logical address, provided by the Internet service provider, used to uniquely define a device on the network. It is either a four-byte (IPv4) or a sixteen-byte (IPv6) address, it can change with time and environment, and multiple client devices can share the same IP address.
Timestamp: The current time of a node entry, or of a node requesting registration, is recorded by the computer. It is generally used for synchronization purposes, as in the real-time transport protocol.

3.4 Hashing

A string comprising the IP address, MAC address (or any metadata), Bluetooth address (or any metadata), and timestamp is hashed; SHA-512 encoding is used for hashing. Figure 3 shows an example of the hash function: a new object is created with the sha512_256 algorithm [13], the plain-text metadata is encoded and passed to the object, and the final result is obtained. Declaring the sha512_256 encoding and creating 'h' as an object of the hashlib function with the sha512_256 algorithm: h = hashlib.new('sha512_256'). Encoding the metadata to pass on to the hashing program: metadata.encode(). Passing the metadata to the hashlib object: h.update(metadata). The resulting hash key is stored as a hexadecimal digest using the hexdigest() function: result = h.hexdigest().
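Assembled from the fragments above, the step runs as a single snippet; the sample metadata string is illustrative, and sha512_256 availability assumes the Python build's OpenSSL exposes that algorithm.

```python
import hashlib

# The hashing step of Fig. 3 as one runnable snippet. The metadata string
# is illustrative; sha512_256 availability depends on the OpenSSL build.

metadata = "157.46.138.88|aa:bb:cc:dd:ee:ff|00:11:22:33:44:55|1632650520"

h = hashlib.new("sha512_256")   # declare the sha512_256 algorithm
h.update(metadata.encode())     # encode the plain-text metadata and pass it in
result = h.hexdigest()          # hexadecimal digest stored in the blockchain
print(result)
```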

4 Implementation and Analysis

Our experimental setup had two Windows 10 laptops: one (i5 4th generation, 12 GB RAM, SSD drive) running the fog server, with a CentOS VM on the same machine running BigchainDB for the blockchain, and another (i5 10th generation, 8 GB RAM, SSD drive) running the fog node. Both machines ran Python 3.1 with WebSocket installed, with the Windows Firewall modified to allow the ports to listen and Bluetooth turned on. The BigchainDB server was created using Oracle VirtualBox 6.1. Oracle


VirtualBox 6.1 is virtualization software from Oracle Corporation. CentOS is an open-source Linux platform; CentOS 8.4.2105 is installed on VirtualBox to host a VM. MongoDB 4.4 is installed, which is an open-source document database with data stored in JSON format. BigchainDB v2.2.2 is an open-source blockchain database built using Python, installed on top of MongoDB. It maintains data in blockchain format using Mongo as the backend and is an append-only database. Tendermint 1.1, a node synchronization software, is installed as part of the BigchainDB installation; it ensures a minimum number of Mongo DBs are updated during any data addition or modification. It is observed that the time taken for authentication via two channels using Bluetooth and Wi-Fi is approximately 4 s; depending upon the channel speed, it may vary. Bootstrapping a new node takes less than 20 s, with an OTP generated at the server that needs to be keyed in at the client end. The fog server takes around 4 s to authenticate a single client. As the number of clients increases, the delay in the authentication process increases. This can be scaled horizontally by adding more fog servers as the number of fog nodes grows. The fog server stores the client's metadata in the blockchain in order to secure the metadata of the nodes and identify any potential breach. While adding new nodes, CPU spikes are observed (up to 50% on a single-CPU VM running CentOS) due to BigchainDB and Tendermint working together to sync all the nodes. If too many nodes are added at the same time, clients might face some performance issues; this could also be addressed by adding more fog servers. In future, this will be added to Kubernetes orchestration, where more fog servers can be deployed depending on the number of fog nodes. This method of authentication can prevent the following attacks:

1. Man-in-the-middle attack: Data transmission is encrypted with the most secure ciphers possible, so a listener cannot eavesdrop on the pipe and pretend to be the source. Along with the cipher security, a fresh OTP is frequently generated and sent to the source and target fog nodes from the fog servers, exchanged like a token valid for a particular duration during which data can be communicated.
2. Phishing attack: A phishing attack is an attack in which a username and password are stolen. This type of attack is not possible due to the OTP mechanism, which is valid only for a particular duration.
3. Sybil attack: In a Sybil attack, a malicious node pretends to be a legitimate node and transmits and receives information from other legitimate nodes. The multi-channel OTP authentication, along with verification of the channels and the certificates installed on the server, makes this difficult or almost impossible for an attacker.
4. Replay attack: A replay attack can happen when the channel is not encrypted or is encrypted with weak ciphers; an attacker tries to replay the messages from source to target. In our case, we are using elliptical ciphers for encryption, which provide the highest level of security.


5 Conclusion

A lot of research has gone into the implementation of blockchain and OTP for fog nodes, but nothing combines the two, for various reasons. This implementation secures metadata with blockchain, uses multi-channel OTP, and transmits through highly secure elliptical cipher encryption along with identification. Certificates can be organization-signed if used internally to an organization or publicly signed if hosted on the Internet. OTP, together with digital signatures and ECC encryption, makes this one of the most robust methods of securing fog nodes. It prevents various important threats like the Sybil attack, replay attack, man-in-the-middle attack, phishing attack, etc.

Acknowledgements I thank all the reviewers for their valuable comments and suggestions.

References

1. Alharbi E, Alghazzawi D (2019) Two factor authentication framework using OTP-SMS based on blockchain. Trans Mach Learn Artif Intell 7(3):17–27. https://doi.org/10.14738/tmlai.73.6524
2. Buccafurri F, Romolo C (2019) A blockchain-based OTP-authentication scheme for constrained IoT devices using MQTT, 1–5. https://doi.org/10.1145/3386164.3389095
3. Adukkathayar A, Krishnan GS, Chinchole R (2015) Secure multi factor authentication payment system using NFC. In: 2015 10th International conference on computer science and education (ICCSE), pp 349–354. https://doi.org/10.1109/ICCSE.2015.7250269
4. Wu L, Du X, Wang W, Lin B (2018) An out-of-band authentication scheme for internet of things using blockchain technology. In: 2018 International conference on computing, networking and communications (ICNC), pp 769–773. https://doi.org/10.1109/ICCNC.2018.8390280
5. Mamun Q, Rana M (2017) A robust authentication model using multi-channel communication for eHealth systems to enhance privacy and security. In: 2017 8th IEEE annual information technology, electronics and mobile communication conference (IEMCON), pp 255–260. https://doi.org/10.1109/IEMCON.2017.8117210
6. Manzoor A, Shah M, Akhunzada A, Qureshi F (2018) Secure login using multi-tier authentication schemes in fog computing. EAI Endors Trans Internet of Things 3:154382. https://doi.org/10.4108/eai.26-3-2018.154382
7. Abdellaoui A, Khamlichi YI, Chaoui H (2015) Out-of-band authentication using image-based one time password in the cloud environment. Int J Security Appl 9:35–46. https://doi.org/10.14257/ijsia.2015.9.12.05
8. Zhao S, Wenhui H (2018) Improvement on OTP authentication and a possession-based authentication framework. Int J Multim Intell Secure 3:187–203
9. Park W, Hwang D, Kim K (2018) A TOTP-based two factor authentication scheme for hyperledger fabric blockchain. In: 2018 Tenth international conference on ubiquitous and future networks (ICUFN), pp 817–819. https://doi.org/10.1109/ICUFN.2018.8436784
10. Lin C, He D, Choo K-K, Vasilakos A (2018) BSeIn: a blockchain-based secure mutual authentication with fine-grained access control system for industry 4.0. J Netw Comput Appl 116:42–52. https://doi.org/10.1016/j.jnca.2018.05.005
11. https://www.arubanetworks.com/techdocs/ArubaOS_60/UserGuide/MAC_Authentication.php
12. https://eadmin.ebscohost.com/eadmin/help/authentication/IP_Address_Auth.htm
13. https://crypto.stackexchange.com/questions/63836/size-of-a-hashed-string-using-sha-512

IEESWPR: An Integrative Entity Enrichment Scheme for Socially Aware Web Page Recommendation

Gurunameh Singh Chhatwal and Gerard Deepak

Abstract The World Wide Web is the most extensive knowledge repository and the vastest record structure that has ever existed in human history. A socially aware and semantically driven web page recommendation algorithm is therefore a compelling necessity. In this paper, an entity enrichment mechanism for the recommendation of web pages is proposed. The approach incorporates semantic frame matching and entities enriched by the generation of the Resource Description Framework, along with the incorporation of background knowledge from the Linked Open Data cloud; social awareness is incorporated by including entities from the Twitter API. The dataset is classified using the XGBoosting algorithm, and the SemantoSim measure has been chosen to compute the semantic similarity under the LION optimization algorithm, which serves as the metaheuristic. The approach considers the user query, the current user clicks, as well as the web usage data of the user. The proposed methodology yields an accuracy of 95.42%, which surpasses the existing techniques.

Keywords Cognitive knowledge · Lion optimization · Semantic frame matching · Socially aware · XGBoosting

G. S. Chhatwal
Department of Electronics and Communication Engineering, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
G. Deepak (B)
Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_21

1 Introduction

The World Wide Web (WWW) has been in constant evolution since its conception. We are entering an age of the web where data has exploded in terms of content, and there is a never-ending demand for utilization and interaction. Web 2.0 has had many limitations for search and recommendation engines; due to the lack of semantic sense and knowledge representation, a search result is often deprived of polysemic results. With the advent of personalization tools and social data, a more comprehensive

approach is needed to serve the users with results in a true semantic sense. With the power of knowledge-based systems and ontology focused web search, we would be able to create recommendation systems that provide relevant and personalized recommendations to the users.

1.1 Motivation

It is difficult to build successful personalized recommendation systems because of the unstructured and semi-structured nature of web pages, as well as the design eccentricities of websites. There is a need for semantically based machine learning algorithms for web-based recommendation systems. Machine learning alone increases the cognitive load, and it is exceedingly difficult to train and extract relations from large amounts of web data. Semantic-inclusive techniques are required to meet the demanding needs of the WWW.

1.2 Contribution

The propositions in this paper are as follows: (1) Pre-processing the user query, current user clicks, and the user profile from the user's online activity data results in the aggregation of a combined collection of terms. (2) Entity enrichment is carried out by the generation of Resource Description Framework (RDF) and the addition of socially aware entities from the Twitter API. (3) XGBoosting has been incorporated for the classification of the dataset based on the combined set of terms, and under the LION optimization algorithm, semantic similarity is measured using the SemantoSim measure, which acts as the competing metaheuristic for providing the results. (4) Semantic frame matching has been incorporated, and entities derived from the cloud knowledge base platform Linked Open Data (LOD) enrich the background auxiliary knowledge fed into the framework. Precision, recall, accuracy, F-measure, false discovery rate (FDR), and normalized discounted cumulative gain (nDCG) are the measures used to evaluate the suggested approach's performance.

1.3 Organization

The remainder of the paper is arranged as follows. Section 2 contains pertinent research that has already been done on the subject. The proposed architecture is found in Sect. 3. Section 4 discusses implementation. Section 5 comprises performance evaluations and observed results. The paper is concluded in Sect. 6.


2 Related Works

Bhavsar et al. [1] have put forward a personalized web page recommendation system using the web mining approach. The domain ontology integration takes place with a pattern mining approach over the dataset using graph algorithms such as Depth First Search. The web server logs and the active user sessions are mined; furthermore, pattern matching with the domain ontology takes place to give the final recommendations. Bhavithra et al. [2] have created a recommendation service using clustering of the features based on the cases, with rule mining performed using the weighted association approach. Here, the content-based features are fed into K-nearest neighbor (KNN) or a collaborative filtering approach along with case-based reasoning, where weighted association rules are also incorporated. Singh et al. [3] came up with an approach using partially ordered sequential rules for making predictions for website recommendations. They have used the TRuleGrowth and CMRules mining algorithms for generating sequential rules. Deepak et al. [4] have come forth with a system to compute semantic heterogeneity between the inputs of the system, namely search query words, extracted keywords, and the words from the content; furthermore, a hybrid adaptive pointwise mutual information strategy with varied thresholds is used to calculate semantic similarity for recommendations. Xie et al. [5] have put forward a methodology based on twofold clustering, combining the strengths of two clustering algorithms, density-based and k-means, for vector quantization. It employs density-based clustering to determine the number of clusters and their respective nuclei, then uses these parameters to perform clustering more quickly for recommendations. Leung et al. [6] have come up with an approach for enhancing the parallelization of association rule mining and uncovering collections of the most visited web pages. Deepak et al. [7] have proposed a system for recommendation based on users' queries, a mixture of approaches that analyzes their web usage data and employs techniques such as latent semantic analysis and latent Dirichlet allocation; furthermore, elements of personalization have been added using a prioritization vector. Leung et al. [8] have put forward serial mining algorithms which utilize a compressed bitwise depiction of web pages. These algorithms utilize parallelization and collect crucial content of the website data in a compressed manner to discover the most visited web pages, from which further associative rules can be formed for recommendations. Jiang et al. [9] have proposed a web mining (or bitwise frequent pattern mining) method that detects web surfer patterns for web page suggestion. Mohanty et al. [10] have devised a framework using rough fuzzy clustering and chicken swarm optimization. For the recommendation or personalization process, a fuzzy recommendation engine is given which is primarily based on the profile of the user and the ontological knowledge base; moreover, it incorporates a broad observational correlation of multiple fuzzy information membership derivations. In [11–16], several frameworks in support of the proposed model have been discussed.


3 Proposed Architecture

Figure 1 illustrates the first phase of the architecture for the personalized web-based recommendation system. The user query, current user clicks, and user profile contents are taken as prospective inputs to the network. All the prospective inputs are subject to preprocessing, which consists of tokenization, lemmatization, stop word removal, and named entity recognition. The query provides the query words, the current user clicks yield the initial term set, and the individual terms are extracted from the user profile content after preprocessing. These three combined yield the combined set of terms for the next phase. Once the combined set of terms is yielded, the entity enrichment takes it as an input, as illustrated in Fig. 2. The second phase includes entity enrichment, where the combined set of terms is subject to the addition of socially aware entities by using the Twitter API.

Fig. 1 Architecture diagram for yielding the combined set of terms

Fig. 2 Architecture diagram for the entity enrichment phase
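The paper does not list its preprocessing code; the following is a minimal sketch of the preprocessing step described above using NLTK (the named entity recognition sub-step is omitted for brevity), with the sample sentence being purely illustrative.

```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

for pkg in ("punkt", "wordnet", "stopwords"):
    nltk.download(pkg, quiet=True)

def preprocess(text):
    # Tokenization -> stop word removal -> lemmatization
    lemmatizer = WordNetLemmatizer()
    stops = set(stopwords.words("english"))
    tokens = word_tokenize(text.lower())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t.isalpha() and t not in stops]

# Query words, click terms, and profile terms would each pass through this
print(preprocess("The user queries are preprocessed into individual terms"))
```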


The Twitter API is a programming interface which provides access to Twitter in unique and advanced ways for the analysis and retrieval of tweets, users, messages, lists, trends, media, and places. The semantic frame matching step of the entity enrichment process starts with calculating the semantic similarity between the combined collection of words and the dataset. A semantic frame in this case is a common data structure, i.e., a dictionary with key-value pairs maintaining the relevant concepts of a particular topic in a structural format. For example, a semantic frame F ← (T, C) contains the data of the topic T, which in our case is a word from the combined set of terms, and its related concepts in C. Further, more individual frames are inserted to formulate a hash table. For multi-valued characteristics, semantic frames are represented by a key-value hash map which is connected to a corresponding set. Because of their light weight and ease of use, as well as their perceived functionality when organized under labeled categorizations, frames are chosen over complicated connectionist AI tools like knowledge bases. After the formulation of the semantic frames, the background knowledge from the LOD cloud is added to them. Based on the RDF standards of the semantic web, LOD provides a vision of globally accessible and connected data on the Internet; along with this, it is a data cloud that allows anybody to see any data and to add to any data without disrupting the original data source. To take advantage of this cloud retrieval, the information from the metadata of the web page is required. This is used to anchor entities from the real WWW to the data as background knowledge. Background knowledge is included as it increases the density of knowledge in the framework. Furthermore, RDF is generated using OntoCollab, which is a strategic ontology modeling tool. It removes all the physical constraints and allows participants in the context of ontology to collaborate based on review rather than live collaboration. The input to OntoCollab is the metadata from the WWW, i.e., blogs and other user communities. RDF increases the density of knowledge as well, and it produces a predicate-object pair which establishes a relationship with the subject. The subject-object relation increases the number of highly relevant entities in the framework. Further, socially aware entities, i.e., terms from the Twitter API, are added to make the system more socially aware. In the second phase, the initial classification of the dataset is also done using XGBoost, considering the combined set of terms as classes. Afterward, only the top 20% of the classified instances are taken: since the dataset is quite large and a large set of recommendations is not required, only a highly relevant yet diverse set of recommendations is needed. The XGBoost algorithm, or extreme gradient boosting algorithm, is a scalable, end-to-end system for tree-based decision algorithms. It includes a sparsity-aware method for sparse data and weighted quantile sketches for tree learning. It also integrates insights into cache access, data compression, and data sharding to develop a scalable tree boosting system that scales to billions of examples while using far fewer resources than previous systems.
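As a rough illustration of the frame structure F ← (T, C) described above, the sketch below stores each topic's concept set in a Python dictionary acting as the hash table; the topics and concepts shown are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def build_semantic_frames(pairs):
    """Assemble semantic frames F = (T, C) into a hash table: each topic
    term T maps to the set C of its related concepts, and multi-valued
    attributes simply grow the set."""
    frames = defaultdict(set)
    for topic, concepts in pairs:
        frames[topic].update(concepts)
    return dict(frames)

# Hypothetical terms with LOD-style background concepts added later
frames = build_semantic_frames([
    ("archive", {"web archive", "digital preservation"}),
    ("archive", {"UKWA"}),          # enrichment densifies the frame
    ("library", {"catalogue", "classification"}),
])
print(frames["archive"])
```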


Boosting is an ensemble technique that combines several machine learning algorithms into one predictive model. It provides the advantage of multi-learner prediction and gives an aggregated output. Several features make XGBoost a popular choice for the implementation of gradient boosting: handling sparse data, regularization, handling weighted data, cache-controlling mechanisms, out-of-core computation handling, and a robust structure for parallel learning. Afterward, the semantic similarity between the enriched entities and the top 20% of the classified instances is computed using the LION optimization algorithm. The semantic similarity is calculated using the SemantoSim measure and embedded inside the LION optimization to get a highly optimized yet similar and relevant set of entities as the final set of recommendations. SemantoSim is a semantic similarity measure derived from the pointwise mutual information measure and a normalized semantic measurement. Equation (1) gives the SemantoSim score between two terms x and y, built on their pointwise mutual information pmi(x, y). If there are more than two terms in the query, then the permutations of all the available terms are considered; on the contrary, if only one term is available, then the SemantoSim measure is calculated with its most closely related term. The function p(x, y) gives the probability that the term y occurs with respect to x when considered for both terms, whereas for each single term, p gives the probability of that term being present in the database taken in the context. All the terms are subject to preprocessing including lemmatization, tokenization, etc.

$$\mathrm{SemantoSim}(x, y) = \frac{\mathrm{pmi}(x, y) + p(x, y)\log p(x, y)}{p(x) \cdot p(y) + \log p(y, x)} \qquad (1)$$

The LION optimization algorithm (LOA) is a bio-inspired, nature-inspired optimization algorithm that is mainly based on metaheuristic principles. This algorithm is driven by the special lifestyle of lions and their traits of collaboration. The LOA was developed based on simulations of the behavior of lions such as hunting, mating, and defense. This algorithm is assimilated into our architecture as we embed the semantic similarity in it and optimize to get highly semantically relevant results. Finally, the set of recommendations is re-ranked based on their SemantoSim scores, and then the final recommendations are provided.
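A direct transcription of Eq. (1) is sketched below; the probability values passed in are illustrative only and would in practice be estimated from term co-occurrence counts in the corpus.

```python
import math

def pmi(p_xy, p_x, p_y):
    # Pointwise mutual information of terms x and y
    return math.log(p_xy / (p_x * p_y))

def semanto_sim(p_xy, p_yx, p_x, p_y):
    # SemantoSim of Eq. (1): PMI-based numerator over a normalizing term
    numerator = pmi(p_xy, p_x, p_y) + p_xy * math.log(p_xy)
    return numerator / (p_x * p_y + math.log(p_yx))

print(semanto_sim(p_xy=0.04, p_yx=0.04, p_x=0.1, p_y=0.2))
```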

4 Implementation

The implementation of the IEESWPR system has been done using Python 3.9 with Google Colab as the preferred IDE. Google Colab is accessed using an Intel i7 7th-generation processor with 4.20 GHz as the maximum processor frequency, enabled with an Nvidia Tesla K80 GPU. We have taken the open-source Website Classification Dataset from the UK Selective Web Archive, which is collected


and managed manually. The link to the UKWA website classification dataset is as follows: https://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/. This dataset encompasses the classification of the URLs in a two-level hierarchical structure along with labels, i.e., the primary category, secondary category, title, and URL. Titles and a set of keywords that encapsulate each site are obtained, which are further useful in finding the similarity index while ranking the web pages in consistency with the user query. The RDF is generated using OntoCollab for all the various categories which are available in the dataset. Apart from the directly available categories in the dataset, indirectly associated categories from the LOD cloud as well as the standard web thesaurus are also incorporated in the form of ontologies. Several domains which are relevant to the dataset categories, as well as all possible inter-related categorical domain ontologies, have been harvested as RDF. OntoCollab does not directly generate the RDF; it rather harvests OWL from several repositories as well as online web communities, which is further converted into RDF via an XML intermediary structure. Table 1 depicts the IEESWPR algorithm.
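A minimal sketch of loading the UKWA dataset with pandas is given below; the column names and the tab-separated format are assumptions based on the description above, not a documented schema.

```python
import pandas as pd

# Assumed columns, per the two-level hierarchy described above
cols = ["primary_category", "secondary_category", "title", "url"]
df = pd.read_csv("classification.tsv", sep="\t", names=cols)

# Pages per (primary, secondary) category pair
counts = df.groupby(["primary_category", "secondary_category"])["url"].count()
print(counts.head())
```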

5 Performance Evaluation and Result

The performance of the proposed entity-enriched integrative web page recommendation system has been determined taking into consideration precision, recall, accuracy, F-measure, FDR, and nDCG as evaluation metrics. Standard formulations for average precision, average recall, accuracy, F-measure, FDR, and nDCG were used. The proposed IEESWPR is baselined with WPRWM [1], CBCWARM [2], and WPRPOSR [3] for quantifying and evaluating the performance of the proposed web page recommendation approach. WPRWM is a web page recommendation approach which uses web usage mining; here, the web server logs as well as the active user sessions are mined, and furthermore, pattern matching and domain ontology integration take place. The approach is not very strong because the application of domain ontology alone does not incorporate a high amount of auxiliary knowledge into the approach. Moreover, the domain ontology will be localized to a specific dataset, and the extraction of user patterns from the active user sessions does not make it query centric but propels it toward a more user-centric approach. A good web page recommendation system must be query centric along with satisfying the user requirements. Table 2 depicts the comparison with the other baseline approaches. CBCWARM uses a case-based clustering approach as well as weighted association rules along with KNN; i.e., the content-based features are fed into the k-nearest neighbor algorithm or collaborative filtering approach along with case-based reasoning, where weighted association rules are also incorporated into the approach. The approach is quite good, and it has performed well, but the pitfalls here are that, firstly, it has not worked for a categorical dataset, and secondly, the


Table 1 Proposed IEESWPR algorithm

Table 2 Comparing the performance of the proposed IEESWPR with other approaches

Search technique    Average precision %  Average recall %  Accuracy %  F-measure %  FDR   nDCG
WPRWM [1]           78.87                81.12             79.99       79.97        0.22  0.81
CBCWARM [2]         84.12                88.66             86.39       86.33        0.16  0.83
WPRPOSR [3]         89.45                92.18             90.81       90.74        0.11  0.85
Proposed IEESWPR    93.18                97.66             95.42       95.36        0.07  0.95


association rules must be weighted, and there is no specific template for deciding the weight factor. Moreover, when collaborative filtering is used along with weighted association rule mining, the computational complexity becomes quite high. As a result, this approach, though it uses personalization, is not the best suitable candidate for a semantically driven web page recommendation. The absence of auxiliary knowledge makes it completely dependent on the dataset and the previously accessed web pages, focusing on web usage and web structure mining, which makes the entire approach non-semantic. WPRPOSR is a rule-based model based on CMRules and TRuleGrowth; this sequential rule mining is quite effective, but it does not support the semantic web. Moreover, the decisions on the sequential rules must be made in an offline preliminary stage, and the user session sequences play a vital role in training as well as testing, thus making the system completely user-driven rather than community-driven, whereas in a semantic web environment, a recommendation system must cater to the community beyond individual user preferences and be more topic based and query centric. Therefore, scope for improvement in this model is also found. Furthermore, formulating thousands of sequential rules is a tedious task, which is also a noticeable drawback of the system. The proposed IEESWPR is a semantic approach, and it is interoperable in a highly semantic environment as well as in the non-semantic, sparse, traditional Web 2.0. The entity enrichment process makes the system socially aware as well as semantic in nature; it is done using the socially aware Twitter API, and further, a resource description framework is generated using OntoCollab for the context of the terms extracted from the user query, which makes it query centric. Moreover, a high amount of auxiliary knowledge is added using the Linked Open Data cloud. There is a highly coherent scheme which uses semantic frame matching, making sure that the individual terms get aligned semantically with respect to the terms in the dataset. The XGBoost classifier makes sure that the dataset is classified and the computational load in the recommendation is reduced. The computation of semantic similarity is infused with the LION optimization algorithm, and the usage of metaheuristics with semantic similarity measures ensures the high relevancy of the final recommendations. As a result, the proposed approach is much better than the baseline models. It is socially aware, semantic in nature, and query centered, and it also takes the consent of the user by considering the user clicks as well as the user query. Figure 3 indicates the precision percentage vs. the number of recommendations distribution graph. The graph clearly shows that the proposed IEESWPR has a better percentage of precision regardless of the number of recommendations when compared with the baseline models. The reason is that, in comparison to the other approaches, it contains a high amount of auxiliary knowledge, which includes information from the LOD cloud apart from the dataset, making the approach highly semantic. Along with this, the approach is query centric and takes into consideration the socially aware aspect by including the socially aware entities from the Twitter API. It does not require any preformulated sequential rule mining and is also personalized by taking the user input. Finally, the use of metaheuristics, namely LION optimization, makes the approach deliver highly semantic yet personalized, relevant recommendations in various capacities.

Fig. 3 Precision comparison of IEESWPR among several approaches

6 Conclusion

The proposed approach takes into consideration the query terms, the user data, and the current user clicks, which are preprocessed, yielding the combined set of terms that is fed into the system for entity enrichment. The methodology is a socially aware, semantically driven machine learning-based approach with XGBoost being used for classification of the dataset. Semantic frame matching has also been incorporated into the system to formalize a relationship between the dataset and the combined set of terms. The approach is made socially aware by including the terms from the Twitter API; furthermore, background auxiliary knowledge is added from the LOD cloud, and RDF is generated. The semantic similarity is calculated using the SemantoSim measure between the classified instances and the enriched entities, embedded with the metaheuristic LION optimization to generate the final set of recommendations. The experimentation has been carried out on the UKWA website classification dataset with measures of performance including accuracy, precision, recall, F-measure, FDR, and nDCG. With an extremely low FDR of 0.07, an F-measure of 95.36% was attained.


References

1. Bhavsar M, Chavan MP (2014) Web page recommendation using web mining. Int J Eng Res Appl 4(7):201–206
2. Bhavithra J, Saradha A (2019) Personalized web page recommendation using case-based clustering and weighted association rule mining. Clust Comput 22(3):6991–7002
3. Singh H, Kaur M, Kaur P (2017) Web page recommendation system based on partially ordered sequential rules. J Intell Fuzzy Syst 32(4):3009–3015
4. Deepak G, Priyadarshini JS, Babu MH (2016) A differential semantic algorithm for query relevant web page recommendation. In: 2016 IEEE international conference on advances in computer applications (ICACA). IEEE, pp 44–49
5. Xie X, Wang B (2018) Web page recommendation via twofold clustering: considering user behavior and topic relation. Neural Comput Appl 29(1):235–243
6. Leung CK, Jiang F, Pazdor AG (2017) Bitwise parallel association rule mining for web page recommendation. In: Proceedings of the international conference on web intelligence, pp 662–669
7. Deepak G, Shwetha BN, Pushpa CN, Thriveni J, Venugopal KR (2020) A hybridized semantic trust-based framework for personalized web page recommendation. Int J Comput Appl 42(8):729–739
8. Leung CK, Jiang F, Souza J (2018) Web page recommendation from sparse big web data. In: 2018 IEEE/WIC/ACM international conference on web intelligence (WI). IEEE, pp 592–597
9. Jiang F, Leung C, Pazdor AG (2016) Web page recommendation based on bitwise frequent pattern mining. In: 2016 IEEE/WIC/ACM international conference on web intelligence (WI). IEEE, pp 632–635
10. Mohanty SN, Rejina Parvin J, Vinoth Kumar K, Ramya KC, Sheeba Rani S, Lakshmanaprabu SK (2019) Optimal rough fuzzy clustering for user profile ontology based web page recommendation analysis. J Intell Fuzzy Syst 37(1):205–216
11. Yethindra DN, Deepak G (2021) A semantic approach for fashion recommendation using logistic regression and ontologies. In: 2021 international conference on innovative computing, intelligent communication and smart electrical systems (ICSES). IEEE, pp 1–6
12. Roopak N, Deepak G (2021) KnowGen: a knowledge generation approach for tag recommendation using ontology and Honey Bee algorithm. In: European, Asian, Middle Eastern, North African conference on management & information systems. Springer, Cham, pp 345–357
13. Krishnan N, Deepak G (2021) KnowCrawler: AI classification cloud-driven framework for web crawling using collective knowledge. In: European, Asian, Middle Eastern, North African conference on management & information systems. Springer, Cham, pp 371–382
14. Roopak N, Deepak G (2021) OntoJudy: a ontology approach for content-based judicial recommendation using particle swarm optimisation and structural topic modelling. In: Data science and security. Springer, Singapore, pp 203–213
15. Manaswini S, Deepak G (2021) Towards a novel strategic scheme for web crawler design using simulated annealing and semantic techniques. In: Data science and security. Springer, Singapore, pp 468–477
16. Deepak G, Rooban S, Santhanavijayan A (2021) A knowledge centric hybridized approach for crime classification incorporating deep bi-LSTM neural network. Multimed Tools Appl 1–25

Interval-Valued Fuzzy Trees and Cycles

Ann Mary Philip, Sunny Joseph Kalayathankal, and Joseph Varghese Kureethara

Abstract Interval-valued fuzzy tree (IVFT) and interval-valued fuzzy cycle (IVFC) are defined in this chapter. We characterize interval-valued fuzzy trees. We also prove that if G is an IVFG whose underlying crisp graph is not a tree, then G is an IVFT if and only if G contains only α strong arcs and weak arcs. It is shown that an IVFG G whose underlying crisp graph is a cycle is an IVFC if and only if G has at least two β strong arcs.

1 Introduction

Graph theory is a prominent branch of mathematics that helped grow the science of optimization. It creatively engages the theory of sets and logical reasoning. The impact of graph theory in the fields of computer science and decision sciences is tremendous. The introduction of fuzzy set theory by Zadeh and the subsequent introduction of fuzzy graphs [9] had an unimaginable impact on the world of applied computing and decision making. Fuzzy graphs were studied at various levels in the past five decades. Recently, Das et al. [2] introduced the fuzzy chordal graph and its properties. This shows the continuing relevance of fuzzy graphs, even though the fuzzy graph was introduced by Rosenfeld in 1975 [9]. He defined fuzzy trees and connectedness [9]. Sunitha and Vijayakumar studied them in detail and obtained a characterization [10]. Types of arcs in a fuzzy tree were discussed by Mathew and Sunitha [3]. The fuzzy cycle was defined by Mordeson and Nair [4]. Here, in this chapter, we define the interval-valued fuzzy tree and cycle and study them in detail. The chapter is divided into five sections. After


the introductory section, in Sect. 2, we define interval-valued fuzzy tree (IVFT) and obtain characterizations of the IVFT in terms of the cycles contained in it and also in terms of the arcs involved in it. In Sect. 3, we define interval-valued fuzzy cycle (IVFC) and obtain a characterization. In Sect. 4, some applications of interval-valued fuzzy trees and cycles are given. The general conclusion is given in Sect. 5.

If the crisp graph is $G^* = (V, E)$, let its associated interval-valued fuzzy graph (IVFG) be $G = (A, B)$. One may refer to [5–8] for more on IVFGs. Details on the interval-valued fuzzy bridge, interval-valued fuzzy cutnode, weakest arc in an interval-valued fuzzy graph, weak arc in an interval-valued fuzzy graph, etc., are as available in [6–8]. We shall see some definitions now.

Definition 1 [1] Let $G^* = (V, E)$ be an undirected simple connected graph. An interval-valued fuzzy graph (IVFG) $G$ on $G^*$ is defined as a pair $G = (A, B)$, where $A = [\mu_A^-(x), \mu_A^+(x)]$ is an interval-valued fuzzy set on $V$ and $B = [\mu_B^-(xy), \mu_B^+(xy)]$ is an interval-valued fuzzy set on $E$ such that $\mu_B^-(xy) \le \min\{\mu_A^-(x), \mu_A^-(y)\}$ and $\mu_B^+(xy) \le \min\{\mu_A^+(x), \mu_A^+(y)\}$ for all $xy \in E$. If $\mu_B^-(xy) = \min(\mu_A^-(x), \mu_A^-(y))$ and $\mu_B^+(xy) = \min(\mu_A^+(x), \mu_A^+(y))$ for all $x, y \in V$, then $G$ is called a complete interval-valued fuzzy graph (CIVFG).

Definition 2 [1] Let $H = (C, D)$ and $G = (A, B)$ be two IVFGs whose underlying crisp graphs are $H^* = (V_1, E_1)$ and $G^* = (V, E)$, respectively. Then, H is said to be a subgraph of G if $V_1 \subseteq V$, $E_1 \subseteq E$, and the membership degrees of the nodes and arcs of H are the same as those of G. In particular, H is said to be a spanning subgraph if $V_1 = V$.

Definition 3 [6] An arc (u, v) of G is called an interval-valued fuzzy bridge (IVF bridge) if the deletion of (u, v) reduces the $\mu^-$ and $\mu^+$ strength of connectedness between some pair of nodes of G.

Definition 4 [6] A node w of G is called an interval-valued fuzzy cutnode (IVF cutnode) if the deletion of w reduces the $\mu^-$ and $\mu^+$ strength of connectedness between some other pair of nodes of G.

Definition 5 [7] Two arcs $e_1$ and $e_2$ are said to be comparable if their membership degrees are such that either $\mu_B^-(e_1) > \mu_B^-(e_2)$ and $\mu_B^+(e_1) > \mu_B^+(e_2)$, or $\mu_B^-(e_1) < \mu_B^-(e_2)$ and $\mu_B^+(e_1) < \mu_B^+(e_2)$.

As of now, in interval-valued fuzzy graphs, the identification of nine types of arcs has been done. See Table 1 for a full list with requirements.

2 Interval-Valued Fuzzy Tree

Definition 6 Let G = (A, B) be a connected interval-valued fuzzy graph. G is an interval-valued fuzzy tree (IVFT) if it contains a spanning tree $F = (A, C)$ with $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v)$ for all arcs $(u, v) \notin F$.

Table 1 Types of arcs

Name               Requirement
α− strong          $\mu_B^-(u, v) > \mathrm{NCONN}_{G-(u,v)}(u, v)$
α+ strong          $\mu_B^+(u, v) > \mathrm{PCONN}_{G-(u,v)}(u, v)$
α strong           α− strong and α+ strong
β− strong          $\mu_B^-(u, v) = \mathrm{NCONN}_{G-(u,v)}(u, v)$
β+ strong          $\mu_B^+(u, v) = \mathrm{PCONN}_{G-(u,v)}(u, v)$
β strong           β− strong and β+ strong
αβ strong          α− strong and β+ strong
βα strong          β− strong and α+ strong
δ− arc             $\mu_B^-(u, v) < \mathrm{NCONN}_{G-(u,v)}(u, v)$
δ+ arc             $\mu_B^+(u, v) < \mathrm{PCONN}_{G-(u,v)}(u, v)$
δ arc (weak arc)   δ− arc and δ+ arc
αδ                 α− strong and δ+ arc
βδ                 β− strong and δ+ arc
δα                 δ− arc and α+ strong
δβ                 δ− arc and β+ strong
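The requirements in Table 1 can be checked mechanically. The sketch below computes the strength of connectedness by exhaustive path search (adequate for the small examples in this chapter) and labels each end of an arc α, β, or δ; the graph encoding is our own illustrative choice, not from the chapter.

```python
def conn(u, v, edges, idx, banned=None):
    """Strength of connectedness between u and v: the maximum, over all
    u-v paths avoiding `banned`, of the minimum membership along the path.
    idx = 0 uses the lower bounds (NCONN), idx = 1 the upper (PCONN).
    `edges` maps frozenset({x, y}) -> (mu_minus, mu_plus)."""
    banned = banned or set()
    best = 0.0

    def dfs(node, visited, strength):
        nonlocal best
        if node == v:
            best = max(best, strength)
            return
        for e, w in edges.items():
            if e in banned or node not in e:
                continue
            (nxt,) = e - {node}
            if nxt in visited:
                continue
            dfs(nxt, visited | {nxt}, min(strength, w[idx]))

    dfs(u, {u}, 1.0)
    return best

def arc_type(e, edges):
    """Classify arc e per Table 1: alpha (>), beta (=), delta (<) per end."""
    u, v = tuple(e)
    labels = []
    for idx in (0, 1):
        w = edges[e][idx]
        c = conn(u, v, edges, idx, banned={e})
        labels.append("alpha" if w > c else "beta" if w == c else "delta")
    return labels  # e.g. ['alpha', 'alpha'] means an alpha strong arc

# Illustrative triangle with one weak arc
E = {
    frozenset({"a", "b"}): (0.5, 0.7),
    frozenset({"b", "c"}): (0.5, 0.7),
    frozenset({"a", "c"}): (0.2, 0.3),
}
print(arc_type(frozenset({"a", "c"}), E))  # ['delta', 'delta'] -> weak arc
```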

Remark 1 Let G = (A, B) be an IVFG on $G^* = (V, E)$ where $G^*$ is a tree. Then, the spanning subgraph F in the above definition is G itself.

Remark 2 From Definition 6, we can make the following remarks.
1. If G = (A, B) is an IVFG on $G^* = (V, E)$ where $G^*$ is a tree, then G is an IVFT.
2. If G = (A, B) is an IVFG on $G^* = (V, E)$ where $G^*$ is a tree, then every arc will be a bridge, and hence an IVF bridge, and so all arcs are α strong.
3. G being an IVFT need not imply that $G^*$ is a tree.
4. In crisp graph theory, a tree does not contain cycles. But an IVFT may contain cycles.

Lemma 1 Let G = (A, B) be an interval-valued fuzzy tree on $G^* = (V, E)$ and let H = (C, D) be an interval-valued fuzzy subgraph of G. Then $\mathrm{NCONN}_H(u, v) \le \mathrm{NCONN}_G(u, v)$ and $\mathrm{PCONN}_H(u, v) \le \mathrm{PCONN}_G(u, v)$.

Proof The proof is obvious from the definition of an IVF subgraph of an IVFG. □



The following theorem gives a characterization of an IVFT in terms of the cycles contained in it.

Theorem 1 Let G be a connected interval-valued fuzzy graph. Then, G is an interval-valued fuzzy tree if and only if for any cycle C of G there is an arc (u, v) in C such that $\mu_B^-(u, v) < \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_{G-(u,v)}(u, v)$.

Proof Let G be a connected IVFG and suppose that for any cycle C of G, there is an arc $(u, v) \in C$ such that

$$\mu_B^-(u, v) < \mathrm{NCONN}_{G-(u,v)}(u, v) \qquad (1)$$
$$\mu_B^+(u, v) < \mathrm{PCONN}_{G-(u,v)}(u, v) \qquad (2)$$

We have to prove that G is an IVFT. For that, it is enough to prove that G has a spanning tree $F = (A, C)$ with $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v)$ for all arcs $(u, v) \notin F$. Hence, we construct such a spanning subgraph F of G using our assumption. Without loss of generality, we assume that the arc (u, v) that belongs to a cycle of G and satisfies the conditions in the hypothesis has the least membership degree. Consider $G - (u, v)$. If $G - (u, v)$ does not contain any cycles, it is the required spanning subgraph F of G. Also, our chosen arc (u, v) satisfies the required conditions since it is of the least membership degree. If it contains cycles, repeat the process. At each stage, arcs are deleted such that no previously deleted arc is stronger than the arc being currently deleted. Hence, the path guaranteed by the condition in the theorem involves only arcs that have not yet been deleted. Continue this process of removing arcs until the resulting graph has no cycles. Then, clearly the resulting graph will be the required spanning subgraph F of G. Now it remains to prove that every arc $(u, v) \notin F$ satisfies the conditions in the definition of an IVFT. Consider any $(u, v) \notin F$. Then, (u, v) is one of the arcs that we have deleted in the process of constructing F. There is a path P from u to v that contains neither (u, v) nor any deleted arcs, and such that $\mu_B^-(u, v) < S_{\mu^-}(P)$ and $\mu_B^+(u, v) < S_{\mu^+}(P)$. Clearly, $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v)$ for all arcs $(u, v) \notin F$. Thus, G is an IVFT.

Conversely, let G be an IVFT. Let C be any cycle in G. We have to prove that there is an arc $(u, v) \in C$ such that $\mu_B^-(u, v) < \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_{G-(u,v)}(u, v)$. Since G is an IVFT, there is a spanning subgraph F of G such that $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v)$ for all arcs $(u, v) \notin F$ and, in particular, for all arcs $(u, v) \in C - F$. By Lemma 1, $\mathrm{NCONN}_F(u, v) \le \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mathrm{PCONN}_F(u, v) \le \mathrm{PCONN}_{G-(u,v)}(u, v)$. Thus, $\mu_B^-(u, v) < \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_{G-(u,v)}(u, v)$, where $(u, v) \in C - F$. This completes the proof. □
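The construction in the first half of the proof, repeatedly deleting a least-strong arc that lies on a cycle, can be sketched Kruskal-style by keeping the strongest arcs first; this assumes all arcs are pairwise comparable so that "least membership degree" is well defined.

```python
def spanning_tree_F(nodes, edges):
    """Build F by keeping strongest arcs first (union-find for cycles).
    edges: dict frozenset({u, v}) -> (mu_minus, mu_plus)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    F = {}
    # An arc that would close a cycle is a weakest arc on that cycle
    for e, w in sorted(edges.items(), key=lambda kv: kv[1], reverse=True):
        u, v = tuple(e)
        ru, rv = find(u), find(v)
        if ru != rv:            # does not close a cycle: keep it in F
            parent[ru] = rv
            F[e] = w
    return F

E = {frozenset({"a", "b"}): (0.5, 0.7),
     frozenset({"b", "c"}): (0.5, 0.7),
     frozenset({"a", "c"}): (0.2, 0.3)}
print(spanning_tree_F({"a", "b", "c"}, E).keys())  # a-b and b-c survive
```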



Corollary 1 Let G be a connected IVFG. Then, G is an IVFT if and only if every cycle of G contains at least one weak arc.

Theorem 2 Let G be an IVFT. Then, the arcs of F are the IVF bridges of G.

Proof Let G be an IVFT. Then, by Definition 6 and Lemma 1, $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v) \le \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v) \le \mathrm{PCONN}_{G-(u,v)}(u, v)$ for all $(u, v) \notin F$. Hence, an arc $(u, v) \notin F$ is certainly not an IVF bridge of G. Now, we have to prove that if $(u, v) \in F$, then (u, v) is an IVF bridge of G. Suppose (u, v) is not an IVF bridge. Then, there exists a path P from u to v not involving (u, v) such that $S_{\mu^-}(P) \ge \mu_B^-(u, v)$ and $S_{\mu^+}(P) \ge \mu_B^+(u, v)$. This path must involve arcs $(u_i, v_i)$ not in F since F is a tree. Any such arc $(u_i, v_i)$ can be replaced by a path $P_i$ in F such that $S_{\mu^-}(P_i) > \mu_B^-(u_i, v_i)$ and $S_{\mu^+}(P_i) > \mu_B^+(u_i, v_i)$. Since $\mu_B^-(u_i, v_i) \ge \mu_B^-(u, v)$ and $\mu_B^+(u_i, v_i) \ge \mu_B^+(u, v)$, we have $S_{\mu^-}(P_i) > \mu_B^-(u, v)$ and $S_{\mu^+}(P_i) > \mu_B^+(u, v)$. Hence, $P_i$ cannot involve the arc (u, v). Thus, by replacing each $(u_i, v_i)$ by $P_i$, we can construct a path in F from u to v that does not involve (u, v), giving us a cycle in F, which is a contradiction. Hence, our assumption is wrong and (u, v) is an IVF bridge. □

Proposition 1 If G = (A, B) is an IVFT, then the internal nodes of F are the IVF cutnodes of G.

Proof Let w be an internal node of the IVF spanning tree F of the IVFT G = (A, B). By Theorem 2, the arcs of F are the IVF bridges of G. Hence, w is the common node of two IVF bridges of G, and hence, w is an IVF cutnode. Now suppose that w is not an internal node of F or, in other words, w is an end node of F. We have to prove that w is not an IVF cutnode. For that, assume the contrary; that is, w is an IVF cutnode. Then, by the definition of an IVF cutnode, there exist nodes u and v, distinct from w, such that w is on every strongest u − v path, and one such path surely lies in F. But since w is an end node, this is not possible. Hence, our assumption is wrong and w is not an IVF cutnode. □

Next, we give a characterization of an IVFT in terms of the arcs involved in it.

Theorem 3 Let G = (A, B) be a connected interval-valued fuzzy graph on $G^* = (V, E)$, where $G^*$ is not a tree. Then, G is an interval-valued fuzzy tree if and only if G contains only α strong arcs and weak arcs.

Proof Let G = (A, B) be a connected IVFG on $G^* = (V, E)$, where $G^*$ is not a tree. Suppose that G is an IVFT. Let F = (A, C) be the corresponding IVF spanning tree. All arcs in F are α strong. Suppose G contains a β strong arc, say, (u, v). Then, clearly, (u, v) does not belong to F. Hence, by the definition of an IVFT, $\mu_B^-(u, v) < \mathrm{NCONN}_F(u, v) \le \mathrm{NCONN}_{G-(u,v)}(u, v)$ and $\mu_B^+(u, v) < \mathrm{PCONN}_F(u, v) \le \mathrm{PCONN}_{G-(u,v)}(u, v)$,

a

[0.4, 0.7]

b

c

[0 .2

[0.4, 0.7]

] .3 ,0 [0

.2

]

[0.6, 0.7]

.3

[0.2, 0.3]

,0

f

[0.5, 0.8]

e

[0.5, 0.7]

Fig. 1 Example to show that the converse of Proposition 2 is false

d

258

A. M. Philip et al.

Proof Let G be a connected IVFG on G ∗ = (V , E) where |V | = n and |E| = m. We know that s = m − n + 1 gives the number of distinct cycles of G. Let s cycles of G contain a unique weakest arc. Then, by the unique weakest arc of these s cycles are weak arcs. Suppose (u, v) is an arc of G other than this s weak arcs. Then, there arises two cases. Case 1: (u,v) does not belong to a cycle. In this case, (u,v) is a bridge and hence an IVF bridge and is an α strong arc. Case 2: (u,v) belongs to a cycle. In this case, clearly (u, v) belongs to one or more of the above s cycles and is α strong by the assumed condition. Thus, G contains only α strong arcs and weak arcs. Hence, by Theorem 3, G is an IVFT.  Proposition 4 Let G be a connected IVFG. If there is a unique strongest path between any two nodes of G, then G is an IVFT. Proof Let G be a connected IVFG such that there is a unique strongest path between any two nodes of G. We have to prove that G is an IVFT. For that assume the contrary. Suppose G is not an IVFT. Then, by Theorem 1, there exists a cycle C in G such that + μ− B (u, v) ≥ NCONNG−(u,v) (u, v) and μB (u, v) ≥ PCONNG−(u,v) (u, v) for all arcs (u, v) ∈ C. Then, clearly (u, v) is a strongest u − v path. If we choose (u, v) to be the weakest arc of C, it follows that C − (u, v) is also a strongest path from u to v, which is a contradiction to our assumption that there is a unique strongest path between any two nodes of G. Hence, our assumption is wrong, and thus, G is an IVFT.  Example 2 Consider the IVFG G given in Fig. 2. Clearly, we can show that arcs (a, b), and (c.d ) are weak arcs and the remaining arcs are α strong arcs. Hence, by Theorem 3, G is an IVFT. But we can see that both P1 : a, d , b and P2 : a, d , c, b are strongest a − b paths.

[0.1, 0.2]

b

[0

.5

,0

.6

]

[0.2, 0.3]

a

[0.4, 0.5]

Fig. 2 Example to show that the converse of Proposition 4 is false

d

[0.3, 0.4]

c

Interval-Valued Fuzzy Trees and Cycles

259

Proposition 5 If there is a unique strongest path between any two nodes x and y in an IVFG G, then it is a strong x − y path. Proof Let G be an IVFG, and suppose that there is a unique strongest path between any two nodes x and y in G. Then, by Proposition 4, G is an IVFT, and clearly by the definition of an IVFT, the unique strongest path belongs to the corresponding IVF spanning tree F of G. Again, all the arcs of F are α strong. Hence, all the arcs of the unique strongest x − y path is α strong, and so it is a strong x − y path.  Proposition 6 Let G = (A, B) be an IVFT on G ∗ = (V , E) where G ∗ is not a tree. Then, there exists at least one arc (u, v) ∈ E such that μ− B (u, v) < NCONNG (u, v) and μ+ B (u, v) < PCONNG (u, v). Proof Let G = (A, B) be an IVFT. Then, by Definition 6, there exists an IVF spanning subgraph F = (A, C) which is a tree and + μ− / F. B (u, v) < NCONNF (u, v), μB (u, v) < PCONNF (u, v) for all (u, v) ∈

By Lemma 1, NCONNF (u, v) ≤ NCONNG (u, v) and PCONNF (u, v) ≤ PCONNG (u, v). + Therefore, μ− B (u, v) < NCONNG (u, v) and μB (u, v) < PCONNG (u, v) for all (u, v) ∈ / F. Thus, there exists at least one arc (u, v) ∈ E such that + μ− B (u, v) < NCONNG (u, v) and μB (u, v) < PCONNG (u, v). 

Proposition 7 Let G = (A, B) be an IVFT on G ∗ = (V , E) where G ∗ = K1 . Then G is not a CIVFG. Proof Let G = (A, B) be an IVFT on G ∗ = (V , E) where G ∗ = K1 . Now, if possi+ ble assume that G is a CIVFG. Then, μ− B (u, v) = NCONNG (u, v) and μB (u, v) = − PCONNG (u, v) for all arcs (u, v) ∈ G. Since G is an IVFT, μB (u, v) < NCONNF (u, / F. Thus, NCONNG (u, v) < NCONNF v), μ+ B (u, v) < PCONNF (u, v) for all (u, v) ∈ (u, v) and PCONNG (u, v) < PCONNF (u, v), which is a contradiction to Lemma 1. Hence, our assumption is wrong, and thus, G is not a CIVFG. 

260

A. M. Philip et al.

3 Interval-Valued Fuzzy Cycle Definition 7 Let G = (A, B) be an IVFG on a cycle G ∗ = (V , E). Then, G is an interval-valued fuzzy cycle (IVFC) if G has more than one weakest arc. In other words, G is called an IVFC if there exists at least two weakest arcs e1 and e2 in G such that μB− (e1 ) = μB− (e2 ) and μB+ (e1 ) = μB+ (e2 ). Proposition 8 Let G = (A, B) be an IVFG on a cycle G ∗ = (V , E). If G is an IVFC, then G does not contain weak arcs. Proof Let G = (A, B) be an IVFG on a cycle G ∗ = (V , E). Suppose G is an IVFC. We have to prove that G does not contain weak arcs. For that, suppose G contain at least one weak arc, say, (u, v). Then, (u, v) is the unique weakest arc of G which is a contradiction to our assumption that G is an IVFC. Hence, G does not contain weak arcs.  The following theorem gives a characterization of IVFCs in terms of β strong arcs. Theorem 4 Let G = (A, B) be an interval-valued fuzzy graph on a cycle G ∗ = (V , E). Then, G is an interval-valued fuzzy cycle if and only if G has at least two β strong arcs. Proof Let G = (A, B) be an IVFG on a cycle G ∗ = (V , E). Let G be an IVFC. Then, by definition of an IVFC, there exists at least two weakest arcs e1 = (u, v) and e2 = (x, y) in G such that − μ− B (e1 ) = μB (e2 ) and

+ μ+ B (e1 ) = μB (e2 ).

Since G ∗ is a cycle, P : G ∗ − (u, v) is the only u − v path in G − (u, v). Since P : G ∗ − (u, v) contains e2 = (x, y) and e2 is a weakest arc, we have NCONNG−(u,v) (u, v) = Sμ− (P) = μB− (e2 ) = μB− (e1 ) = μB− (u, v) and PCONNG−(u,v) (u, v) = Sμ+ (P) = μB+ (e2 ) = μB+ (e1 ) = μB+ (u, v). Hence, (u, v) is β strong. Applying the above arguments to arc (x, y), we can prove that (x, y) is also β strong. Conversely, suppose that G has at least two β strong arcs. Let (u, v) and (x, y) be any two β strong arcs of G. Now, we prove that both (u, v) and (x, y) are weakest arcs of G. Since (u, v) is β strong, μ− B (u, v) = NCONNG−(u,v) (u, v)

Interval-Valued Fuzzy Trees and Cycles

and

261

μ+ B (u, v) = PCONNG−(u,v) (u, v).

Since G ∗ is a cycle the only u − v path in G − (u, v) is P1 : G ∗ − (u, v). Hence, μ− B (u, v) = NCONNG−(u,v) (u, v) = Sμ− (P1 ) and

μ+ B (u, v) = PCONNG−(u,v) (u, v) = Sμ+ (P1 ).

− + Clearly, (x, y) belongs to P1 . Therefore, μ− B (x, y) ≥ Sμ− (P1 ) = μB (u, v) and μB (x, y) + ≥ Sμ+ (P1 ) = μB (u, v). Similarly, beginning with (x, y) and arguing as above, we − + + − − have, μ− B (u, v) ≥ μB (x, y) and μB (u, v) ≥ μB (x, y). Hence μB (u, v) = μB (x, y) + + − + and μB (u, v) = μB (x, y) and all other arcs in the cycle have μB and μB values + greater than or equal to μ− B (u, v) and μB (u, v), respectively. So (u, v) and (x, y) are two weakest arcs, and hence, by definition G is an IVFC. 

Proposition 9 A regular IVFG on an odd cycle is an IVFC. Proof Let G be a regular IVFG on an odd cycle. By Theorem 4.15 of [6], every two arcs of G are equal. Then, all the arcs of G are β strong. Hence, by Theorem 4, G is an IVFC.  Remark 4 A regular IVFG on an even cycle need not be an IVFC always. This is clear from Example 3. Example 3 Consider the IVFG G given in Fig. 3. Clearly, we can see that G is a regular IVFG on an even cycle. But G has no weakest arcs, and hence, G is not an IVFC. Proposition 10 Let G = (A, B) be a cycle on G ∗ = (V , E). If G is an IVF cycle, then G is not an IVFT. Fig. 3 Example to illustrate Remark 4

[0.3, 0.4]

b

[0.2, 0.5]

[0.2, 0.5]

a

d

[0.3, 0.4]

c

262

A. M. Philip et al.

Fig. 4 Example to show that the converse of Proposition 10 is false

[0.4, 0.8]

b

[0.2, 0.7]

[0.5, 0.6]

a

d

[0.3, 0.9]

c

Proof Let G = (A, B) be an IVFG on the on G ∗ = (V , E). Suppose G is an IVF cycle. Then, by Theorem 4, G contains at least two β strong arcs. Then, by Theorem 3, G is not an IVFT.  Example 4 Consider the IVFG G given in Fig. 4. Clearly, we can show that arcs (a, b), and (c.d ) are α strong arcs, (b, c) is δα and (a, d ) is αδ. Hence, by Theorem 3, G is not an IVFT. But we can also see G is not an IVF cycle.

4 Applications of Interval-Valued Fuzzy Trees and Cycles A graph represents objects and their relations. Fuzzy graph is used in defining the strength of relationship between the objects on a scale of 0–1. Although, this serves the purpose of several networks, the uncertainty in the level of relationships may not be expressible always. In this context, interval-valued fuzzy graphs have relevance. The edge labels are represented by subintervals of [0, 1]. In several situations, such as impressions of people among each other in a social network or in community, or voltage variations on the electrical lines in an electric network, etc., can be best expressed with the help of intervals. In the case of transport networks, the intensity of the vehicles between the junctions can be represented as intervals. Water-level changes, salinity levels, temperature levels, etc., could be expressed as intervals in a water network. In all these cases, trees and cycles are of great significance.

Interval-Valued Fuzzy Trees and Cycles

263

5 Conclusion In this chapter, we have defined interval-valued fuzzy tree (IVFT) and interval-valued fuzzy cycle (IVFC) and studied about them. We have characterized IVFT in terms of the cycles contained in it and also in terms of the arcs associated with it. We have obtained a characterization of an IVFC in terms of β strong arcs. This work will be very helpful in the future research of interval-valued fuzzy graphs.

References 1. Akram M, Dudek WA (2011) Interval-valued fuzzy graphs. Comput Math Appl 61(2):289–299 2. Das K, Samanta S, De K (2021) Fuzzy chordal graphs and its properties. Int J Appl Math 7(36). https://doi.org/10.1007/s40819-021-00959-x 3. Mathew S, Sunitha MS (2009) Types of arcs in a fuzzy graph. Inform Sci 179(11):1760–1768 4. Mordeson JN, Nair PS (1996) Cycles and cocycles of fuzzy graphs. Inform Sci 90:39–49 5. Pal M, Rashmanlou H (2013) Irregular interval-valued fuzzy graphs. Ann Pure Appl Math 3(1):56–66 6. Philip AM (2017) Interval-valued fuzzy bridges and interval-valued fuzzy cutnodes. Ann Pure Appl Math 14(3):473–487 7. Philip AM, Kalayathankal SJ, Kureethara JV (2019) On different kinds of arcs in intervalvalued fuzzy graphs. Malaya J Math 7(2):309–313 8. Philip AM, Kalayathankal SJ, Kureethara JV (2019) Characterization of interval-valued fuzzy bridges and cutnodes. AIP Conf Proc 2095:030002 9. Rosenfeld A (1975) Fuzzy graphs. In: Zadeh LA et al (eds) Fuzzy sets and their applications to cognitive and decision processes, pp 77–95. https://doi.org/10.1016/B978-0-12-775260-0. 50008-6 10. Sunitha MS, Vijayakumar A (1999) A characterization of fuzzy trees. Inform Sci 113:293–300

OntoQC: An Ontology-Infused Machine Learning Scheme for Question Classification D. Naga Yethindra, Gerard Deepak, and A. Santhanavijayan

Abstract The fundamental element for question answering is question classification. Though many papers have been presented on the following topic, this paper puts forth a powerful method for the process of question classification. The purpose of this paper is to improve the accuracy to the maximum extent for the classification of the question. In this paper, the experimentations are conducted on the Quora dataset which is preprocessed and then formalized for achieving ontology matching and mapping using concept similarity. The ontology matching is achieved using SemantoSim measure under ant colony optimization for deriving the optimal entities from a set of feasible solutions. The feature selection is achieved by encompassment of question category ontology. The ontology used in this paper is domain specific. Semantic labeling with the support of LOD cloud is realized to enhance the feature selection. The XGBoost classifier is encompassed for achieving classification of questions. An overall accuracy of 95.53% and the average precision is 96.9% has been yielded by the proposed OntoQC model. Keywords Ant colony optimization · LOD cloud · Question classification · Semantic labeling · Semantic similarity

1 Introduction The vast quantity of resources on the Internet has led to generating a limitless amount of data which brings the need for question classification. It is needed in order to bring out the precise and compelling results from the big amount of data. Question classification helps in question answering which is an integral part of search engines meaning that it helps the search engines to understand what the user needs and D. Naga Yethindra Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, India G. Deepak (B) · A. Santhanavijayan Department of Computer Science and Engineering, National Institute of Technology, Tiruchirapalli, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_23

265

266

D. Naga Yethindra et al.

presents significant signals for the framework. Classification can be performed on any dataset and it has been associated semantically in several frameworks [1]. It can be utilized, wherever there is a lot of scope to map enormous amounts of printed information. The search engines even in the fields like marketing where genuine as the correspondence among brands and clients happen. The use of marketing has shot up significantly in the last few years; therefore, people are concentrating more on user’s preferences to achieve better commitment. To make it possible, it has become a must for the marketing people to utilize user’s search history and investigating those turns into an unquestionable requirement to take care, and this happens with the help of question classification. Question classification can also be made use in question answering. It serves question answering in appropriate response selection by categorizing which impacts the efficiency of it straightforwardly. It does this by mapping it to the exact class by analyzing the type of question asked. For instance, if there is a question as “Who is the first batsman to score a century in test cricket?” and question classification helps the system to understand that the answer to this question is a person’s name, so it makes the system to search for answers in the class of names of the persons; this reduces the time and also improves the efficiency of the system as instead of searching for each and every class in the ontology it searches only one particular class and gets the desired answer from it. Though question classification is a type of text classification, they are unconnected. Unlike text classification, question classification needs to preserve the contextual meaning of the sentences or words. They undergo extra rounds of classification than text classification because it has less vocabulary-based data than normal text. In TC, when one round of classification happens, the sentence is broken, and each word is mapped to a particular class of the ontology, whereas in question classification, another round of classification has to take place in order to acquire high characterizing exactness. For instance, a single question can be asked in multiple ways, and the machine has to understand this, “what’s up?” and “what are you doing?” have the same meaning, but the words are different, so only by training, the machine with multiple rounds of classification, the machine will recognize the core meaning of the text. However, in this paper, the ultimate intention is to do the preliminary step of all these methods which is question categorization. Motivation: The categorization of questions synthesized manually doesn’t produce satisfactory results. So, it is exhilarating to create a machine which classifies and categorizes questions which is the mandatory step for any of the applications offered by question classification. The approach of this paper makes it stand out from other papers; this paper makes use of domain ontology integrated with ontology matching and mapping. On top of this, semantic labeling combined with LOD cloud is also utilized and, finally, classified using XGBoost classifier. Contribution: This paper brings forth an approach to classify questions using semantic labeling and XGBoost classification. The experimentations are conducted on Quora dataset. The data after being extracted undergo preprocessing and is converted into executable words. 
Then, the preprocessed data are mapped using concept similarity up to four levels of subgraph on getting help with the domain and

OntoQC: An Ontology-Infused Machine Learning …

267

question category ontology which is crafted using WebProtege and OntoCollab environment. The mapped data then go through ontology matching using SemantoSim measure which happens under the ant colony optimization. The data then move into semantic labeling for achieving domain concept conformance and enriching the data as it is important for accurate results. SPARQL endpoints are used to extract information from the LOD cloud which assists semantic labeling of the data. Then, ontology mapped and matched data break into two parts to get ready for classification. One part is 80% of the data which are used for training, and the other 20% is used for testing. XGBoost classifier is used for data classification. The results are attained from the XGBoost classifier. Organization: The pattern of the paper is arranged as follows. Section 2 comprises the work which is related to the field of study and experimentation. Section 3 constitutes the proposed architecture for question classification. Section 4 is made up of the results generated from the system and various comparisons among multiple methods. Section 5 consists of conclusions.

2 Related Work Mohasseb et al. [2] have put forth a system for question classification and categorization using a grammar-based system by exploiting space explicit data and also preserving the construction of the questions. All this is an automated process using AI. The name of this model is called GQCC. Xu et al. [3] developed a SVM-based way to deal with classification of questions. Also, at that point, reliance relations and high-recurrence words are consolidated into our gauge framework to increase the efficiency of the framework. Li et al. [4] is a theory paper which compares several approaches like voting, AdaBoost ANN, and TBL for the accuracy of question classification. Megha et al. [5] have introduced an AI-based way to deal with question grouping, displayed as a directed learning characterization issue. To prepare the learning calculation, a rich arrangement of lexical, syntactic, and semantic highlights is created among which is the inquiry headword and hypernym, which is considered as essential for precise inquiry arrangement. Haihong et al. [6] propose a model by embracing the LSTM neural organization for single- or dual-channel input, singleor multi-granularity convolution part, and with or without fast channel. Direct investigation and different components on the utilization of techniques imbibing control variables for various investigations with the ideal model and its boundary settings has been put forth. Ikonomakis et al. [7] created another calculation using SVMs for doing active learning. They presented three algorithms using the advantage provided by the duality between parameter space and feature space. The three algorithms are simple method, ratio method, and hybrid method. Simple method is the quickest at computation. Ratio method is used when asking each question is expensive in comparison to registering time. It is feasible to join the advantages of the ratio and simple techniques which are known as hybrid method. Scott and Matwin [8] portray a technique for

268

D. Naga Yethindra et al.

consolidating WordNet information into text portrayal that can lead to huge decreases in blunder rates on specific sorts of text arrangement undertakings. The strategy utilizes the lexical and semantic information encapsulated in WordNet to move from a sack-of-words portrayal to a portrayal dependent on hypernym thickness. The proper incentive for the tallness of speculation boundary h relies upon the attributes of every arrangement task. A side advantage of the hypernym thickness portrayal is that the order rules instigated are frequently less difficult and more fathomable than rules instigated utilizing the sack-of-words. In [9–14], several ontological models in support of the proposed literature are depicted.

3 Proposed Architecture The goal of the proposed system is to classify questions into specific categories which are acquired from the dataset which is procured from Web sites like Quora using Python libraries. The overall architecture of the proposed model is depicted in Fig. 1. This is achieved using XGBoost classification and ant colony optimization with the help of ontology and LOD cloud. The strategy proposed for increasing the efficiency of question classification is partitioned into three levels. The first step of the first level is Web scraping of a dataset from Quora. The retrieved dataset is then preprocessed with the help of three methods. The first method is tokenization which breaks the data into separate words. The second method is lemmatization which

Fig. 1 Proposed system architecture

OntoQC: An Ontology-Infused Machine Learning …

269

converts these separated words into its significant base structure. The final method is stop word removal or stemming which removes all the prefixes and suffixes of the word which is of no use to the meaning of the word. All the stop words such as ‘a’, ‘the’, ‘an’, and ‘and’ are detached from the words to make it more precise. The preprocessing of data is very crucial as this makes the data intelligible for the machine to process. The data after preprocessing will be ready to enter into the next levels. In the second level, an ontology is developed using an automatic and dynamic ontology creation tool called OntoCollab. It is created by incorporating all the categories related to questions. OntoCollab is used in order to make the features more suitable and reliable. Then, the ontology is manually modeled using an opensource tool, namely WebProtege using the markup keywords. The one created is a static ontology. Question category and domain ontology are created in this paper to improve the efficiency of the data by classifying and categorizing it. After the creation of the ontology, it is converted into a .csv file. The csv file of the ontology is yielded in order to enter into the next step of the second level which is ontology matching and mapping. The ontology matching and mapping happens between the dataset and the ontology to make the data more sorted and cataloged. Ontology mapping is used in this paper to make it less expensive and reduce the computational process. It maps the ontology with the dataset up to four levels of subgraph based on which ontology matching happens. Using this, all the relationships, axioms, and description logics analysis are mapped. Mapping happens using concept similarity. In data recovery, the similarity is consistently an impression of the semantic matching with degree about the content and the question. We use comparability to portray the level of likeness of two concepts. The formula for concept similarity is given by, Sim(C1, C2) =

n 

θi (C1, C2)∂i

(1)

i=1

In Eq. (1), C1 and C2 are concepts in the ontology; Sim(C1,C2) is the similarity between two concepts. When Sim(C1, C2) produces a result as zero, then it means that the concepts are not similar; if they are similar, then Sim(C1,C2) is one. After mapping, ontology matching happens with the help of semantic similarity using the method SimantoSim measure. Ontology matching is the activity to find the relationships between two concepts means two categories in this context. As already the dataset is mapped with the ontology, matching just makes the data more classified with the help of the sub-graphs mapped using concept similarity. The capacity to make relationships between various information hubs is very crucial toward the integration of data stored in them. SemantoSim produces the semantic similarity between the keywords(x, y). It is computed using, SemantoSim(x, y) =

pmi(x, y) + p(x, y) log[ p(x, y)] p(x) · p(y)] + log[ p(y, x)]

(2)



In Eq. (2), pmi(x, y) is the pointwise mutual information between the keywords; p(x, y) is the probability of the occurrence of keyword y with x; p(x) and p(y) are the probabilities of the presence of the terms x and y individually. In this paper, ontology matching is infused with ant colony optimization (ACO): the SemantoSim measure is applied under ACO to make the data more relevant and meaningful. ACO is a procedure that reduces the complexity of computational problems by discovering good solutions over graphs. Its working is modeled on an actual ant colony: real ants stroll around in search of food, and once they find it and return, they leave a pheromone trail so that the other ants follow the trail instead of roaming around. Similarly, in ant colony optimization digital ants find a path to a solution by exploring the space of possible solutions; the ants record their viable solutions and the merit of each, so that later ants find answers more easily, and in subsequent cycles the simulated ants locate improved solutions. After these steps, the dataset moves to semantic labeling, the method of mapping attributes in information sources to classes in the ontology. It happens with the help of the LOD cloud using SPARQL endpoints. Though mapping happens in the previous step, semantic labeling is done to enrich the data and attain domain concept conformance: it assigns a role to each word or sentence, which allows the machine to understand them better and improves the usage of the words in different sentences or paragraphs. The linked open data (LOD) cloud is a loosely linked multitude of information and data provided via the World Wide Web, which is why both an ontology and the LOD cloud are used in this paper; it permits both basic and complex query-oriented access using either SPARQL endpoints or SQL. All these steps contribute to better feature selection. After undergoing them, the dataset becomes an ontology-mapped dataset and moves to the third and final level. It is split into two parts, with 20% provided for testing and 80% for training, for classification by the XGBoost classifier. XGBoost performs both classification and regression; in this paper, only the classifier is used. It works well even if some of the data is missing, its accuracy is better than that of almost any other classifier, and it is very adaptable. XGBoost utilizes many weak learners (decision trees) to build a strong learner; it is therefore referred to as an ensemble learning strategy, since it utilizes the output of numerous models in the final prediction. It combines software and hardware optimization techniques to yield strong results while using fewer computing resources in less time. After the classification, the necessary results are provided.
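To make the final level concrete, the following is a minimal, hypothetical sketch of the pipeline in Python. It is not the authors' code: a plain TF-IDF vectorizer stands in for the ontology-mapped features described above, and the file and column names are placeholders.

import nltk
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')

df = pd.read_csv('quora_questions.csv')        # hypothetical scraped dataset
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

def preprocess(text):
    # level one: tokenization -> lemmatization -> stop word removal
    tokens = word_tokenize(text.lower())
    return ' '.join(lemmatizer.lemmatize(t) for t in tokens
                    if t.isalpha() and t not in stop_words)

X = TfidfVectorizer().fit_transform(df['question'].map(preprocess))
y = LabelEncoder().fit_transform(df['category'])

# level three: 80% training / 20% testing split, then XGBoost classification
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = XGBClassifier().fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, model.predict(X_test)))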



4 Implementation

The question classification system put forth in this paper runs on a Windows 10 operating system with an Intel i7 processor and 16 GB RAM, using Python 3.9.0 and the Google Colab environment. The Python libraries included in the implementation are NLTK, WordNet Lemmatizer, BeautifulSoup, matplotlib, and scikit-learn. The other systems considered for comparison, apart from the proposed ontology-infused question classification, are Bloom's taxonomy with TF-IDF and Word2Vec, SVM + hybrid feature extraction, and CNN. Precision (%), recall (%), accuracy (%), F-measure (%), and FNR (%) are the metrics calculated on the extracted dataset. From Table 1, it is evident that OntoQC performs considerably better than the baseline models and delivers the best results among all the approaches considered. Its precision improves on the TF-IDF [17] model by 8.47% and on the SVM-hybrid [15] model by 14.72%; its accuracy improves by 8.49% over the TF-IDF model and by 14.63% over the SVM-hybrid model. OntoQC's better results come from blending ontology mapping and semantic labeling using the LOD cloud with XGBoost classification. Bloom's taxonomy with modified TF-IDF and Word2Vec has a precision of 88.42%, a recall of 85.67%, and an accuracy of 87.04%; the results are not convincing. It merges two features, TF-IDF and Word2Vec, fed into three classification systems: support vector machine, k-nearest neighbor, and logistic regression. The reason for the disappointing results is that Bloom's taxonomy and TF-IDF are good to use separately, but the mix is a dull one: TF-IDF is based on the rarity and frequency of words, whereas here relations and associated entities have to be chosen, so its performance is not good. SVM + hybrid feature extraction delivered 82.17%, 79.63%, and 80.90% for precision, recall, and accuracy, respectively. These results are far below those of the knowledge-centric models; the system was built to improve the exactness of existing Bangla question classification using SVM with different kernel functions, assembled from a unigram-dependent dataset and a predefined feature set.

Table 1 Comparison of performance of the proposed OntoQC with other approaches

Model                                                    Precision (%)  Recall (%)  Accuracy (%)  F-measure (%)  FDR
Bloom's taxonomy with modified TF-IDF and Word2Vec [17]  88.42          85.67       87.04         87.02          0.12
SVM + hybrid feature extraction [15]                     82.17          79.63       80.90         80.88          0.19
CNN [16]                                                 93.18          90.17       91.67         91.65          0.08
OntoQC                                                   96.89          94.18       95.53         95.51          0.04



The performance of this system is not up to the mark because SVM is a very traditional, conventional model, and the merging into a hybrid model does not complement it well, since SVM is particularly slow in training on the datasets. CNN [16] produced 93.18% precision, 90.17% recall, and 91.67% accuracy; though the results look promising, they are not as good as OntoQC's. This is probably because in CNN, automatic handcrafted feature selection takes place; also, neural networks in general are a black box. Relation extraction should always be governed, which does not happen in the case of CNN and contributes to its average performance; the max-pool operation also makes it computationally slow and complex. OntoQC delivers strong results with 96.89% precision, 94.18% recall, and 95.53% accuracy. The reason is that the proposed model is a semantics-infused machine learning model: feature selection happens based on ontology matching instead of separate feature selection, which is why the SemantoSim measure is used, strengthening the similarity between two concepts more accurately. Also, semantic labeling is done with the help of the LOD cloud, which ensures a high coverage of entities and provides background knowledge for the data. The XGBoost classifier utilized in this model also handles missing data well with its built-in features and is computationally faster than gradient boosting. The very low FNR of just 0.044% for OntoQC stems from the heterogeneous resources gathered from the LOD cloud and the ontology, which make it very efficient. Figure 2 displays the comparison of F-measure versus number of instances for the five systems. It is evident from Fig. 2 that OntoQC produces better results than any other system: as the number of instances increases, the F-measure (%) of all the other systems reduces dramatically, whereas in OntoQC the reduction is minimal. The reason for this performance is the mix of ontology and LOD cloud, which provides the necessary auxiliary knowledge for the system.

Fig. 2 F-measure (%) versus no. of instances



The use of external resources helps it in choosing the precise features during feature selection, and hence the high performance has been delivered.
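For reference, the metrics in Table 1 can be computed with scikit-learn as in the short, illustrative sketch below; the label arrays are placeholders, not the paper's data.

from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# placeholder labels standing in for the 20% test split of the question dataset
y_true = ['what', 'why', 'how', 'what', 'why']
y_pred = ['what', 'why', 'how', 'why', 'why']

print('Accuracy :', accuracy_score(y_true, y_pred))
# macro-averaging weighs every question category equally
print('Precision:', precision_score(y_true, y_pred, average='macro'))
print('Recall   :', recall_score(y_true, y_pred, average='macro'))
print('F-measure:', f1_score(y_true, y_pred, average='macro'))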

5 Conclusion

A novel semantics-injected, machine learning-incorporated technique for question classification has been proposed. OntoQC is contrasted with multiple baseline and standard methodologies, and benchmarks show it to be superior in performance. OntoQC is a hybrid model for question classification that encompasses the XGBoost classifier intelligently pipelined with a knowledge amalgamation model using ontologies. Techniques such as ontology mapping and matching are instilled into the model for knowledge enrichment, and the semantic similarity models under the ant colony algorithm ensure a strong relevance computation paradigm. The model's low error rate and high efficiency play the crucial part in producing accurate and promising results: it manifests an FNR of 0.044% and an average precision of 96.89%. The blend of ontology matching and mapping with semantic labeling backed by the LOD cloud enhances the feature selection that is then provided to the XGBoost classifier for classification, boosting the proficiency of the model.

References

1. Malika S, Jaina S (2018) Semantic ontology-based approach to enhance text classification. In: CEUR workshop proceedings, vol 2786, pp 85–98
2. Mohasseb A, Bader-El-Den M, Cocea M (2018) Question categorization and classification using grammar based approach. Inf Process Manag 54(6)
3. Xu S, Cheng G, Kong F (2016) Research on question classification for automatic question answering. In: 2016 international conference on Asian language processing (IALP), pp 218–221
4. Li X, Huang XJ, Wu L (2005) Question classification using multiple classifiers. In: Proceedings of the fifth workshop on Asian language resources (ALR-05) and first symposium on Asian language resources network (ALRN)
5. Mishra M, Mishra V, Sharma HR (2013) Question classification using semantic, syntactic and lexical features. Int J Web Semantic Technol 4. https://doi.org/10.5121/ijwest.2013.4304
6. Haihong E, Hu Y, Song M, Ou Z, Wang X (2017) Research and implementation of question classification model in Q&A system, pp 372–384. https://doi.org/10.1007/978-3-319-65482-9_25
7. Ikonomakis M, Kotsiantis S, Tampakas V (2005) Text classification using machine learning techniques. WSEAS Trans Comput 4(8):966–974
8. Scott S, Matwin S (1998) Text classification using WordNet hypernyms. In: Usage of WordNet in natural language processing systems
9. Yethindra DN, Deepak G (2021) A semantic approach for fashion recommendation using logistic regression and ontologies. In: 2021 international conference on innovative computing, intelligent communication and smart electrical systems (ICSES). IEEE, pp 1–6



10. Roopak N, Deepak G (2021) KnowGen: a knowledge generation approach for tag recommendation using ontology and Honey Bee algorithm. In: European, Asian, Middle Eastern, North African conference on management & information systems. Springer, Cham, pp 345–357
11. Krishnan N, Deepak G (2021) KnowCrawler: AI classification cloud-driven framework for web crawling using collective knowledge. In: European, Asian, Middle Eastern, North African conference on management & information systems. Springer, Cham, pp 371–382
12. Roopak N, Deepak G (2021) OntoJudy: an ontology approach for content-based judicial recommendation using particle swarm optimisation and structural topic modelling. In: Data science and security. Springer, Singapore, pp 203–213
13. Manaswini S, Deepak G (2021) Towards a novel strategic scheme for web crawler design using simulated annealing and semantic techniques. In: Data science and security. Springer, Singapore, pp 468–477
14. Deepak G, Rooban S, Santhanavijayan A (2021) A knowledge centric hybridized approach for crime classification incorporating deep bi-LSTM neural network. Multimed Tools Appl 1–25
15. Nirob SMH, Nayeem MK, Islam MS (2017) Question classification using support vector machine with hybrid feature extraction method. In: 2017 20th international conference of computer and information technology (ICCIT), pp 1–6
16. Pota M, Esposito M, De Pietro G, Fujita H (2020) Best practices of convolutional neural networks for question classification. Appl Sci 10(14):4710
17. Mohammed M, Omar N (2020) Question classification based on Bloom's taxonomy cognitive domain using modified TF-IDF and word2vec. PLoS ONE 15(3):e0230442

A Study of Preprocessing Techniques on Digital Microscopic Blood Smear Images to Detect Leukemia

Ashwini P. Patil, Manjunatha Hiremath, and K. Kavipriya

Abstract Digital microscopic blood smear images can get distorted by noise resulting from excessive staining during slide preparation or from external factors during image acquisition. Noise in the image can affect the output of subsequent image processing steps and impact the accuracy of the results. Hence, it is always better to denoise an image before feeding it to an automatic diagnostic system. Many noise reduction filters are available, so selecting the best filter is also very important. This paper presents a comparative study of some common spatial filters (the Wiener filter, bilateral filter, Gaussian filter, median filter, and mean filter) that are efficient in noise reduction, along with their summary and experimental results. Comparative analysis of the results based on PSNR, SNR, and MSE values shows that the median filter is the most suitable method for denoising digital blood smear images. Keywords Leukemia · Noise reduction · Bilateral · Wiener · Median · Gaussian

1 Introduction

Digital microscopic blood smear images are used for manual and automatic diagnosis of leukemia. Leukemia can be acute or chronic; among these, acute leukemia is life threatening and claims many lives across the world because little time is available for diagnosis. Diagnosing leukemia manually is time-consuming and can also be prone to human error, so an automatic detection system is crucial for early detection of this disease. White blood cells (WBC) from digital blood smear images are analyzed to determine whether cells are cancerous or healthy based on the extracted features. Digital images can get distorted by various noises during their acquisition; these noises can affect the features to be extracted and impact the overall accuracy of diagnosis. The




quality of digital microscopic blood smear images plays an important role in the automatic diagnostic system, and hence preprocessing techniques are necessary to improve image quality. Image preprocessing includes various techniques such as contrast enhancement, image scaling, color space transformation, image restoration, and noise removal. This paper focuses on some of the denoising methods often used in the medical image processing field. Depending on the image type, image noise is classified as photoelectronic, impulsive, or structured noise; common examples are Gaussian noise, Poisson noise, salt-and-pepper noise, and speckle noise. Many filters are available to reduce these noises in digital images. Filters can typically be categorized into time/spatial domain filters and frequency domain filters. This paper reviews and analyzes some of the spatial domain filters used in noise removal and presents a comparative study of these filters on digital microscopic blood smear images. The paper is structured as follows: this section defines the problem domain, Sect. 2 presents a related literature review, Sect. 3 describes the datasets and various spatial domain filters, the experimental findings of the implemented filters are discussed in Sect. 4, and Sect. 5 concludes the study.

2 Literature Review

Noise removal filters are used by many researchers to denoise images as part of the preprocessing techniques in the medical image processing field. Scotti [1] used a low-pass Gaussian filter to reconstruct the background of images. Nasir et al. [2] proposed two noise reduction techniques, a Gaussian filter and a pixel-removal technique, to remove small spots assumed to be noise in the original images; the pixel-removal technique removes all objects with fewer than 100 pixels. Mohapatra et al. [3] reported that noise accumulated through staining during acquisition is removed using a selective median filter. Khashman and Abbas [4] applied a special filter and automatic edge detection to images before passing them to a neural classifier. Joshi et al. [5] first segmented the images and then applied a 3 × 3 minimum filter on the segmented nucleus images to denoise the image, preserve edges, and increase the darkness of the nuclei. Mohammed et al. [6] converted images to the C-Y color space and applied a 5 × 5 median filter. Patel and Mishra [7] used a median filter to remove noise and a Wiener filter to remove blurriness in the image. Madhloom et al. [8] implemented a median filter to enhance the discontinuity in pixels. Mishra et al. [9] improved images using Wiener filtering. Rawat et al. [10] applied an order-statistic rank filter for noise reduction in the image.



3 Material and Methods

For this study, images were collected from two publicly available online datasets: the ALL-IDB dataset and the ASH dataset. ALL-IDB is a public dataset of microscopic images intended for testing and comparing image segmentation and classification methods. The American Society of Hematology (ASH) image bank is a public resource of high-quality, peer-reviewed hematologic images. A total of 50 images were considered for this study: 10 images of acute myeloid leukemia (AML) from the ASH image bank and 40 images of acute lymphoid leukemia from the ALL-IDB dataset. Numerous denoising techniques are available in image preprocessing. A spatial domain filter operates on each pixel to enhance the overall image, and spatial filters are often used to enhance digital blood smear images. Some of the commonly used spatial filters are discussed in detail below.
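A minimal sketch of assembling such an image set with OpenCV is shown below; the folder names are hypothetical placeholders, not the datasets' actual layout.

import glob
import cv2

# hypothetical local copies of the two public datasets
paths = glob.glob('ALL_IDB/*.jpg') + glob.glob('ASH/*.jpg')
images = [cv2.imread(p) for p in paths]
print(len(images), 'images loaded')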

3.1 Wiener Filter

The Wiener filter was published by Norbert Wiener in 1949. It is an optimal filter for reducing additive Gaussian noise and minimizes the mean square error (MSE). The pixel-wise adaptive low-pass Wiener filter overcomes the limitation of the linear Wiener filter: the linear Wiener filter applies the same filter throughout the image, whereas the adaptive Wiener filter varies itself based on statistics estimated from a local neighborhood of each pixel. The adaptive Wiener filter is formulated as

$$\hat{f}(x) = m_f(x) + \frac{\sigma_f^2(x)}{\sigma_f^2(x) + \sigma_n^2}\,\bigl(g(x) - m_f(x)\bigr) \quad (1)$$

where $m_f$ is the local mean, $\sigma_f^2$ is the local variance, and $\sigma_n^2$ is the noise variance [11].
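As an illustration only (not the authors' code), the adaptive Wiener filter of Eq. (1) is available in SciPy; the image path below is a hypothetical placeholder.

import cv2
from scipy.signal import wiener

# load a grayscale blood smear image (hypothetical path)
img = cv2.imread('smear.png', cv2.IMREAD_GRAYSCALE).astype(float)

# 5 x 5 pixel-wise adaptive Wiener filter; when the noise power is not
# given, it is estimated as the average of the local variances
denoised = wiener(img, mysize=5)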

3.2 Bilateral Filter

The bilateral filter is a non-linear filter implemented to achieve noise reduction and smoothing while preserving the edges of the image [12]. In the bilateral filter, each pixel's intensity is replaced by a weighted average of the intensity values of its neighboring pixels. The bilateral filter can be formulated as

$$\hat{f}(x) = \frac{1}{W_p} \sum_{x_i \in S_x} g(x_i)\, f_r\bigl(\|g(x_i) - g(x)\|\bigr)\, g_s\bigl(\|x_i - x\|\bigr) \quad (2)$$

where $W_p$ is the normalization factor, $S_x$ is the window centered at $x$, and $f_r$ and $g_s$ are the range and spatial weighting kernels.
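A corresponding OpenCV call, shown here only as an illustrative sketch with typical parameter values:

import cv2

img = cv2.imread('smear.png')                  # hypothetical path
# d = 9: pixel neighborhood diameter; 75 and 75: range (intensity)
# and spatial sigmas of the two weighting kernels
denoised = cv2.bilateralFilter(img, 9, 75, 75)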



3.3 Gaussian Filter

The Gaussian filter is also known as Gaussian smoothing or Gaussian blur. It is a non-uniform low-pass filter that blurs the image while also reducing its noise. The Gaussian kernel is a bell-shaped hump. For image noise filtering, a two-dimensional Gaussian function is used, formulated as

$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \quad (3)$$

It is effective in smoothing the image but ineffective at removing salt-and-pepper noise.

3.4 Median Filter

The median filter is a widely used non-linear noise reduction technique. It uses a window of odd size that slides through the whole image and replaces each pixel of the image with the median of its neighbors. It preserves the edges in the image while reducing noise and is especially good at removing salt-and-pepper noise. The median filter is formulated as

$$\hat{f}(x, y) = \underset{(s,t) \in S_{xy}}{\operatorname{median}} \{g(s, t)\} \quad (4)$$

3.5 Mean Filter

The mean filter is simple and most commonly used for reducing noise. It is a linear filter similar in operation to the median filter: the window slides over the image, and the pixel at the center of the window is replaced by the average of its neighboring pixels:

$$\hat{f}(x, y) = \frac{1}{mn} \sum_{(s,t) \in S_{xy}} g(s, t) \quad (5)$$

where $mn$ is the size of the window.
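For reference, the Gaussian, median, and mean filters of Eqs. (3)-(5) have direct OpenCV counterparts; a minimal sketch with a hypothetical image path and typical 5 × 5 windows:

import cv2

img = cv2.imread('smear.png', cv2.IMREAD_GRAYSCALE)   # hypothetical path

gaussian = cv2.GaussianBlur(img, (5, 5), 0)  # sigma derived from kernel size, Eq. (3)
median = cv2.medianBlur(img, 5)              # 5 x 5 median window, Eq. (4)
mean = cv2.blur(img, (5, 5))                 # 5 x 5 averaging window, Eq. (5)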



4 Results and Discussion

This section presents the results obtained after applying the spatial filters to microscopic digital blood smear images. Figure 1a shows the original image, and Figs. 1b, 2, and 3 show the denoised output images; Fig. 4 shows the histograms of the original and output images. From the filtered images, we can see that the Gaussian filter blurred the image and that the small black spots caused by staining are not completely removed. The Wiener filter and the mean filter smoothened the images but could not get rid of the black spots in the background. The bilateral filter smoothens the image and preserves the edges but is not very efficient at removing the black spots. The median filter gives the best output: it smoothens the images while preserving the edges and also eliminates the black spots. The filtered images were evaluated based on PSNR, SNR, and MSE values.

Fig. 1 Spatial filters a original blood smear image b Wiener filtered image

Fig. 2 Spatial filters a bilateral filtered image b Gaussian filtered image



Fig. 3 Spatial filters a median filtered image b mean filtered image

Fig. 4 Histogram a original blood smear image b Wiener filtered image c bilateral filtered image d Gaussian filtered image e median filtered image f mean filtered image

The signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) are both ratios of signal power to noise power, but PSNR considers only peak signals, i.e., the high-intensity regions of the image. In general, the greater the PSNR value, the greater the quality of the reconstructed image, so the PSNR value is considered better for quality estimation of reconstructed images. The mean squared error (MSE) is the average squared difference between the denoised image and the original image; the lower the MSE value, the lower the error. Table 1 shows the evaluation metrics, and Fig. 5 shows their graphical representation. The evaluation metrics in Table 1 are the averages of the SNR, PSNR, and MSE values over all the images. From them, we can observe that, compared to all the other methods, the median filter's SNR is the highest, i.e., above 1, which indicates that the signal level is greater than the noise level. With respect to MSE, the bilateral and median filters perform better. After overall analysis of the



Fig. 5 Performance result of spatial filters

Table 1 Evaluation metric for denoised images

Filter      SNR      PSNR     MSE
Wiener      0.0053   40.9135  5.2817
Bilateral   0.0023   43.9820  2.9388
Gaussian    0.0049   40.4016  5.9427
Median      1.7212   42.5569  3.6175
Mean        0.0063   39.8473  6.7751

evaluation metrics, it is observed that the median filter gives the best results in denoising digital microscopic blood smear images.
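These evaluation measures can be computed as in the sketch below, which uses the usual textbook definitions; the exact scaling conventions used by the authors are not stated in the paper, so this is an assumption.

import numpy as np

def mse(original, denoised):
    # mean squared error: average squared difference between the images
    diff = original.astype(np.float64) - denoised.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, denoised, peak=255.0):
    # peak signal-to-noise ratio: higher means better reconstruction
    return 10.0 * np.log10(peak ** 2 / mse(original, denoised))

def snr(original, denoised):
    # ratio of signal power to residual noise power
    signal = original.astype(np.float64)
    noise = signal - denoised.astype(np.float64)
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))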

5 Conclusion

This paper briefly discusses spatial filters used for denoising digital microscopic blood smear images and presents a comparative study of them. In this work, the Wiener filter, bilateral filter, Gaussian filter, median filter, and mean filter were implemented, and the averages of the SNR, PSNR, and MSE values were calculated. Performance is evaluated based on a comparison of these SNR, PSNR, and MSE values and the visual quality of the output images. From the evaluation metrics and the filtered images, it is observed that the median filter gives better results than the other implemented spatial filters.

Acknowledgements The authors acknowledge Mr. Fabio Scotti of the University of Milan, Italy, for providing an image database.



References

1. Scotti F (2006) Robust segmentation and measurements techniques of white cells in blood microscope images. In: 2006 IEEE instrumentation and measurement technology conference proceedings
2. Salihah A, Nasir A, Mustafa N, Fazli N, Nasir M (2009) Application of thresholding technique in determining ratio of blood cells for leukemia detection
3. Mohapatra S, Patra D, Satpathi S (2010) Image analysis of blood microscopic images for acute leukemia detection. In: 2010 international conference on industrial electronics, control and robotics
4. Khashman A, Abbas HH (2013) Acute lymphoblastic leukemia identification using blood smear images and a neural classifier. Advances in computational intelligence, pp 80–87
5. Joshi MD, Karode AH, Suralkar SR (2013) White blood cells segmentation and classification to detect acute leukemia
6. Mohammed R, Nomir O, Khalifa I (2014) Segmentation of acute lymphoblastic leukemia using C-Y color space. Int J Adv Comput Sci Appl 5(11)
7. Patel N, Mishra A (2015) Automated leukaemia detection using microscopic images. Procedia Comput Sci 58:635–642
8. Madhloom HT, Kareem SA, Hany A (2015) Computer-aided acute leukemia blast cells segmentation in peripheral blood images. J Vibroeng 17:4517–4532
9. Mishra S, Majhi B, Sa PK, Sharma L (2017) Gray level co-occurrence matrix and random forest based acute lymphoblastic leukemia detection. Biomed Signal Process Control 33:272–280
10. Rawat J, Singh A, Bhadauria HS, Virmani J, Devgun JS (2017) Computer assisted classification framework for prediction of acute lymphoblastic and acute myeloblastic leukemia. Biocybern Biomed Eng 37(4):637–654
11. Tomasi C, Manduchi R (1998) Bilateral filtering for gray and color images. In: Sixth international conference on computer vision (IEEE Cat. No. 98CH36271)
12. Westin C (2009) Adaptive image filtering. Handbook of medical image processing and analysis, pp 19–33

Emotion Recognition of Speech by Audio Analysis using Machine Learning and Deep Learning Techniques

Ati Jain, Hare Ram Sah, and Abhay Kothari

Abstract Verbal communication is key to understanding any human being; it even reveals a person's various emotions. During the pandemic, online learning systems came into use everywhere, and they need to understand the learner's emotions for happy learning. Many applications have been introduced for speech analysis. Deep learning applications are growing fast in various fields such as image recognition, health care, natural language processing, machine and language translation, and many more; people never imagined that such applications would produce virtual assistants like Alexa. Along similar lines, emotion recognition is nowadays one of the important factors to be considered for further research. Audio analysis is a field that works on techniques such as digital signal processing, tagging, music classification, speech synthesis, and automatic speech recognition. Information for virtual assistants is extracted from the audio signals alone, so it is mandatory to analyze audio data. This paper describes how a tool can be built that understands and explores data for emotion recognition using machine and deep learning platforms.

Abstract Verbal communication is key for understanding any human being. Verbal communication even shows various emotions of a person. During pandemic, online learning systems are used everywhere that need to be understands the emotions of learner for happy learning. Many applications introduced for speech analysis. Deep learning applications are growing faster as time in various fields like image recognition, health care, natural language processing, machine and language translation, and many more. People never imagined that such applications will give produce Alexa like virtual assistants. In similar concepts, emotion recognition is also nowadays one of the important factors to be considered for further research. An audio analysis is a field that works on digital signal processing, tagging, music classification, speech synthesis, and automatic speech recognition like techniques. Information is extracted for the virtual assistants from the audio signals only. For the same work, it is mandatory to analyze audio data; in this paper, how a tool can be built that understands and explores data for emotion recognition using machine and deep learning platforms is mentioned and described. Keywords Emotion · Speech · Synthesis · Audio analysis

1 Introduction

Speech plays an important role in human life. It is a way through which human emotions and the intensity of speech, i.e., the affective state of the human, can be recognized. Many factors of speech such as tone, pitch, intensity, voice quality, and articulation also reflect emotions. Such systems are known as speech recognition systems, which are also based on the concept of how animals understand human language. Voice-controlled systems such as Google Assistant and Alexa are growing




rapidly nowadays, and more functions are still being added to them. Such products are also evolving due to their integration into laptops, phones, vehicles, electronic appliances, etc., so the market is growing fast, which brings this field to the forefront of research. Artificial intelligence is used in such systems to make them smart enough to understand human emotions when people speak to them and to give accurate results, which attracts people to use them more. The main aim of this paper is to make a device that, instead of merely recognizing audio to play music or follow an input command, also helps to understand the current mood of the speaker by recognizing human emotions through speech and acts on the command accordingly for the best results [1]. Detecting human emotion is normal and natural in our daily routine when people talk to each other, but it is a matter of research when a computer is expected to recognize it. Computers are expert at recognizing and retrieving content-based data and searching in depth [2]. Here, a technique is described through which various audio files are taken as input and standard emotions such as happiness, sadness, neutrality, boredom, fear, and anger can be detected. This paper is divided into several sections: Sect. 2 covers the literature survey, Sect. 3 discusses speech recognition and its various aspects, Sect. 4 presents the methods and steps used for emotion recognition from speech, and the last sections cover implementation details along with screenshots of the resulting accuracy.

2 Literature Survey

Audio is part of everyone's daily routine. It keeps us connected at all times, because it is the biggest channel of communication with other people. The connection through audio can be direct or indirect, but it reaches the brain and conveys information about the emotional state of the nearby person [3]. As machine learning research progresses rapidly, applications in speech recognition have also grown fast; deep learning has enhanced speech recognition for emotions and yields better results than previously used techniques. Nassif et al. [4] compared many years of work on speech recognition, covering nearly 174 papers. The authors suggested using the linear predictive coding (LPC) method for feature extraction rather than the MFCC methods suggested by many previous authors; they also suggested using the hidden Markov model (HMM) and Gaussian mixture model (GMM) for efficient functioning and better results. Speech recognition can also be performed using recurrent neural networks (RNN) with deep learning and with LSTM (long short-term memory) techniques. Anjali I.P. et al. [5] emphasized that speech recognition is changing much in how humans interact with many computing-based devices. The



device introduced there gave an accuracy of 90% for two languages [5]; such devices also help soldiers in the military communicate with foreign civilians. Albu et al. [6] described several neural network approaches for children's emotion recognition. Support vector machines (SVMs), radial basis function (RBF) networks, extreme learning machines (ELMs), probabilistic neural networks (PNNs), and their variants were tested on recorded speech signals and face-detected images. For the speech signal, the mel-frequency cepstral coefficients (MFCCs) and other parameters were computed together with their mean and standard deviation to obtain the feature vector for the neural network input [6]. For images, the input parameters for emotion detection consisted of several distances computed between facial features using the spatial coordinates of the eyes, eyebrows, and lips. The authors investigated the robustness of the classification rate to noise on both speech and image signals. Benkerzaz et al. [7] reviewed an architecture for an automatic speech recognition system and discussed some problems and the solutions given by other authors in previous papers. According to the authors, neural networks are the best way to deal with users' voices: they allow devices to interact accordingly based on computer vision techniques and establish good communication between the computerized machine and the human being. Saito et al. [8] proposed a solution for high-quality parametric vocal synthesis based on deep neural networks, introducing the use of a discriminator and generator for anti-spoofing and deceiving the samples. Simmonet et al. proposed systems for improving language understanding in automatic speech recognition; these detect errors by enriching the set of semantic tags with error tags. Singh et al. [9] reported that the accuracy of emotion recognition reached 71% after applying a CNN model. This type of system also distinguishes easily between male and female voices [9]; it helps in understanding the mood of the user and allows music players to recommend sound according to that mood. Voice can reveal the excitement level of the user and can also be applied to various e-commerce sites.

3 Speech Recognition and Its Various Factors

It is clear that many emotions and factors can be judged from the speech of a human being, and research has now made devices that understand human language and convert it into an automated voice-command system. The main factors are the following:

i. Speech Processing

Speech processing is the processing of signals and the deep study of signals for useful data. Signals are mostly represented as digital signals, and their processing is known as digital signal processing. It mainly covers the detailing of input signals, their capture and storage in memory, and their transfer and final output. Speech recognition is the input to speech processing, and speech synthesis is the final output.

ii. Speech Recognition

Speech recognition is the process responsible for translating spoken words into computer text. It is an accurate approach that understands human language and recognizes speech and voice in the system. It has several uses: converting speech to text, dialing, controlling domestic appliances, data entry, detecting emotions, setting a mood, etc.

iii. Speech Identification

Speech identification refers to speaker identification: whenever an input is fed in, it must be matched against the already stored registered speakers to identify the perfect or closest match for the sound. Speaker models are stored in the system in advance, and the fed voice is then analyzed; the likelihood of a match can be judged against all the stored models.

iv. Speech Verification

Speech verification refers to speaker verification: whenever a speaker's identity is to be judged, the outcome is either rejection or acceptance. Acceptance is the condition when the input matches a model with a score above the set threshold. In some cases, an 'open set' can also be identified, which means no outright rejection and partial acceptance.

v. Speech Synthesis

Speech synthesis is an intermediate result of processing human speech input, used as further input to hardware and software systems. These are intelligent systems because they make the system easier to understand and judge. They are useful for applications in education, telecom, and multimedia systems, as well as for blind and handicapped persons.

vi. Speech Signals

As soon as a human speaks, the voice is converted into electrical energy, i.e., current and voltage, which is used for speech processing in the form of speech signals. These signals carry information and are very useful for devices in understanding human emotions and sentiments.

vii. Speech Emotional State

Acoustic speech signals are used to find the mood of a human by analyzing cognitive attributes. Observations are made on the human voice, and different patterns related to it are extracted using both the verbal and non-verbal parts of speech. Different moods such as happiness, sadness, anger, neutrality, boredom, fear, stress, and other sentiments can be analyzed based on the speech signals.

viii. Speech Language Recognition

Language recognition determines the speech content and the language used, but if languages are similar and correlated, such



languages are difficult to determine. Applications of this kind are used in speech conversion systems, language translation, multilingual speech acceptance systems, and various retrieval systems.

ix. Speech Accent Recognition

This is a more difficult task than language recognition, as a regional accent is harder to determine than a correlated language. Accent recognition finds the speaker's regional language origin and background and therefore adjusts the features used for voice recognition to the regional origin of the language. Speakers with diverse accents utter some words very differently, and models are trained for each regional accent, e.g., for constantly varying particular phones, and for domestic appliances as well.

x. Speech Age Recognition

The age of a human can also be judged from the voice. Age recognition from the speaker's voice is the method of approximating the speaker's age group (kindergarten, primary, junior, adult, trainee, etc.) using the uttered speech signals. Many applications impose age restrictions for security purposes, and age recognition can be used there.

xi. Speech Gender Recognition

These systems are very simple, as they give a binary response: either male or female. They are highly useful whenever applications and requirements are restricted to a single gender, where gender recognition becomes a very important component.

4 Methodology for Speech Emotion Recognition

Speech recognition is not a simple task; it is a tough process because of the variation in human voices when converting them into acoustic, electric, and electronic changes. Expressions and actions say and mean a lot. Speech emotion concerns pitch variations, frequency, intensities, and spectral energies [12]. Speech emotion detection is an active research area in computer vision these days; based on CNNs, it uses different modules for emotion recognition, and various feature extraction and classification techniques are applied to distinguish standard emotions such as happiness, surprise, fear, anger, neutrality, and sadness. A dataset such as RAVDESS is used with the LIBROSA package in Python, from which speech samples and some characteristics are extracted; classification performance is based on the extracted characteristics, and finally the emotion of the speech signal is determined. As already noted, speech recognition converts speech or a received voice into words by means of a computer program, whose output is then recognized as an



input for the intended system. Speech recognition is nowadays widely used in many applications of daily routine. It includes the following steps:

i. Speech Analysis

This first and foremost step involves extracting pieces of information from the speech. It is necessary to establish in advance why speech analysis is required. Some factors should be considered beforehand: the purpose or intention of the speech, how the speech relates to the audience, the relevance and effect of the speech, its contents, its force, its results, and its type (positive, negative, or neutral). All must be judged and kept on record.

ii. Audio Data Preprocessing

For recognition purposes, formatting of the audio sample is quite important before it is used for speech emotion recognition. Here, reformatting means removing invalid, extraneous, and irrelevant noise from the audio sample by re-sampling the audio signal. Only then can such samples be used for any kind of recognition.

iii. Transformation of Preprocessed Data

Transformation of the data is required before a machine learning algorithm is applied, and it is done according to knowledge of the problem. To recognize speech emotions through signals, the data transformation is tailored to the machine learning algorithms used for feature extraction from the audio samples, during which several features are recognized.

iv. Speech Feature Extraction

This step is important and considered the heart of the recognition system, because the identity of the speaker's speech is established here. The voice input is converted into speech signals, and acoustic characteristics are identified. Speech signals carry many parameters, such as pitch, intensity, jitter, and energy, and appropriate feature extraction techniques can be chosen. Each extraction technique has different characteristics: linear predictive coefficients and linear predictive coding (LPC), mel-frequency cepstral coefficients (MFCCs), relative spectral filtering (RASTA), and probabilistic linear discriminant analysis (PLDA) can be used according to their advantages and disadvantages (Fig. 1).



Fig. 1 Steps for feature extraction for speech

The input human voice or speech arrives in continuous mode and is received by the analyzer. In the next step, the continuous input, now in the form of speech signals, is output as windowed frames [10]. The windowing process minimizes the disruptions that may occur at the start and end of each frame. The windowed frames are then passed through a discrete Fourier transform to convert them into a magnitude spectrum. To produce a mel-spectrum, spectral analysis is performed on a subjective frequency scale with fixed resolution, i.e., the mel-frequency scale. The resulting spectrum is passed through a logarithm and then an inverse discrete Fourier transform, which produces the final result, the mel-cepstrum. The mel-cepstrum contains the features extracted for speaker speech identification [11].
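This windowing-DFT-mel-log pipeline is what the librosa library performs internally when computing MFCCs; a minimal illustrative sketch (the file name is a hypothetical placeholder):

import librosa

# load an audio sample at its native sampling rate (hypothetical path)
y, sr = librosa.load('speech_sample.wav', sr=None)

# windowing, DFT, mel-scale filtering, log, and the inverse transform
# are all carried out inside this single call
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
print(mfcc.shape)   # (40, number of frames)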

v. Feature Classification and Modeling Techniques

There are many algorithms used for emotion classification from speech. These algorithms take input from a dataset of audio samples and, after observation, produce classified results based on the already trained models of the system. Each algorithm has its own style of producing results and is chosen according to requirements, weighing its advantages and disadvantages; it is therefore necessary to study and compare different classifiers.

a. Artificial Neural Network

An ANN is a method that works like the human brain, classifying data through learning. The ANN is the most commonly used algorithm: the output layer is computed at the end of each iteration, the error is transmitted back from the output layer toward the input layer, and the weights are adjusted according to the error margin, with each neuron's share of the error distributed to the preceding neurons in proportion to their weights.

b. Convolutional Neural Network

A CNN works with four main layers: the convolution layer, max pooling layer, activation layer, and dense layer. CNNs use different modules for speech emotion recognition as well, and several classifiers are employed to distinguish emotions such as happy, surprise, fear, anger, neutral,



sad, etc. A dataset such as RAVDESS is used for the speech emotion recognition system. In this system, using the LIBROSA package of Python, speech samples and various characteristics are extracted; classification performance is then calculated from the classification results, and the emotion of the speech signal can be determined.

c. Dynamic Time Warping

Whenever speech is input, it is often necessary to compare inputs that differ in time, speed, or speaker. Dynamic time warping (DTW), as its name suggests, handles such time-, speed-, and speaker-related differences. Signature matching, speaker recognition, shape matching, and speech recognition are some applications in which DTW is helpful.

d. Hidden Markov Model (HMM)

The HMM is widely used in speech recognition applications, as it is rich in mathematical structure and performs well. Observed data are taken into consideration, and hidden parameters are found. An HMM generates a sequence of tokens and can be thought of as a generalization of a mixture model in which the hidden variables that control the mixture component for each observation are related through a Markov process rather than being independent of each other.

e. Gaussian Mixture Model (GMM)

The GMM is a standard classification method for speech signals. It is accurate enough to produce good results even with very small amounts of input, and it is robust even when the data are noisy or corrupted; it also performs well when the population size is large.

f. Matching Techniques

A list of prerecorded words is matched against the results, either wholly or partially.

vi. Input and Datasets

Datasets and inputs are used here in two ways: first, live voice data of students or attendees recorded while attending a session, and second, a set of sample audio signals that can be used directly. The audio samples are used to train the model's ML algorithm and also serve as the test set for evaluating overall performance and recognition percentage (Fig. 2).



Fig. 2 Signal processing in machine learning for speech emotion

5 Steps and Results

The librosa and sklearn libraries, along with the RAVDESS dataset, are used to implement the speech emotion recognition model. An MLP classifier is implemented to recognize the sound files. Voice input is fed to the system, feature extraction is performed through speech analysis, and the RAVDESS dataset is used for training and testing of the implemented model. The RAVDESS dataset (Ryerson Audio-Visual Database of Emotional Speech and Song) consists of 24 actors (12 male and 12 female voices), nearly 25 GB of data, and 7356 files rated 10 times each by 247 individuals on emotional validity, intensity, and genuineness (Figs. 3, 4, and 5).

Fig. 3 Python code screenshot for accuracy of the recognizer as 82.8%

Fig. 4 Python code screenshot for accuracy of the recognizer as 72.4%

Fig. 5 Code screenshot for accuracy of the recognizer, confusion matrix

To measure the accuracy, the following steps are performed:

a. Import all the required Python libraries.
b. For feature extraction, use a feature extraction function that extracts features from the input audio/speech file. It requires four parameters: the file name and three parameters for mel, MFCC, and chroma.
c. Standard emotions such as sad, calm, neutral, angry, happy, bored, disgusted, etc., are detected with the help of the RAVDESS dataset; load the dataset before execution and give the function a proper name.
d. Split the dataset into 25% for testing and 75% for training.
e. Observe the extracted features and test them against the training set.
f. Use the classifier to identify emotions.
g. The model can then be used for checking purposes.
h. Finally, check the accuracy of the system and assess the fitness of the model.
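A condensed sketch of these steps is shown below. It assumes the RAVDESS files are unpacked under a local directory and that the conventional RAVDESS file-naming scheme applies, in which the third hyphen-separated field encodes the emotion; it is an illustration, not the authors' exact code.

import glob
import os
import numpy as np
import librosa
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def extract_feature(path, mfcc=True, chroma=True, mel=True):
    # step b: mel, MFCC, and chroma features averaged over time
    y, sr = librosa.load(path, sr=None)
    feats = []
    if mfcc:
        feats.append(np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40).T, axis=0))
    if chroma:
        S = np.abs(librosa.stft(y))
        feats.append(np.mean(librosa.feature.chroma_stft(S=S, sr=sr).T, axis=0))
    if mel:
        feats.append(np.mean(librosa.feature.melspectrogram(y=y, sr=sr).T, axis=0))
    return np.hstack(feats)

X, y = [], []
for f in glob.glob('ravdess/Actor_*/*.wav'):           # hypothetical local path
    X.append(extract_feature(f))
    y.append(os.path.basename(f).split('-')[2])        # emotion code, e.g. '05'

# step d: 75% training / 25% testing split
X_train, X_test, y_train, y_test = train_test_split(np.array(X), y, test_size=0.25)

# steps f-h: train the MLP classifier and report accuracy
model = MLPClassifier(hidden_layer_sizes=(300,), max_iter=500)
model.fit(X_train, y_train)
print('accuracy:', accuracy_score(y_test, model.predict(X_test)))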



6 Conclusion

The implemented model works well with different male and female voices and reaches 75–81% accuracy; the accuracy varies with the random input taken from the RAVDESS dataset. CNN models performed better here in producing results. The paper discussed various factors related to speech recognition and then the four stages of a speech recognition system: speech analysis, speech feature extraction, feature classification and modeling techniques, and matching techniques. Feature extraction techniques such as linear predictive coefficients, linear predictive coding, mel-frequency cepstrum, relative spectral filtering, and probabilistic linear discriminant analysis were also discussed, among which the mel-frequency cepstrum is the best fit.

References

1. Gaikwad SK, Yannawar P (2010) A review on speech recognition technique. Int J Comput Appl 10(3)
2. Bhabad SS, Kharate GK (2013) An overview of technical progress in speech recognition. Int J Adv Res Comput Sci Softw Eng 3(3)
3. Ghai W, Singh N (2015) Speech feature recognition techniques: a review. Int J Comput Appl 4(3):107–114
4. Nassif AB, Shahin I, Attili I, Azzeh M, Shaalan K (2019) Speech recognition using deep neural networks: a systematic review. IEEE Access 7
5. Anjali IP, Sherseena PM (2020) Speech recognition. IJERT 8(04). ISSN: 2278-0181
6. Albu F, Hagiescu D, Vladutu L, Puica M-A (2015) Neural network approaches for children's emotion recognition in intelligent learning applications. In: Europe's Journal of Psychology, proceedings of EDULEARN15
7. Benkerzaz S, Elmir Y, Dennai A (2019) A study on automatic speech recognition. JITR 10(3)
8. Saito Y, Takamichi S, Saruwatari H (2017) Training algorithm to deceive anti-spoofing verification for DNN-based speech synthesis. In: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). https://doi.org/10.1109/ICASSP.2017.7953088
9. Singh A, Srivastava KK, Murugan H (2020) Speech emotion recognition using convolutional neural network. IJPR 24. ISSN: 1475-7192
10. Narang S, Gupta D (2012) A literature review on automatic speech recognition. IJCSMC (0975-8887) 41(8)
11. https://en.wikipedia.org/wiki/Speech_recognition
12. https://towardsdatascience.com/building-a-speech-emotion-recognizer-using-python-4c1c7c89d713

Decision Support System Based on the ELECTRE Method

Olena Havrylenko, Kostiantyn Dergachov, Vladimir Pavlikov, Simeon Zhyla, Oleksandr Shmatko, Nikolay Ruzhentsev, Anatoliy Popov, Valerii Volosyuk, Eduard Tserne, Maksym Zaliskyi, Oleksandr Solomentsev, Ivan Ostroumov, Olha Sushchenko, Yuliya Averyanova, Nataliia Kuzmenko, Tatyana Nikitina, and Borys Kuznetsov

Abstract Decision support systems (DSS) are still of great use in the various domains of automation where artificial intelligence methods have no proper information base for learning or would require great effort. Among such tasks, choosing a training trajectory in intelligent tutoring systems and choosing a plan of repair or scientific research works in high schools may be considered the major ones. The subject of the paper is a modern information technology for DSS in the case of




multi-criteria choice that combines the advantages of methods based on pairwise comparison with the simplicity of methods based on reducing several criteria to one. The methods utilized in the research are system analysis, set theory, graph theory, and multi-criteria choice. The model based on the ELECTRE (ELimination Et Choix Traduisant la REalité, elimination and choice reflecting reality) method was improved by using a graphical interpretation of the concordance–discordance matrix values (ELECTRE GR). The weighted sum method is proposed for use in combination with the ELECTRE GR method to find a guaranteed solution. Experimental investigations of the proposed method on randomly generated data have shown that, in comparison with ELECTRE I, it gives a more limited set of dominating alternatives and reduces ineffective answers by a factor of five in the case of default values of the custom parameters. The information technology based on the proposed methods is described as a schema, which allows developing universal or special DSS.

Keywords Decision support system · Decision making · Multi-criteria choice · Outranking relation graph · Weighted sum method

1 Introduction

To choose the optimal solution from the area of compromises, it is necessary to substantiate the axiomatics and to formulate a single rule (optimization scheme) of decision making. Solving this problem requires additional information, which can be obtained by analyzing and formalizing the features of the system goals. It is necessary to develop approaches well adapted to the implementation of heuristic considerations regarding the peculiarities of specific systems. The considered class of systems, which spans different domains, can be described by the following characteristics: repeating decision-making events; a generated set of alternatives of cardinality up to 100 units; different scales and ranges of criteria estimation;



297

different types of criteria optimization; and weak inclusion of the decision-maker in the process, even though the process itself is clear enough. First of all, we can talk about information management systems with a planning module for repair, restoration, research, and production works in a big, complex enterprise or organization. Another promising innovation area is intelligent tutoring systems with automatic generation and choice of the next task for a student [1]. If we consider the design and research of complex technical systems, we find the problem of optimal composition of a device or prototype from different hardware units and its equipment with software [2, 3]. In transport navigation systems [4, 5], a great number of available trajectories may be generated, and a human is not able to choose the best one without some analytical support. Under constantly changing social, legislative, and financial conditions, the universality of decision-making techniques and their flexibility and adaptability to a changing situation of choice are of great importance. The information technology in an effective decision support system should allow the decision-maker to compose their own system of criteria and then organize the process in an open and clear way, but without a huge customization routine. The final choice in a DSS is usually the decision-maker's responsibility; thus, it is necessary to develop effective tools with understandable steps for the gradual restriction of the original alternative set. The formulation of the multi-criteria choice problem, as well as a review of the existing methods, is presented below. Then two decision-making models are proposed as the basis of the information technology. The models' experimental research results and conclusions are considered at the end of the paper.

2 Formulation of the Multi-criteria Choice Problem

Let $Za = \{za_1, za_2, \ldots, za_m\}$ be an unordered set of alternatives $za_i$, $i = 1, 2, \ldots, m$, $m \in [10, 100]$, obtained at the stage of alternatives generation in one of the described systems. $K = \{K_1, K_2, \ldots, K_n\}$ is the space of criteria, i.e., a family of sets of estimates $k_j(za)$ of the alternatives, measured on an ordinal, interval, or ratio scale. In addition, based on the decision-maker's idea of the priorities of the criteria, one can construct a mapping $K \to V$, where $V = \{v_1, v_2, \ldots, v_n\}$ is a set of weighting factors for the criteria importance. It is necessary to order the set of alternatives in order to identify a subset $Za_{bst} \subset Za$ of the best alternatives whose size satisfies the psychological capabilities of the decision-maker's analysis, i.e., $|Za_{bst}| \in [1, 4]$:

$$za_{bst} = \mathop{\mathrm{opt}}_{za \in Za} F[K(za), V], \qquad (1)$$

where $F[\cdot]$ is an operator of the objective function that takes into account $n$ objective functions representing, respectively, the maximization or minimization of estimates for each criterion: $k_j(Za) \to \max$ or $k_j(Za) \to \min$.


3 The Review of the Existing Methods

Analysis of the methods applicable to solving such problems [6, 7] allows considering four main approaches:

1. Unification (aggregation) of many objective functions into a single function;
2. Consistent identification of preferences simultaneously with the study of the permissible set of alternatives;
3. Finding a partial ordering for the existing set of alternatives;
4. Maximum possible reduction of uncertainty and incomparability.

The first of the listed approaches involves the introduction of a single objective function (or utility function) [8, 9]. In many cases, it is relatively easy to express the assessment for each individual criterion in the form of an assessment reduced to a single (general) scale (usually monetary or other traditional units of measurement are used for this). If the estimates obtained can be considered additive, the $n$ objective functions can be replaced with one of the following form:

$$F_p(za) = v_1 \cdot k_1(za) + v_2 \cdot k_2(za) + \cdots + v_n \cdot k_n(za). \qquad (2)$$

The class of functions that can be used to aggregate several objective functions into one is not limited to additive expressions like (2); they can have different functional forms. When using the second approach (also called the pairwise comparison method) [10] in linear programming problems with many objective functions, extremal solutions are sought for each objective function separately, after which, based on an appeal to the decision-maker, a payoff matrix of a "compromise solution" is constructed. In this case, at the beginning of solving the problem, the nature of the desired best action is not clear, and the procedure consists in obtaining the necessary information about the decision-maker's preferences for each pair of alternatives. The third of the listed approaches, based on the construction of a binary relation $R$, which can be called an outranking (superiority) relation, can be simpler and more illustrative. This relation should be considered not as a complete reflection of the decision-maker's preference system, but only as a representation of the part of their preferences that can be determined from the available data. Thus, using the relation $R$, one can simplify the problem of finding the best element by extracting from $Za$ a subset containing it. To select this subset, we introduce the concept of the kernel $N_c$ of the contour-free graph $G_c$ corresponding to $R$, i.e., in this graph an arc from $za$ to $za'$ exists if and only if $za\,R\,za'$. The method based on these ideas is called ELECTRE [7]. The advantage of this method is the ability to analyze not only quantitative but also qualitative criteria (whose values are given on an ordinal scale). The productivity of this method has been shown for solving problems in a wide area of domains [11]. For the further comparison of objects that are in the core of the graph $G_c$ corresponding to the relation $R$, methods of reducing uncertainty and incomparability (the fourth approach) can be used, including the construction of a certain formalized model [6].


The use of such a method requires a quite complicated customization procedure by the decision-maker, and the effect of its implementation might not be greater than obtaining a single solution if the graph kernel obtained by the previous method contains no more than 3–4 alternatives, which are suitable for effective analysis by the decision-maker given the cognitive capabilities of the human brain.

4 The Decision-Making Models Development

Before constructing the models mentioned above, the first step is to preliminarily reduce the initial set, excluding alternatives that are obviously ineffective because one criterion falls outside the range specified by the decision-maker:

$$Za_0 \subset Za, \quad Za_0 = \left\{ za \;\middle|\; K_j \in K^0,\; \left(k_j(za) < k_j^{\min}\right) \vee \left(k_j(za) > k_j^{\max}\right) \right\}, \qquad (3)$$

where $K^0 \subset K$, $|K^0| \le |K|$ is a subset of criteria selected by the decision-maker as the base system for decision making, $i = 1, 2, \ldots, r$, $r = |K^0|$.
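As a minimal illustration (not from the authors' prototype), the preliminary screening in (3) can be expressed as a filter over the alternatives; the attribute names and range dictionary below are hypothetical:

```python
def screen_alternatives(alternatives, ranges):
    """Drop alternatives with any base-criterion estimate outside
    the decision-maker's range, following Eq. (3).

    alternatives: dict name -> {criterion: estimate}
    ranges: dict criterion -> (k_min, k_max) over the base subset K0
    """
    kept = {}
    for name, estimates in alternatives.items():
        ineffective = any(
            not (lo <= estimates[c] <= hi)
            for c, (lo, hi) in ranges.items()
        )
        if not ineffective:
            kept[name] = estimates
    return kept

# Hypothetical example: cost must stay within [10, 100], quality within [3, 5].
za = {"za1": {"cost": 50, "quality": 4}, "za2": {"cost": 120, "quality": 5}}
print(screen_alternatives(za, {"cost": (10, 100), "quality": (3, 5)}))  # keeps za1
```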

4.1 A Model Based on the Outranking Relation Graph Construction and Processing

In order to construct an outranking relation, first of all, the set of available criteria $K^0$ for each pair of alternatives $za_i, za_j \in Za_0$, $i, j = 1, 2, \ldots, m$ must be divided into three subsets: $K^{+}(za_i, za_j)$, where $za_i$ is preferable to $za_j$; $K^{-}(za_i, za_j)$, where $za_j$ is preferable to $za_i$; and $K^{=}(za_i, za_j)$, where $za_i$ is equivalent to $za_j$. When forming them, it is necessary to take into account the fact that for a pair of alternatives $(za_i, za_j)$ there are concepts such as a "small difference" and a "large difference" in the estimates for each criterion, which does not make it possible to unambiguously assign the criterion to one or another subset of $K^0$. This question can be solved using fuzzy logic techniques [12]; for such a procedure, the DSS needs information about membership functions, whose construction may require an unreasonable amount of time and effort not only from the decision-maker but also from other specialists. Therefore, the approach adopted in the ELECTRE method seems more rational [13, 14]. It takes into account information about the rigidity of comparing alternatives for each criterion. For each criterion scale, the decision-maker is able to answer the question: "What difference between the assessments of alternatives can be considered a threshold of incomparability $\delta k$?" For example, the decision-maker can indicate such a threshold $\delta k_l$ for the cost estimate.

The subset of equivalent criteria is then formed as

$$K^{=}(za_i, za_j) = \left\{ K_l \;\middle|\; \left| k_l(za_i) - k_l(za_j) \right| < \delta k_l \right\}, \qquad (6)$$

where $i, j = 1, 2, \ldots, m$, $i > j$; the subsets $K^{+}$ and $K^{-}$ are formed analogously from the sign of the difference when it exceeds the threshold. We assume that, based on the known mapping $K^0 \to V$, one can determine the relative importance of each of these three subsets. We will assume that for each subset a new criterion is formed that unites all the criteria included in this subset. Various modifications of the ELECTRE method [13–15] suggest introducing a vector of weights $P_{ij} = (P_{ij}^{+}, P_{ij}^{-}, P_{ij}^{=})$, $P_{ij} \in \mathbb{R}^3$, $i, j = 1, 2, \ldots, m$, reflecting, respectively, the fractions of the subsets $K^{+}$, $K^{-}$, $K^{=}$ in the set $K^0$, taking into account the mapping $K^0 \to V$:

$$P_{ij}^{+} = \frac{1}{\sum_{k=1}^{n} v_k} \sum_{k=1}^{n} v_k \Big|_{K_k \in K^{+}(za_i, za_j)}; \quad P_{ij}^{-} = \frac{1}{\sum_{k=1}^{n} v_k} \sum_{k=1}^{n} v_k \Big|_{K_k \in K^{-}(za_i, za_j)}; \quad P_{ij}^{=} = \frac{1}{\sum_{k=1}^{n} v_k} \sum_{k=1}^{n} v_k \Big|_{K_k \in K^{=}(za_i, za_j)}, \qquad (7)$$

where $i, j = 1, 2, \ldots, m$ and $v_k$ is the weight of the criterion $K_k$, $k = 1, 2, \ldots, n$. It would be much more convenient, from the decision-maker's point of view, if the procedure of preference customization were defined in terms that they understand. To obtain such a procedure, consider the entire set of possible combinations of values $P_{ij}^{+}, P_{ij}^{-}, P_{ij}^{=}$, which represents a fragment of the plane $P_{ij}^{+} + P_{ij}^{-} + P_{ij}^{=} = 1$ in three-dimensional Euclidean space, shown in Fig. 1.


Fig. 1 Plane fragment in two-dimensional projection: Ω, Ωe (real and modeled areas), δp (threshold of incomparability), δp1 (threshold of equality)

Therefore, we will define the region $\Omega_e$ (see Fig. 1) using the following hypothesis based on empirical reasoning: up to a certain threshold $\delta p$ of incomparability, any difference $P_{ij}^{+} - P_{ij}^{-}$ is essential, and below this threshold it is indifferent. For a better correspondence of $\Omega_e$ to the region $\Omega$, we use one more threshold $\delta p_1$ and obtain a formal rule.

Rule 1. For every vector $P_{ij} = (P_{ij}^{+}, P_{ij}^{-}, P_{ij}^{=})$: if $P_{ij}^{=} \le 1 - \delta p_1$ and $P_{ij}^{+} - P_{ij}^{-} > \delta p$, then $za_i\,R\,za_j$; if $P_{ij}^{=} \le 1 - \delta p_1$ and $P_{ij}^{+} - P_{ij}^{-} < -\delta p$, then $za_j\,R\,za_i$; for all other vectors $P_{ij}$, there is no relation.

Note that with such a permissive rule, the time complexity of the method, named ELECTRE GR, is halved in comparison with other modifications of the ELECTRE method, owing to the possibility of comparing each pair of alternatives only once, i.e., performing comparisons only for pairs with $i > j$. When the relation $R$ is constructed in this way, it is possible to form the graph $G$. If the graph contains no contours, then its kernel $N$, i.e., a subset that simultaneously satisfies the conditions of external and internal stability, is unique.
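A minimal Python sketch of this pairwise comparison step, assuming all criteria are to be maximized, per-criterion incomparability thresholds δk, and the two custom thresholds δp and δp1 (all names are illustrative, not the authors' prototype):

```python
def outrank(za_i, za_j, weights, dk, dp=0.2, dp1=0.6):
    """Compare one pair of alternatives and return 'i', 'j' or None
    according to Rule 1 of ELECTRE GR.

    za_i, za_j: dicts criterion -> estimate (criteria to maximize here)
    weights: dict criterion -> importance v_k
    dk: dict criterion -> incomparability threshold for that scale
    """
    total = sum(weights.values())
    p_plus = p_minus = p_eq = 0.0
    for c, v in weights.items():
        diff = za_i[c] - za_j[c]
        if abs(diff) < dk[c]:       # "small difference": criteria equivalent (K=)
            p_eq += v / total
        elif diff > 0:              # za_i preferable by this criterion (K+)
            p_plus += v / total
        else:                       # za_j preferable by this criterion (K-)
            p_minus += v / total
    if p_eq <= 1 - dp1:             # equality share below the equality threshold
        if p_plus - p_minus > dp:   # essential positive difference
            return "i"              # za_i R za_j
        if p_plus - p_minus < -dp:
            return "j"              # za_j R za_i
    return None                     # no relation for this pair
```

Comparing only pairs with i > j, as the authors note, halves the number of comparisons; the kernel of the resulting graph can then be extracted with standard external/internal stability checks.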

4.2 Model Based on the Construction of a Generalized Objective Function

One of the most common approaches to solving the decision-making problem is based on reducing a multi-criteria problem to a single-criterion one. In this case, the formation of a compromise-solution search scheme is associated with the choice of the type of utility function $F[k_1(za), k_2(za), \ldots, k_n(za)]$, $za \in Za_0$.


The main problem here is to determine the type of utility functions for particular criteria, which allows mapping the criterion space $K^0$ to some normed space $K'$:

$$k_i'(za) = \begin{cases} \dfrac{k_i(za)}{k_i(za)^{\max} - k_i(za)^{\min}}, & \text{if } k_i \to \min; \\[1ex] 1 - \dfrac{k_i(za)}{k_i(za)^{\max} - k_i(za)^{\min}}, & \text{if } k_i \to \max, \end{cases} \quad i = 1, 2, \ldots, r. \qquad (8)$$

Despite the versatility that can be obtained from the use of the above functions, in this work it is proposed to use the simplest additive utility function (2) for the DSS implementation.
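A short sketch of this generalized objective function under the stated assumptions (illustrative names; the normalization mirrors (8) and the aggregation mirrors (2)):

```python
def normalize(values, minimize):
    """Map raw estimates of one criterion to the normed space K' per Eq. (8)."""
    span = max(values) - min(values) or 1.0   # guard against a zero span
    return [v / span if minimize else 1.0 - v / span for v in values]

def weighted_sum(estimates, weights):
    """Additive utility F_p(za) per Eq. (2) for one alternative."""
    return sum(v * k for v, k in zip(weights, estimates))

# Hypothetical alternative with two normalized criterion estimates.
print(weighted_sum([0.8, 0.3], [0.6, 0.4]))
```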

5 Experimental Research

To investigate the effectiveness of the ELECTRE GR method based on Rule 1 given above, a system prototype was developed in Python using a library implementing modifications of the ELECTRE methods and the WSM. Among the ELECTRE methods, ELECTRE I was chosen for comparison, and 290 random datasets were generated with 3–19 alternatives and 3–19 criteria. The histogram of the cardinality of the obtained set N in the research carried out is represented in Fig. 2. Firstly, it was found that the count of kernel alternatives for the ELECTRE GR method varies from 0 to 9, whereas ELECTRE I gives a spread from 1 to 19; the histogram in Fig. 2 shows that ELECTRE GR gives a set of 1–3 alternatives as the answer with greater probability. Secondly, it was calculated that the share of non-effective answers, i.e., empty sets or sets containing all the source alternatives, is nearly 13% for ELECTRE GR and over 60% for ELECTRE I in the case of the default thresholds.

Fig. 2 Histogram of the kernel cardinality frequency


6 Conclusion and Future Work

As a result of sequential modeling of the components of automated decision-making support, a transition was made from the set-theoretic formulation of the problem to simulation and mathematical models of its solution. Different domains contain problems of analysis and choice among a large enough diversity of variants, where the estimation criteria are expressed in different scales: absolute, interval, relational. The formulated problem statement requires the combined use of two methods: the method of multi-criteria choice based on the construction of the outranking relation graph (ELECTRE GR), and the method of forming a generalized utility function. The construction of a model according to the first of the listed methods was improved in the comparison procedure for alternatives on criteria in different scales and in the procedure for forming a partial ordering of the set of alternatives. A geometric interpretation of the pairwise comparison results, simpler to understand and to embed in the user interface, was proposed. With minimal customization, the ELECTRE GR method gives more effective results than other modifications, as was shown on randomly generated sets of alternatives. The specifics of the second method's application lie in the use of information about estimates on the criteria not chosen by the decision-maker as basic, as well as in providing additional information to the decision-maker for the analysis of a subset of non-dominated alternatives. The proposed choice model is open to decision-makers and requires a minimum of information reflecting the choice strategy. It also allows implementing an automatic mode in the DSS.

References

1. Martínez Bastida JP, Havrylenko O, Chukhray A (2018) Developing a self-regulation environment in an open learning model with higher fidelity assessment. Commun Comput Inf Sci 826:112–131. https://doi.org/10.1007/978-3-319-76168-8_6
2. Kharchenko V et al (2013) Green computing and communications in critical application domains. In: Challenges and solutions in the international conference on digital technologies, IEEE, Zilina, Slovakia, pp 191–197. https://doi.org/10.1109/DT.2013.6566310
3. Kulik A, Dergachev K, Pasichnik S, Nemshilov Y, Filippovich E (2021) Algorithms for control of longitudinal motion of a two-wheel experimental sample. Radioelectron Comput Syst 2:16–30. https://doi.org/10.32620/reks.2021.2.02
4. Kulik A, Dergachev K (2016) Intelligent transport systems in aerospace engineering. Studies in systems, decision and control, vol 32, pp 243–303. ISBN: 978-3-319-19150-8
5. Ostroumov I et al (2021) Modelling and simulation of DME navigation global service volume. Adv Space Res 68(8):3495–3507. https://doi.org/10.1016/j.asr.2021.06.027
6. Trakhtengerts EA (2003) Komp'yuternyye sistemy podderzhki prinyatiya upravlencheskikh resheniy [Computer systems for managerial decision-making support]. Control Sci 1:13–28
7. Munda G (2016) Multiple criteria decision analysis and sustainable development. In: Multiple-criteria decision analysis: state of the art surveys. Springer, New York, pp 1235–1267. https://doi.org/10.1007/978-1-4939-3094-4_27


8. Sorooshian S, Parsia Y (2019) Modified weighted sum method for decisions with altered sources of information. Math Stat 7(3):57–60. https://doi.org/10.13189/MS.2019.070301
9. Marler RT, Arora JS (2019) The weighted sum method for multi-objective optimization: new insights. Struct Multidiscip Optim 41(6):853–862
10. Saaty TL (2008) Decision making with the analytic hierarchy process. Int J Serv Sci 1(1):83–98. https://doi.org/10.1504/IJSSCI.2008.017590
11. Martin KG, Brandt J (2016) ELECTRE: a comprehensive literature review on methodologies and applications. Eur J Oper Res 250(1):1–29. https://doi.org/10.1016/j.ejor.2015.07.019
12. Komsiyah S, Wongso R, Pratiwi SW (2019) Applications of the fuzzy ELECTRE method for decision support systems of cement vendor selection. Procedia Comput Sci 157:479–488. https://doi.org/10.1016/j.procs.2019.09.003
13. Figueira J, Mousseau V, Roy B (2005) ELECTRE methods. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Springer, Boston, Dordrecht, London, pp 133–162
14. Marzouk SMM (2011) ELECTRE III model for value engineering applications. Autom Constr 20:596–600. https://doi.org/10.1016/j.autcon.2010.11.026
15. Mary S, Suganya G (2016) Multi-criteria decision making using ELECTRES. Circuits Syst 7(6):1008–1020. https://doi.org/10.4236/cs.2016.76085

An Improved and Efficient YOLOv4 Method for Object Detection in Video Streaming

Javid Hussain, Boppuru Rudra Prathap, and Arpit Sharma

Abstract As object detection has gained popularity in recent years, many object detection algorithms are available in today's world, yet an algorithm with better accuracy and better speed is considered vital for critical applications. Therefore, in this article, the YOLOv4 object detection algorithm is combined with improved and efficient inference methods. The state-of-the-art YOLOv4 algorithm is 12% faster than its previous version, YOLOv3, and twice as fast as the EfficientDet algorithm on a Tesla V100 GPU. However, the algorithm lacks performance on an average machine and on single-board machines like the Jetson Nano and Jetson TX2. In this research, we examine inference performance in several frameworks and propose a framework that uses the hardware effectively to optimize the network while consuming less than 30% of the hardware of other frameworks.

Keywords Object detection · YOLOv4 · Optimized inferencing

J. Hussain (B) · B. R. Prathap · A. Sharma: Department of Computer Science and Engineering, Christ (Deemed to be University), Bangalore, India; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_27

1 Introduction

Object detection is a computer vision and image processing technology used to detect the presence of significant elements in digital images and videos. It is used in various computer vision applications, including photo retrieval and video monitoring. Object identification may be considered a distinct area within machine learning that mostly works with picture and video streams. Deep learning detects an item in photos or videos using several algorithms such as:


(1) you only look once (YOLO), (2) spatial pyramid pooling (SPP-net), (3) fast R-CNN, (4) region-based convolutional neural networks (R-CNN), (5) histogram of oriented gradients (HOG), (6) single shot detector (SSD), (7) region-based fully convolutional network (R-FCN), and (8) faster R-CNN. These object detection algorithms are accurate, but the speed of inference has always been a problem. Face detection and tracking, vehicle detection, pedestrian identification, and many more are examples of well-known object detection applications. YOLO (You Only Look Once) is considered state-of-the-art among object detection algorithms. Joseph Redmon created the algorithm in 2015, and it proved to be game-changing. YOLO is faster and more accurate than faster R-CNN and single shot detector models. YOLO performs object recognition as a regression problem, and hence the category probabilities of the detected images are reported. YOLO's real-time inference approach is built on convolutional neural networks. YOLO does have one important shortcoming, even though it is accurate on a typical system used for day-to-day operations: object detection using YOLO inference does not reach 18–20 frames per second even on a decent GPU at a minimum of fifty percent utilization. On a single-board system like the Jetson Nano or Jetson TX2, the performance drops dramatically due to the high GPU consumption. Researchers have applied several object tracking algorithms, such as deep SORT, the Kalman filter, track-RNN, and others, to overcome this problem for improved performance. The tracking algorithms are not hardware demanding, but they lack efficiency in a real-time dynamic environment. This article discusses the YOLO algorithm being tested and optimized on GPU hardware. The method has been compared with various inference tools such as OpenCV, Darknet CPU, and Darknet GPU, and the algorithm has been modified to run with the most optimal performance on the GPU hardware. The article is framed as follows. Section 2 reviews related work on YOLOv4 from different researchers. Section 3 explains the proposed methodology and experimental work. Section 4 gives an understanding of the results and discussion. The conclusion follows.

2 Related Work

Research by Bochkovskiy et al. [1] proposed a new model of YOLO, namely YOLOv4, and discussed the prediction results using TensorRT; real-time detection was achieved at 30 fps. This model is based on the Darknet framework, which uses a convolutional neural network as a backbone, and is a one-stage anchor-based detector. Ulker et al. [2] conclude that the overall performance on various machines and edge devices has increased exponentially with the use of TensorRT, and in frameworks like PyTorch and TensorFlow it is noticed to be higher as they are built upon the NVIDIA CUDA framework; the authors also compared the results with various other optimized inference setups using other object detection algorithms.


In the literature of Ildar [3], the author explains the necessity of fast inference on single-board devices, which have much scope for future application since they are affordable and consume less space than a regular machine; the author mainly focuses on Jetson devices and the Raspberry Pi family. According to Howell et al. [4], efficiency and throughput can be increased by the use of a GPU and further increased with the help of TensorRT in YOLOv4-tiny. In the research by Rais and Munir [5] on vehicle speed estimation, the authors discuss the use of TensorRT, YOLO, and Kalman filters to make road rules more practical without a human presence. In the research proposed by Jeong et al. [6], a parallelization methodology for real-time inference on single-board machines like Jetson devices is compared using TensorRT and non-TensorRT frameworks for performance measures. In the research proposed by Howell et al. [4], the selection of targeted cells was performed using object detection algorithms, and the time difference from using TensorRT in YOLOv4-tiny was discussed. According to the research proposed by Li et al. [7], six types of YOLOv4 algorithms were trained and deployed for various weather conditions to detect various traffic conditions using edge devices; the research also shows that TensorRT takes 10 ms detection time in YOLOv4. In the research demonstrated by Cai et al. [8], a new variation of YOLOv4 was proposed for autonomous vehicles; since these vehicles need to make accurate decisions at high speed, the proposed algorithm achieves higher accuracy than standard YOLOv4 and is named YOLOv4-5D [7]. To maximize the model's generalizability and prevent overfitting when training on a custom dataset, Ivai-Kos approached their research by training the model on a similar high-volume public dataset before fine-tuning on a bespoke, low-volume dataset. The research conducted by Li et al. [7] introduces a multi-task joint framework that incorporates person detection, feature extraction, and identification comparison for real-time person search; to reach real time, each part's accuracy and speed are tuned separately. In the research by Bochkovskiy et al. [9], YOLOv4 was built and its performance results are shown. In the research proposed by Sung and Yu [10], the authors detail the importance of YOLOv4 in real-time license plate recognition systems, as the algorithm is much faster than other algorithms. According to Dai et al. [11], the use of TensorRT in field robots increases real-time detection capability and performance. In the research proposed by Kumar and Punitha [12], applications of YOLOv4 and YOLOv3 and improvements to these systems are discussed. In the research discussed by Tan et al. [13], an improved and efficient YOLOv4 algorithm was proposed for target detection by UAVs. According to Mahrishi et al. [14], the YOLOv4 algorithm can be used for video indexing with various parameters like F1-score, precision, accuracy, and recall; this is a new approach discussed in the research, which has vast importance. The application of higher frames-per-second inference plays a vital role in the modern world.
In addition, unlike the 1950s to 1980s, when neural networks were abandoned owing to hardware restrictions, hardware resources have been at their peak since the 2000s, and data-gathering sources have also reached their peak since data is created every minute. As a result, the implementation of hardware-intensive or data-intensive algorithms no longer struggles to provide results as it did 50 years ago.

3 Methodology

In this section, we discuss the comparison of inference across different frameworks by defining the workflow of data collection, data preprocessing, model selection, training, and deployment on the dataset. The approach used in this research is YOLOv4, since it is fast and accurate compared to other models. Figure 1 shows the image classification flow. The standard development process for a deep learning-based object detector includes five phases: image data gathering, object detection model selection, data preprocessing, model training, and model evaluation. This study employs a similar development technique, but with additional evaluation and refining procedures. The basic purpose of this paper is to train an object detector for benchmarking the various methods of inference.

Fig. 1 Image classification model flow


3.1 Data Preprocessing

Data preprocessing is critical for data analytics. Outliers and noise are common because data collection is not tightly regulated, and analyzing such data severely harms the accuracy of the pipeline. Occlusions and overexposure should be considered in machine vision. Several data preparation approaches are used to filter the dataset so that the model does not waste computing resources on data that is not needed. The dataset must be created in accordance with the chosen model's framework requirements and input size: the photographs must meet the model's input size criteria, and the input parameters have been tuned for model training for better accuracy.

3.2 YOLO (You Only Look Once)

YOLO stands for "You Only Look Once." It is an algorithm for accurately identifying specific objects in a given snapshot in real time. Object detection in YOLO is performed as a regression problem, and the class probabilities of the identified images are delivered. Convolutional neural networks (CNN) are utilized in the YOLO method to find objects in real time. As the name suggests, the procedure requires just one forward propagation through a neural network to detect objects. This signifies that a single algorithm run is employed to make predictions over the entire picture: the CNN predicts multiple anchor boxes and class probabilities at the same time.

3.3 Training and Implementation

The data for training YOLOv4 consists of images containing humans: around 15,000 for training, 4370 for validation, and 5000 for testing, where each image has on average 23 humans in the scene. Each image has been annotated with two classes, namely "head" for the annotation of the human head and "person" for the annotation of the complete person in the image. These images are then converted into YOLO format for training. Figure 2 shows the training graph, where training was done for 6000 epochs. Figure 3 shows the mean average precision obtained per class: "head" was 82.10% and "person" was 80.40%; in total, the overall mAP was 81.25% for the output weights file. The weights file was taken from epoch 4100 to prevent overfitting.
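For illustration, converting a pixel-space bounding box annotation into the YOLO training format (class index plus box center and size normalized by the image dimensions) can be sketched as below; the frame size and class index are hypothetical, not tied to the study's dataset:

```python
def to_yolo(box, img_w, img_h, class_id):
    """Convert (x_min, y_min, x_max, y_max) in pixels to the YOLO
    line format: class x_center y_center width height (all relative)."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2.0 / img_w
    yc = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical 1920x1080 frame with one 'head' (class 0) annotation.
print(to_yolo((640, 200, 720, 300), 1920, 1080, 0))
```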


Fig. 2 Training graph

Fig. 3 mAP (mean average precision) of trained model

The trained model of YOLOv4 was obtained, and the results were compared using four methods: (1) OpenCV, (2) TensorRT, (3) Darknet with GPU, and (4) Darknet with CPU (Figs. 4 and 5).
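As an illustrative sketch of the OpenCV-based variant (assuming OpenCV 4.4+ with the DNN module; the config, weights, and video file names are placeholders, not the study's actual assets):

```python
import cv2

# Load the trained Darknet model through OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture("input_1080p.mp4")  # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Class ids, confidences, and boxes for one frame.
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4,
                                            nmsThreshold=0.4)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```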


Fig. 4 Video stream inferencing using Tensor RT

Fig. 5 Video stream inferencing using Darknet

4 Result and Discussion

The models were tested on the same dataset, trained with YOLOv4 at a resolution of 416 × 416, on a notebook with an i5-11300H, an NVIDIA GeForce RTX 3060 with 6 GB GDDR6, and 16 GB DDR4 RAM. Inference was performed on video with a resolution of 1920 × 1080, with the Darknet GPU model and the TensorRT-optimized model averaging 29 and 28 frames per second, respectively, and the OpenCV and Darknet CPU models averaging 3 and 2 frames per second, respectively.


Fig. 6 Average frames per second inferencing

Although the Darknet GPU and TensorRT models were approximately equal in performance, the GPU usage of these frameworks varied greatly, with Darknet GPU at 58% GPU consumption and TensorRT at 27%. There was, on average, more than a 30% difference in GPU consumption between these models, because of which it would be difficult to deploy Darknet GPU model inference in hardware-critical applications. Figure 6 shows the average frames per second while inferencing on a 1920 × 1080 video file using YOLOv4 in various frameworks, and Fig. 7 shows the actual frames per second in each frame. The above result is obtained by quantizing the YOLOv4 algorithm into INT8 form to perform high-speed inference, which is done by TensorRT and enables parallel processing on the GPU, resulting in low latency and high throughput. This is achieved by the following methods built into TensorRT: (1) reduced precision, (2) layer and tensor fusion, (3) kernel auto-tuning, (4) dynamic tensor memory, (5) multi-stream execution, and (6) time fusion. The reduced-precision stage provides FP16 or INT8, increasing throughput by quantizing models while maintaining accuracy. Layer and tensor fusion fuses nodes into a single kernel, which uses GPU memory and bandwidth efficiently. Based on the target GPU platform, the kernel auto-tuning stage chooses the most effective data layers and algorithms.


Fig. 7 Actual frames per second in inferencing

Dynamic tensor memory reduces the memory footprint and efficiently reuses memory for tensors. In multi-stream execution, multiple input streams are processed in parallel employing a scalable approach. Time fusion strengthens recurrent neural networks over a number of iterations by using flexibly synthesized modules. This study indicates how the YOLOv4 technique is implemented in various frameworks on the same dataset and examines their inference performance. Figure 8 shows the GPU utilization of 27% while inferencing with TensorRT, Fig. 9 shows the GPU utilization of 58% while inferencing with Darknet GPU, and Fig. 10 shows the comparison of GPU utilization across the frameworks used for inference.
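A hedged sketch of this optimization path using the TensorRT Python API, assuming TensorRT 8.x and a YOLOv4 model already exported to ONNX as yolov4.onnx (an assumption for illustration, not the authors' exact pipeline); proper INT8 builds additionally require a calibration dataset, omitted here:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov4.onnx", "rb") as f:     # assumed ONNX export of the model
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)    # reduced-precision quantization
# config.int8_calibrator = ...           # calibration data needed in practice

serialized_engine = builder.build_serialized_network(network, config)
with open("yolov4_int8.engine", "wb") as f:
    f.write(serialized_engine)
```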

5 Conclusion

Computer vision is an image processing technology used to recognize important features in digital photos and movies. Object detection applications include face detection, vehicle detection, pedestrian identification, and many others. In this study, the YOLOv4 object detection algorithm is integrated with enhanced inference methods, and the state-of-the-art YOLO algorithm is compared across various frameworks for inference. The result shows that using TensorRT with YOLOv4 gives the same performance as the Darknet framework with more than 30% less GPU usage.


Fig. 8 GPU utilization using Tensor RT

Fig. 9 GPU utilization using Darknet GPU


Fig. 10 Comparison of average GPU utilization

This model can therefore be deployed on single-board as well as NVIDIA-powered machines for various real-time applications such as traffic monitoring, vehicle tracking, violation detection in crowds, animal detection in farmland, and more. As future scope, these models can be deployed on devices with low hardware specifications for better detection accuracy and inference time.

References

1. Wang C-Y, Bochkovskiy A, Mark Liao H-Y (2021) Scaled-YOLOv4: scaling cross stage partial network. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition
2. Ulker B et al (2020) Reviewing inference performance of state-of-the-art deep learning frameworks. In: Proceedings of the 23rd international workshop on software and compilers for embedded systems
3. Ildar R (2021) Increasing FPS for single board computers and embedded computers in 2021 (Jetson nano and YOVOv4-tiny). Practice and review. arXiv preprint arXiv:2107.12148
4. Howell L, Anagnostidis V, Gielen F (2021) Multi-object detector YOLOv4-tiny enables high-throughput combinatorial and spatially-resolved sorting of cells in microdroplets. Adv Mater Technol 2101053
5. Rais AH, Munir R. Vehicle speed estimation using YOLO, Kalman filter, and frame sampling
6. Jeong EJ et al (2021) Deep learning inference parallelization on heterogeneous processors with TensorRT. IEEE Embed Syst Lett
7. Li X et al (2021) Research on the application of YOLOv4 target detection network in traffic scenarios by machine vision technology. J Phys: Conf Ser 2033(1)
8. Cai Y et al (2021) YOLOv4-5D: an effective and efficient object detector for autonomous driving. IEEE Trans Instrum Meas 70:1–13
9. Bochkovskiy A, Wang C-Y, Mark Liao H-Y (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934


10. Sung J-Y, Yu S-B (2020) Real-time automatic license plate recognition system using YOLOv4. In: 2020 IEEE international conference on consumer electronics-Asia (ICCE-Asia). IEEE
11. Dai B et al (2021) Field robot environment sensing technology based on TensorRT. In: International conference on intelligent robotics and applications. Springer, Cham
12. Kumar C, Punitha R (2020) YOLOv3 and YOLOv4: multiple object detection for surveillance applications. In: 2020 third international conference on smart systems and inventive technology (ICSSIT). IEEE
13. Tan L et al (2021) YOLOv4_Drone: UAV image target detection based on an improved YOLOv4 algorithm. Comput Electr Eng 93:107261
14. Mahrishi M et al (2021) Video index point detection and extraction framework using custom YoloV4 Darknet object detection model. IEEE Access 9:143378–143391

A Survey on Adaptive Authentication Using Machine Learning Techniques

R. M. Pramila, Mohammed Misbahuddin, and Samiksha Shukla

Abstract Adaptive authentication is a reliable technique for dynamically selecting the best mechanisms among multiple modalities to authenticate a user based on the user's risk profile, generated using behavior- and context-based information. Websites or enterprise applications enabled with adaptive authentication have a more robust security system, as analyzing the large volume of user, device, and browser data in real time generates a risk score that decides the appropriate level of security. Though a significant amount of research is being carried out on adaptive authentication, no single model is suitable against a global attack. This paper provides a structured, extensive survey of current adaptive authentication techniques available in the literature to identify the challenges that demand future research.

Keywords Risk-based authentication · Browser fingerprinting · Behavior-based fingerprinting · Context-based fingerprinting · Security and privacy

1 Introduction

Over the decades, computer security has faced significant challenges due to the diverse nature of modes of communication. Authentication is one of the criteria for user validation to secure digital data, for which passwords are considered the primary mode of authentication; due to advancements in technology, they are also a flawed and insecure one [1, 2]. A significant amount of research has been carried out to ensure that the gaps identified at each research stage are addressed to secure data from unauthorized users. Existing systems analyze user behavioral and communication patterns using ad hoc or adaptive authentication models and rule-based methodologies.

R. M. Pramila (B) · S. Shukla: Christ University, Bangalore, India; e-mail: [email protected]
M. Misbahuddin: Centre for Development and Advanced Computing (CDAC), Electronics City, Bangalore, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_28


Due to the dynamic nature of online fraud techniques, these solutions are operationally ineffective. Information networks use a variety of modes of communication, but their security across all of those modes is a major concern. In 1975, the feasibility of analyzing keyboard behavior to identify users was studied. Later, in the early 1990s, during the ubiquitous computing era, user activities were captured in context-aware applications [3, 4], which raised a security alarm. To make an impact on security threats, recent technologies should be integrated, focusing on providing a secure mode of communication for smart systems by combining heterogeneous authentication methods rather than finding a substitute, and on handling the fingerprinting issues concerning behavior, devices, and browsers, which have been used over several years to collect lists of device and user details to obtain system access. To address security issues, encryption, user identification, wax stamps to identify the sender, and many other techniques have been used, of which multifactor authentication is considered the secure way of communication these days. The concern is to authenticate the user to provide secure access to any web application via HTTP, with the assumption that the user's machine is valid to pass the network traffic. Recent studies on adaptive authentication mainly focus on behavioral biometric techniques; a detailed study of behavioral, device, and browser fingerprinting analysis for adaptive or risk-based authentication (RBA) remains to be carried out. In this study, we examine current advances in behavior-based biometrics and device and browser fingerprinting, as well as their implications for adaptive authentication. We achieve these objectives by making the following contributions:

i. A detailed study on the impact of keyboard and mouse dynamics for adaptive authentication;
ii. A study on the impact of a combined approach of behavior and context information for adaptive authentication;
iii. A discussion of future research directions in adaptive authentication using behavior, device, and browser fingerprinting.

The rest of the paper is organized as follows. Section 2 introduces the adaptive authentication techniques. The advanced fingerprinting features are discussed in Sect. 3. The attack types during authentication are discussed in Sect. 4. Detailed discussion on the open challenges and future research directions is given in Sect. 5.

2 Adaptive Authentication Techniques

The essential key to the authentication procedure is the distinctiveness of security measures. They can be divided into three categories: what a user knows (password, pattern), what a user possesses (smart card, RFID, Bluetooth tokens), and what a user is (biometrics). This section provides a detailed study of the behavioral and context information used to validate the user.


2.1 Authentication Technique Using Behavioral Dynamics

Most of the surveys associated with our work analyze user behavior in a risk-based authentication mechanism. Alaca and Van Oorschot [6] provided a detailed study of various techniques used to authenticate the user of a web application based on device fingerprinting vectors using adaptive authentication, which allows the system to select the best techniques to authenticate a user using context or behavioral information, given the limitations of password authentication [7]. Pusara and Brodley [8] performed a detailed study of mouse movements for user identification using a supervised learning approach, which opened up research directions for behavior-based user identification. A continuous authentication system is a mechanism that collects real-time user behavior information using behavioral factors, of which mouse dynamics plays a significant role: user data is collected and compared with the authentic user's database, the user is granted or denied access based on the comparison results, and static authentication information is then used to continue system access [9]. Salman and Hameed [10] investigated the effectiveness of a continuous authentication system using mouse dynamics, which gave an accuracy of 93.5%. Mondal and Bours [11] proposed a novel technique for continuous authentication (CA) and continuous identification in which a user detected as an impostor by CA is identified again with a forensic tool using behavioral information from mouse and keystrokes; the approach uses pairwise user coupling (PUC) to reduce the multiclass problem into two-class problems, and more behavioral activities could be included for better results. Zheng et al. [12] proposed a system that used a tool to record user input (RUI); using newly specified angle-based metrics such as angle of curvature, direction, and curvature distance, the system can reliably validate the user. A support vector machine classifier was used for verification and provided a considerably low EER based on data collected from 30 users. Shen et al. [13] developed software that collects data in the background during authentication; the proposed method uses mouse movement speed, click elapsed time, movement acceleration, and relative speed position, with SVM used for classification. Antal and Szabó [14] used a touch-swipe biometric method for an Android application; a physiological questionnaire was used to collect touchscreen and motion data. The WEKA tool was used for data analysis, where SVM improved the equal error rate of authentication from a single swipe (4%) to five swipes (0.2%). Based on most related surveys, mouse movement behavior alone will not suffice to re-authenticate the user [15]. The use of keystroke dynamics was studied by Shimshon et al. [16] to authenticate users, which gave better results than the earlier methods of authentication, while Traore et al. [17] proposed a framework combining mouse and keystroke dynamics. A series of studies was done on mouse and keystroke dynamics [17–21], all carried out in a controlled environment with specific, specified tasks. Bailey et al. [18] performed a study on a behavioral biometric system using the graphical user interface (GUI), keyboard, and mouse, using two approaches.


The first approach is feature-level fusion using BayesNet, LibSVM, and J48, where all features are integrated and feature selection is applied across the whole data collection; the second approach uses ensemble-based decision-level (EBDL) fusion, which makes decisions based on subsets of the larger dataset and later merges their predictions into one predictive model. EBDL outperformed feature-level fusion, but real-world deployment needs a better FRR. Fridman et al. [19] proposed a decision fusion method for evaluating a multimodal continuous authentication system on a real-world dataset. The technique included 12 sensors, with the possibility of adding more to the sensor bank, demonstrating the strength of the decision-level approach. The approach is categorized into four levels: at level 1, the 12 sensors are fused; at level 2, each sensor's relative contribution to the overall decision is evaluated; at level 3, the trade-off between the time to first authentication and the error rate is calculated; at level 4, the robustness of the system is checked against partial sensor spoofing attacks in which four sensors were compromised. Significant research has been done with the combined approach of keystroke and mouse dynamics, but no single technique will yield good results for all possible settings [5, 7, 22]. All the studies mentioned below use machine learning approaches for pattern recognition and classification (support vector machine (SVM), Bayesian network (BN), decision tree (DT), artificial neural network (ANN), binary random forest (BRF), Naïve Bayes (NB), learning algorithm for multivariate data analysis (LAMDA), etc.). Table 1 summarizes the factors used, such as keystroke dynamics (KD), mouse movement (MM), mouse clickstream (MC), and mouse dynamics (MD), together with the applied methods and achieved performance results.
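To make the kind of features these studies rely on concrete, here is a minimal sketch computing movement speed and direction from raw mouse events (timestamps in seconds, coordinates in pixels; all names are illustrative, not from any cited system):

```python
import math

def mouse_features(events):
    """events: list of (t, x, y) samples from one mouse movement.
    Returns per-segment speed and direction angle, two of the
    low-level measurements behind mouse-dynamics profiles."""
    feats = []
    for (t0, x0, y0), (t1, x1, y1) in zip(events, events[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        dt = max(t1 - t0, 1e-6)                     # guard against zero intervals
        feats.append({
            "speed": dist / dt,                     # movement speed
            "direction": math.atan2(y1 - y0, x1 - x0),  # direction of travel
        })
    return feats

print(mouse_features([(0.00, 10, 10), (0.05, 25, 18), (0.10, 40, 30)]))
```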

2.2 Authentication Using Device or Browser-Based Fingerprinting Technique

A browser fingerprint is information gathered through the browser, such as the operating system, browser language, screen resolution, and many other features. A device fingerprint collects data about a user's device via the browser or an app. The basic idea behind gathering device-specific data is identification or increased security. During the early development of the web, HTTP [28] and HTML [29] were introduced out of the need for a standard way of communication between various machines. The user-agent request header [28] is included in the HTTP protocol to prevent incompatibility issues by storing the name of the browser, its version, and the platform on which it is running. Alaca and Van Oorschot [6] classified and analyzed a variety of device fingerprinting approaches that might be utilized as the foundation for authentication, emphasizing that no new user interactions are necessary; the method strongly encourages the usage of device fingerprinting to enhance user authentication.

Table 1 Comparative study of behavior-based fingerprinting

| Study | Factors | ML algorithm | Users | FAR | FRR | EER | Accuracy / ACC |
|---|---|---|---|---|---|---|---|
| [23] 2003 | Hybrid approach (keyboard + mouse-based signature) | Artificial neural network | 41 | 4.4% | 0.2% | – | – |
| [8] 2004 | Mouse dynamics | Decision tree | 11 | 0.43% | 1.75% | – | – |
| [24] 2007 | Mouse movement speed, type of action, traveled distance, direction, movement elapsed time | Artificial neural network | 22 | 2.464% | 2.461% | – | – |
| [25] 2010 | Mouse dynamics | Learning algorithm for multivariate data analysis | 48 | 0% | 0.36% | – | – |
| [12] 2011 | Mouse clickstream | Support vector machine | 30 | – | – | 1.3% | – |
| [13] 2012 | MM, mouse clickstream | Support vector machine | 28 | 0.37% | 1.12% | – | – |
| [17] 2012 | Keystroke dynamics, mouse movement | Bayesian network | 24 | – | – | 8.21% | – |
| [18] 2014 | Keystroke dynamics, mouse movement, and graphical user interface | Bayesian network, decision tree, and support vector machine | 31 | 2.10% | 2.24% | – | – |
| [19] 2014 | Keystroke dynamics, mouse movement | Naïve Bayes and support vector machine | 67 | – | – | – | – |
| [14] 2016 | Touch swipes | Support vector machine | – | 0.10% | 0.20% | 0.2% | – |
| [11] 2016 | Keystroke dynamics, mouse movement | Support vector machine, artificial neural network | 25 | – | – | – | 62.2% (closed set), 58.9% (open set) |
| [10] 2019 | Mouse movement | Naïve Bayes | 48 | 0.01% | 0.82% | 0.08% | 93.56% / 0.98 |
| [9] 2019 | Mouse clickstream | Decision tree, k-nearest neighbors, random forest | 10 | – | – | – | – / 99.90% |
| [26] 2021 | Mouse clickstream | Decision tree, k-nearest neighbors, random forest, convolutional neural networks | 20 | 0.50–0.52% | 0.90–0.99% | 0.03–0.23% | – |
| [27] 2021 | Mouse movement | Binary random forest | 10 | – | – | – | 92% |


Alternatively, a biometric- and non-biometric-based approach was chosen by [30, 31] to design a framework that stores the authentication factors on the server and uses a deterministic system to decide the factors for user authentication, whereas Nag et al. [32] followed a probabilistic approach to compare different authentication factors (this might put security at stake, as the data is stored on the server). Ding et al. [33] proposed an authentication technique based on keystroke, mouse, and address features that customizes a linear-kernel SVM algorithm using the information divergence of each feature as a defined weight for the linear kernel function; the results outperformed traditional linear or Gaussian kernel SVMs. Wiefling et al. [34] performed a study on RBA in online services, which determines the features used by the services and identifies the best feature for controlling the risk factor during authentication, where the IP address was gauged as a highly weighted feature. Preuveneers and Joosen [35] proposed a framework that monitors various parameters throughout a login session, i.e., client-side and server-side fingerprints, and applies a similarity-preserving hash function to allow privacy-preserving fingerprint storage that permits similarity checks between stored fingerprints and subsequently collected fingerprints; the approach uses a comparison algorithm in which each attribute is assigned a weight to determine whether there is a significant enough change in the fingerprint to re-authenticate the user. All the browser fingerprinting features and their sources are listed in Fig. 1. Misbahuddin et al. [36] studied the impact of using ML techniques for risk-based authentication using context information, where the RBA is divided into three blocks.

Fig. 1 Static fingerprinting attributes and sources


At first, profile analysis is performed to collect information on user behavior during the login process. At the second stage, the risk engine classifies the data based on the volume of data collected: when adequate data is available, SVM is used to classify the user using a hyperplane, otherwise a kernel function is used for classification; a one-class SVM is used when there is a scarcity of anomaly data. The extracted user parameters are tested against the model of genuine users, and if the result is false, a risk score is calculated based on the weights of the parameters to decide on the MFA. The study stated that the Naïve Bayes classifier, which uses a probability ratio to classify users, is suitable for small amounts of data. At last, the AA block decides the MFA based on the risk engine scores, and a suitable MFA is chosen to validate the user; beyond a particular risk level, the user is denied access to the system. The work lacks real-time user data; hence, the approach may not yield the required results in real time. In a recent study, Martín et al. [37] proposed a security solution for federated identity management (FIM) by using the user and entity behavior analytics (UEBA) workflow, which allows relying parties within FIM to authenticate the end-user based on the session fingerprint of each user collected during the login process. Eleven users' data were analyzed for three use cases against two different attacks: in the first attack, "credential theft and spoofing," static features gave better results; in the second attack, "session hijacking," dynamic features gave better results, whereas static features did not contribute to outlier detection. The work used three use cases to evaluate the security solution. Use Case 1: a level of assurance (LoA) is requested by the relying party (RP) from the identity provider (IdP) to ensure that security is not at stake for specific applications where high security is needed; the higher the LoA, the higher the level of authentication. Use Case 2: the RP requests continuous authentication even after the user is authenticated by the IdP; when the RP detects anomalous behavior, it might ask the IdP to refresh the access token to authenticate the user. Use Case 3: the user parameters collected are processed periodically rather than in real time. Subsequently, a comparison of the existing work on AA using ML models is given in Table 2; the factors are based on combinations of behavior, browser, and device information.
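The risk-engine stage described above can be sketched as a weighted risk score mapped to an authentication level; the thresholds, weights, and parameter names below are illustrative assumptions, not the cited system's values:

```python
PARAM_WEIGHTS = {"new_ip": 0.3, "new_device": 0.3,
                 "odd_login_time": 0.2, "failed_attempts": 0.2}

def risk_score(anomalies):
    """anomalies: dict parameter -> bool (True if the parameter
    deviates from the learned user profile)."""
    return sum(w for p, w in PARAM_WEIGHTS.items() if anomalies.get(p))

def choose_mfa(score):
    if score < 0.3:
        return "password only"
    if score < 0.7:
        return "password + OTP"      # step-up factor
    return "deny access"             # beyond the acceptable risk level

print(choose_mfa(risk_score({"new_ip": True, "failed_attempts": True})))
```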

3 Fingerprint Authentication Using Advanced Technologies

Canvas: The Canvas API "allows you to draw and manipulate visuals on a canvas drawing surface" by providing "objects, methods, and properties." By rendering a specific picture, canvas fingerprinting can be used to distinguish devices with pixel precision [40], as fallback typefaces may differ from one user to the next depending on the operating system (OS) and device fonts [41]. Studies state that the OS, installed fonts, graphics card, browser version, and sub-pixel hinting [42, 43] play a major role in fingerprinting. One of the benefits of canvas fingerprinting is its stability.

WebGL: WebGL uses the previously mentioned canvas element, without plugins, to render dynamic 3D objects in the browser using JavaScript.


Table 2 Comparative study of context-based fingerprinting

| Study | Factors | ML algorithm | Users | FAR | FRR | ERR | Accuracy | ACC |
|---|---|---|---|---|---|---|---|---|
| [36] 2017 | IP address, geolocation, time zone, login time, OS version, browser version, device type, number of failed attempts | Support vector machine, one-class support vector machine, Naïve Bayes | 1 | – | – | – | – | – |
| [33] 2019 | Address (IP, MAC), keystroke dynamics, mouse movement | Support vector machine | 10 | 1.23% | 3.95% | – | – | – |
| [38] 2019 | Application usage, keystroke dynamics and mouse movement | Decision tree, random forest | 35 | – | – | – | 81%, 84.8% | 76.2%, 86.5% |
| [39] 2019 | Session context information and keystroke dynamics, mouse dynamics | Random forest | 24 | – | – | – | 97.50% | – |
| [37] 2021 | Static: user-agent, screen resolution; dynamic: keystroke and mouse dynamics | Naïve Bayes; k-nearest neighbors using Manhattan distance; one-class support vector machine with 1-g | 11 | 0%, 0.149%, 0.175% | 0%, 0.104%, 0.168% | – | – | – |

It generates a fixed image to be printed on the canvas area and then generates a deterministic hash value. Mowery and Shacham [42] also investigated the use of WebGL for fingerprinting, and the experiments conducted showed that the processing pipeline is not identical between devices. The stability, repeatability, and low resource use [6] of WebGL facilitate exposing numerous pieces of information about the underlying browser and hardware for fingerprinting.

AudioContext: In the case of audio fingerprinting, the fingerprinting is done using the audio stack of the device. Audio fingerprinting is made feasible through the AudioContext API, just as canvas fingerprinting is made possible by the Canvas API; it is a Web Audio API interface included in most contemporary browsers. Englehardt and Narayanan [44] discovered fingerprinting scripts that process an audio signal generated by an OscillatorNode; the generated audio signal is processed according to the device's audio settings and audio hardware and utilized to produce an identifier that can be used to track a user.

Browser Extensions: Recent browsers have always encouraged customization by allowing users to customize their browser using extensions for user interface modifications, cookie management, ad blocking, and other simple add-ons that augment a browser's functionality. Laperdrix et al. [45] conducted a study on how website functionality can be misused to detect a user's installed browser extensions using CSS rules injected by the browser. User monitoring techniques that rely on browser fingerprinting are becoming more common as browser vendors implement defenses against cookie-based tracking; as a result, recent browsers have lately added mechanisms to reduce the impact of fingerprinting. Karami et al. [46] provided a study on the uniqueness of extension fingerprints and also outlined a de-anonymization approach that uses publicly available extension reviews to disclose a user's identity, in addition to allowing attackers to uniquely identify a device and monitor users via sensitive or personal user information from the extensions. Sjösten et al. [47] performed a study on non-behavioral browser extension discovery by accessing only specific URLs to determine whether an extension is installed on the device; the study used a web page logo, loaded from the device storage, that the browser needs to fetch when the web page is displayed, and a simple script can utilize this technique to identify the presence of a specific extension. Not all extensions have such readily available resources, so this approach may not detect every extension; in some situations, browser extensions may add a detectable button to the DOM of the web page [48].

JavaScript Standards Conformance: Mulazzani et al. [49] studied the underlying JavaScript engine to identify a browser. They analyzed browser behavior to see whether the browser complied with the JavaScript standard; the study states that this strategy is possible because browser families and subsequent versions have different JavaScript engines [50]. This concept was expanded to include looking for information about the system outside the browser [51].


as possible to locate information that could identify differences at the operating system and architecture levels. CSS Querying: Unger et al. [52] conducted a series of tests to identify CSS attributes that are exclusive to certain browsers; because vendor-specific CSS prefixes are not shared across browsers, the browser family can be discovered with their method. A similar method was used by Saito et al. [53] to identify the browser and its family. They also used the @media and @font-face rules defined in CSS3 to collect information about the screen and installed fonts. Font Metrics: Font metrics contain information on how fonts render on a specific screen, such as spacing, subscripts and superscripts, and the alignment of text of different sizes. Fifield and Egelman [54] investigated character glyphs to swiftly and effectively fingerprint web users. They found that the same character with the same style could be rendered with different bounding boxes depending on the browser and device used. Benchmarking: An alternative technique to learn more about a device is benchmarking its CPU and GPU. The objective of traditional GPU and CPU benchmarks was to measure performance when accessing a web page; here, conversely, the idea behind benchmarking is to learn the configuration details of the CPU and GPU in order to identify a device. Mowery et al. [55] demonstrated benchmarking of the browser's JavaScript engine with a test that can properly recognize the browser and its version. The method's biggest flaw is that it takes 190.8 s to execute the entire benchmark suite. Given that the required properties can otherwise be collected in milliseconds or less, this time gap makes it nearly impossible to deploy such approaches in the field. Benchmarking the GPU can reveal significant disparities between devices, since a low-end GPU in a smartphone behaves very differently from the latest high-end graphics card [56]. Certain studies found that the number of cores and the CPU family can also be detected through benchmarking [57]. A study on device-based fingerprinting was done by Sanchez-Rola et al. [58] by measuring clock differences on a device. The study showed remarkable differentiation between devices with the same software and hardware by measuring the time taken to execute specific methods. The results proved comparatively better than canvas and WebGL fingerprinting. Still, the study lacks details on browser versions, which makes it uncertain whether the approach will work for all browser versions given recent changes in the JavaScript API. The influence of the CPU or the operating system on the result is also undetermined. Battery Status: When a user visits a website, the HTML5 "feature" called the battery status API can be used to determine how much battery power is left on their laptop, smartphone, or tablet. The role of the API was to report the status of the battery, but no one anticipated the extent to which it could be used for security threats. In 2015, Olejnik et al. [43] investigated the battery status API's privacy, emphasizing the battery level, and stated that the API might be used as a short-term identifier across websites and that frequent readouts could aid in determining the battery's capacity. The standard's authors stated that "the information revealed has negligible impact on privacy or fingerprinting" without knowing that, at a later stage, this API would have a significant


impact on privacy. Many browser vendors tried to mitigate this issue by removing the API or spoofing the required information.
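To make the mechanics of attribute-based fingerprinting discussed above concrete, the sketch below shows, in Python and for illustration only, how a tracker-style script could fold a handful of collected attributes into one stable identifier. The attribute names and values are hypothetical stand-ins for what a real script would read from the browser; this is not the implementation of any of the surveyed systems.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine collected browser/device attributes into one stable hash."""
    # Serialize with sorted keys so the same attributes always produce
    # the same digest regardless of the order they were collected in.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical values standing in for what a script would collect
# via navigator properties, Canvas, WebGL, AudioContext, etc.
sample = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "UTC+05:30",
    "canvas_hash": "9f2b...",
    "audio_hash": "47c1...",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}
print(fingerprint(sample))
```

Because every attribute feeds the same digest, even a small change (a browser update, a newly installed font) yields a wholly different identifier, which is why fingerprints drift over time and trackers must re-link them.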

4 Categories of Attacks During Authentication

This section shows how several forms of attack can be mounted against the two most common traditional user authentication approaches: password-based and biometric-based.

Spoofing Attacks: Artificial synthesis and replay attacks are the two main categories of spoofing attacks. Artificial synthesis might be considered a direct attack because it uses original biometrics to generate an artificial version [59]. Fingerprints left on doorknobs, personal photos published on social media, and images retrieved from cameras placed in public places are all easily accessible to attackers. Attackers can impersonate users' biological features using advanced techniques such as face masks and rubber fingerprint replicas. A replay attack is a type of indirect attack that takes advantage of various methods to obtain original biometrics. Attackers can fool the target system by using a sample taken from a legitimate user: the adversary might discreetly record users' voices or capture photographs from social media [60] and then resubmit the information to acquire access to the system. Une et al. [61] proposed the wolf attack, a kind of spoofing attack that uses biometric samples similar to any registered user template and tries to match them against valid user templates. They discovered that the wolf attack has a higher success rate than the brute force attack. Measures: Fortunately, several detection systems have been presented to guard against these types of attacks. Witkowski et al. [62] suggested a detection method based on high-frequency features that can directly screen out replayed input data. Timestamps can be used along with security tokens, and liveness detection techniques can be effective in detecting spoofing attacks. SSL aids in the prevention of replay attacks and is required in the event of a cookie replay attack, where both parties exchange a random number that they use for all encrypted transactions. The replay attack can also be mitigated by setting a short cookie timeout value, giving the attacker only a small window in which to attack.

Brute Force Attack: Brute force and dictionary attacks are more common in password-based systems. In a dictionary attack, the attacker tries to guess the password by providing various combinations of words or phrases from a user-defined dictionary. A brute force attack is tough to defend against in general: it proceeds on a trial-and-error basis over a large number of key combinations [63] and, unlike a dictionary attack, targets even unknown combinations. A brute force attack is implemented using a computer program or ready-made software. The attacker may also find users whose password matches one chosen by the attacker, as most users prefer to set simple, easy-to-recall passwords. Measures: Against online services, a brute force attack is ineffective because if an administrator detects several login attempts from a single IP address, that IP address will swiftly be blocked, as will the attacker's account. Another approach for slowing an attacker's pace is a tarpit, which introduces an authentication delay and thereby reduces the number of attempts per minute. Alternatively, if the number of failed login attempts surpasses a given level, a honeypot approach can be used. To counter brute force attacks, most online companies, particularly banking and financial websites, use the completely automated public Turing test (CAPTCHA) system [64]. A brute force attack is likely to fail when the key size is large and a strong password is used.

Shoulder Surfing Attack: Shoulder surfing is one of the most common attacks; an attacker can easily carry it out while credentials are entered via the keyboard, either by looking over an individual's shoulder or by recording the login process with hidden cameras. Measures: Techniques to protect against shoulder surfing include placing the device at an angle where the screen is not visible to others, using strong passwords that are harder to observe, and typing decoy characters while entering the password and deleting them before proceeding with the actual characters. Biometric passwords can be used where possible, or two-factor authentication may be employed.

Phishing Attack: In a phishing attack, a user is flooded with emails, phone calls, or fake website links resembling, for example, banking websites, where DNS cache poisoning navigates the user to the fake website to extract sensitive information such as user names, passwords, PINs, and credit card numbers [65]. Measures: It is best not to click on any links in an email when the source is not known; anti-phishing add-ons can be installed in the browser. Avoid entering sensitive information on any site whose address starts with HTTP or that does not show a closed padlock icon next to the URL; HTTPS is the most secure protocol for accessing a site.

Session Hijacking: Session hijacking is a technique where session information shared between a web browser and server for authentication is acquired by an attacker to gain access to the system.


Session hijacking is the most common attack in web applications, especially banking applications, as HTTP header information may provide the required session details [66]. Measures: Wedman et al. [66] state that various measures can be taken: long random numbers can be used to generate session tokens, and the user's IP address can be used to link the token to a specific connection, provided that the attacker cannot use the same IP address as the user. The session token can be regenerated when switching between the HTTPS and HTTP protocols, to mitigate the risk that a captured token is used to hijack the session. Using browsers with built-in XSS mitigation tools provides improved security to the end user. Cross-site request forgery (CSRF) protections can be implemented on both the user and the developer side: end users have to be aware of security measures while using a browser, such as not saving passwords, not leaving a browser session logged in, and not accessing many websites at a time, while developers should avoid adding session-related information to URLs and should disallow URL rewriting. Also, implementing a shorter session timeout on the servers may help mitigate session attacks.

Attacks on Deep and Machine Learning Models: ML and deep learning (DL) technologies are required to deal with ever-growing massive data and the rise of artificial intelligence (AI) [67]. Cybersecurity has benefited from ML and DL defenses against various forms of attack [68]. The models themselves, on the other hand, are susceptible to security attacks. The training and testing data can be exploited to the extent that models produce wrong classification results: the attacker may add adversarial noise to the original data, computed with any optimization algorithm, to interfere with classification [69]. Measures: Training the model with adversarial samples will improve the robustness of a neural network [70]. However, an adversarially trained model remains subject to black-box attacks; to address this issue, Tramèr et al. [71] proposed ensemble adversarial training (EAT), a training approach that utilizes adversarial data transferred from various pretrained models. EAT improves network robustness on adversarial data transferred from other models by increasing the heterogeneity of the adversarial samples used for adversarial training. Random input transformation has also been used, either resizing the input images or padding zeros around them before using the data to train DNN models [72]. Liao et al. [73] introduced a feature de-noising technique, where a feature-level loss function reduces the difference between adversarial and benign samples, avoiding de-noising at the pixel level.
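As a rough illustration of the adversarial-training defense surveyed above [70, 71], the sketch below generates FGSM-style perturbed inputs with TensorFlow and mixes them into a training batch. It is a minimal single-model sketch under stated assumptions (a compiled Keras classifier `model` that outputs class probabilities for inputs scaled to [0, 1], with integer labels), not the ensemble scheme of Tramèr et al. [71].

```python
import tensorflow as tf

def fgsm_examples(model, x, y, eps=0.03):
    """Create adversarial inputs by stepping along the sign of the
    loss gradient w.r.t. the input (fast gradient sign method)."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    # Perturb each input slightly in the direction that most increases
    # the loss, then clip back to the valid data range.
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def adversarial_train_step(model, x, y, eps=0.03):
    """One training step on a 50/50 mix of clean and adversarial data."""
    x_adv = fgsm_examples(model, x, y, eps)
    x_mix = tf.concat([x, x_adv], axis=0)
    y_mix = tf.concat([y, y], axis=0)
    return model.train_on_batch(x_mix, y_mix)
```

Training on the mixed batch is what hardens the decision boundary; EAT differs mainly in drawing `x_adv` from several pretrained source models rather than from the model being trained.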


5 Discussion and Challenges Knowledge-based systems are the most commonly used mechanism for authentication; however, significant research is in progress to counter security threats during authentication, especially when passwords are used, as there are several ways impostors can gain access to a password-protected system. Multifactor authentication, such as OTPs, smart cards, and QR codes, can be used to secure the authentication process. Adaptive authentication can dynamically select the best authentication technique using contextual or behavioral information about the user, the device, and other attributes.

5.1 Challenges Feature Extraction and Effectiveness: The main challenge in adaptive authentication is how to identify, accumulate, and filter the best features, as security decisions in adaptive authentication are only effective with sufficiently precise, trustworthy, correct, and up-to-date context information. Collecting data on a wide scale is difficult for researchers, as their trials might last several months, during which the browsers may have been updated: parameters available at the start of a study may no longer exist. Even while developing new APIs, developers are now more cautious, as several past APIs were exploited for fingerprinting and raised security alarms. More research is needed in this area, since extracting parameters from the browser and device cannot rely on a single fixed technique when the available information keeps changing. System Administration: Administrators face a significant challenge in adapting to changes in context to modify the system's behavior. Existing systems lack these capabilities, or the complex structure has to be modified manually, for which administrators need detailed knowledge of the system's inner workings; what is needed is an admin interface that makes it easy to learn the functioning of an adaptive system and to configure and test the system based on the requirements. Furthermore, previous usability studies have indicated that for a system to be perceived as secure, users must have some control over it, even though automated decision-making might yield good results. Integration: Adaptive authentication methods should be easy to integrate with current application environments such as web, distributed, and native applications. Current systems lack easy integration techniques; research efforts toward integration with little or no adjustment to the application are required. Modeling and Evaluation: The main challenge in modeling is handling unlabeled and imbalanced data for anomaly detection; data can only be extracted once a new user registers and logs in to the website. Until sufficient user behavior or context-based details are captured, predictions may not be accurate. Further research could use non-parametric models, or group similar data from different users to represent a new user's behavior, to minimize the time needed for accurate predictions.
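As one concrete direction for the modeling challenge above (unlabeled, imbalanced behavioral data), an unsupervised detector such as an isolation forest can flag anomalous sessions without any labeled attacks. The sketch below uses scikit-learn; the three behavioral features and their distributions are hypothetical placeholders, not the feature set of any surveyed system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical behavioral features per session:
# [mean keystroke interval (ms), mouse speed (px/s), login hour]
normal_sessions = rng.normal([180, 420, 10], [20, 60, 2], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)  # trained on (mostly) legitimate behavior

new_session = np.array([[60, 1500, 3]])  # unusually fast, odd hour
# predict() returns +1 for inliers and -1 for anomalies; an anomalous
# score could trigger step-up (stronger) authentication.
print(detector.predict(new_session))  # expected: [-1] (anomalous)
```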


Fingerprinting Methods Versus Protection Techniques: Recent studies on browser fingerprinting point research in two directions at once. Browser fingerprinting can be a severe threat to the user's privacy, as it bypasses existing protection mechanisms; but the same studies also inform developers and policymakers about the issues, so that corrective measures can be taken to secure systems. On one side, ad blockers have gained popularity; on the other, specific browsers like Mozilla Firefox or Brave are incorporating anti-fingerprinting techniques to enhance privacy. Detailed study of browser security is encouraged so that attack strategies become publicly known. Fingerprinting detection is very complex, as there is no defined mechanism for behavioral or browser fingerprinting to precisely detect malicious activity. Regulators will have to develop new methods of working with businesses to protect users' privacy.

6 Conclusion Authentication is one of the primary issues in information security. Password-based authentication is not enough to secure a system; hence, multifactor authentication (MFA) was introduced. MFA provides additional security but is, on the other hand, a time-consuming procedure for the user to follow to gain access to the system, even when the user is legitimate. Adaptive authentication systems select the best authenticator dynamically and reduce the strain on legitimate users. We have observed that, to date, adaptive authentication systems have not been designed to handle the heterogeneous structure of the system, nor has a methodological approach been followed for the design of AA systems. Here, we have studied the existing AA systems designed using ML models and presented the challenges in current systems to foster research in this area.

References 1. Morris R, Thompson K (1979) Password security: a case history. Commun ACM 22(11):594–597. https://doi.org/10.1145/359168.359172 2. Bonneau J, Herley C, Van Oorschot PC, Stajano F (2015) Passwords and the evolution of imperfect authentication. Commun ACM 58(7):78–87. https://doi.org/10.1145/2699390 3. Weiser M, Corporation X. Ubiquitous computing, vol 804, pp 71–72 4. Weiser M (1998) The future of ubiquitous computing on campus. Commun ACM 41(1) 5. Ometov A, Bezzateev S, Mäkitalo N, Andreev S, Mikkonen T, Koucheryavy Y (2018) Multifactor authentication: a survey. Cryptography 2(1):1–31. https://doi.org/10.3390/cryptography2010001


6. Alaca F, Van Oorschot PC (2016) Device fingerprinting for augmenting web authentication: classification and analysis of methods. ACM international conference proceeding series, 5–9 Dec 2016, pp 289–301. https://doi.org/10.1145/2991079.2991091 7. Arias-Cabarcos P, Krupitzer C, Becker C (2019) A survey on adaptive authentication. ACM Comput Surv 52(4). https://doi.org/10.1145/3336117 8. Pusara M, Brodley CE (2004) User re-authentication via mouse movements, pp 1–8 9. Almalki S, Chatterjee P, Roy K (2019) Continuous authentication using mouse clickstream data analysis. Springer 10. Salman OA, Hameed SM (2019) Using mouse dynamics for continuous user authentication. Springer 11. Mondal S, Bours P (2016) Combining keystroke and mouse dynamics for continuous user authentication and identification 12. Zheng N, Paloski A, Wang H (2011) An efficient user verification system via mouse movements, pp 139–150 13. Chen C, Cai Z, Guan X (2012) Continuous authentication for mouse dynamics : a pattern-growth approach 14. Antal M, Szabó LZ (2015) Biometric authentication based on touchscreen swipe patterns. Procedia Technol 22:862–869. https://doi.org/10.1016/j.protcy.2016.01.061 15. Artuner H, Application A (2009) Active authentication by mouse movements, pp 606–609 16. Shimshon T, Moskovitch R, Rokach L, Elovici Y (2010) Continuous verification using keystroke dynamics. https://doi.org/10.1109/CIS.2010.95 17. Traore I, Woungang I, Obaidat MS, Nakkabi Y, Lai I (2012) Combining mouse and keystroke dynamics biometrics for risk-based authentication in web environments. In: Proceedings— 2012 fourth international conference on digital home (ICDH 2012), pp 138–145. https://doi. org/10.1109/ICDH.2012.59 18. Bailey KO, Okolica JS, Peterson GL (2014) User identification and authentication using multimodal behavioral biometrics. Comput Secur 43:77–89. https://doi.org/10.1016/j.cose.2014. 03.005 19. Fridman L et al (2015) Multi-modal decision fusion for continuous authentication. Comput Electr Eng 41:142–156. https://doi.org/10.1016/j.compeleceng.2014.10.018 20. Jagadeesan H, Hsiao MS (1985) A novel approach to design of user re-authentication systems 21. T. Acceptance, vol 9. Purdue University 22. Laperdrix P, Bielova N, Baudry B, Avoine G (2020) Browser fingerprinting: a survey. ACM Trans Web 14(2). https://doi.org/10.1145/3386040 23. Everitt RAJ, Mcowan PW (2003) Java-based Internet biometric authentication system. https:// doi.org/10.1109/TPAMI.2003.1227991 24. Ahmed AAE, Traore I (2007) A new biometric technology based on mouse dynamics. IEEE Trans Depend Sec Comput 4(3):165–180 25. Nakkabi Y, Traoré I, Ahmed AAE (2010) Improving mouse dynamics biometric performance using variance reduction via extractors with separate features. IEEE Trans Syst Man Cybern Part A: Syst Hum 40(6):1345–1353 26. Almalki S, Assery N, Roy K (2021) An empirical evaluation of online continuous authentication and anomaly detection using mouse clickstream data analysis. Appl Sci 11(13). https://doi.org/ 10.3390/app11136083 27. Siddiqui N, Dave R (2021) Continuous authentication using mouse movements, machine learning, and minecraft, pp 9–10 28. Shen C, Cai ZM, Guan XH, Fang C, Du YT (2010) User authentication and monitoring based on mouse behavioral features. Tongxin Xuebao/J Commun 31(7):68–75 29. Bartolomeo G, Kovacikova T (2013) Hypertext transfer protocol. Identification and management of distributed data, pp 31–48. https://doi.org/10.1201/b14966-5 30. 
Nag AK, Dasgupta D, Deb K (2014) An adaptive approach for active multi-factor authentication. In: 9th annual symposium on information assurance, pp 39–47 [Online]. Available at: http://www.albany.edu/iasymposium/proceedings/2014/ASIA14Proceedings.pdf#page=49


31. Nag AK, Dasgupta D (2014) An adaptive approach for continuous multi-factor authentication in an identity eco-system. ACM international conference proceeding series, pp 65–68. https:// doi.org/10.1145/2602087.2602112 32. Nag AK, Roy A, Dasgupta D (2015) An adaptive approach towards the selection of multi-factor authentication. In: Proceedings—2015 IEEE symposium series on computational intelligence (SSCI 2015), pp 463–472. https://doi.org/10.1109/SSCI.2015.75 33. Ding X, Peng C, Ding H, Wang M, Yang H, Yu Q (2019) User identity authentication and identification based on multi-factor behavior features. In: 2019 IEEE Globecom workshops (GC Wkshps) 2019—proceedings. https://doi.org/10.1109/GCWkshps45667.2019.9024581 34. Wiefling S, Lo Iacono L, Dürmuth M (2019) Is this really you? An empirical study on riskbased authentication applied in the wild. IFIP Advances in Information and Communication Technology, vol 562, pp 134–148. https://doi.org/10.1007/978-3-030-22312-0_10 35. Preuveneers D, Joosen W (2015) SmartAuth: dynamic context fingerprinting for continuous user authentication, pp 2185–2191. https://doi.org/10.1145/2695664.2695908 36. Mohammed Misbahuddin BD, Bindumadhava BS (2017) Design of a risk based authentication system using machine learning techniques. In: IEEE SmartWorld, ubiquitous intelligence & computing, advanced & trusted computed, scalable computing & communications, cloud & big data computing, internet of people and smart city innovation, vol 87, no 1–2, pp 149–200 37. Martín AG, Beltrán M, Fernández-Isabel A, Martín de Diego I (2021) An approach to detect user behaviour anomalies within identity federations. Comput Secur 108. https://doi.org/10. 1016/j.cose.2021.102356 38. Zhang H, Singh D, Li X (2019) Augmenting authentication with context-specific behavioral biometrics. In: Proceedings of the annual Hawaii international conference on system sciences, Jan 2019, pp 7282–7291. https://doi.org/10.24251/hicss.2019.875 39. Solano J, Camacho L, Correa A, Deiro C, Vargas J, Ochoa M (2019) Risk-based static authentication in web applications with behavioral biometrics and session context analytics 40. Laperdrix P, Baudry B (2016) Beauty and the beast: diverting modern web browsers to build unique browser fingerprints. https://doi.org/10.1109/SP.2016.57 41. Gómez-boix A, Laperdrix P, Baudry B (2018) Hiding in the crowd: an analysis of the effectiveness of browser fingerprinting at large scale, pp 309–318 42. Mowery K, Shacham H (2007) Pixel perfect: fingerprinting canvas in HTML5 43. Acar G, Eubank C, Englehardt S, Juarez M, Narayanan A, Diaz C (2014) The web never forgets: persistent tracking mechanisms in the wild categories and subject descriptors, pp 674–689 44. Englehardt S (2014) Online tracking: a 1-million-site measurement and analysis 45. Laperdrix P, Starov O, Chen Q, Kapravelos A, Nikiforakis N (2021) Fingerprinting in style: detecting browser extensions via injected style sheets. In: Proceedings of the 30th USENIX security symposium, pp 2507–2524 46. Karami S, Ilia P, Solomos K, Polakis J (2020) Carnus: exploring the privacy threats of browser extension fingerprinting 47. Sjösten A, Van Acker S, Sabelfeld A (2017) Discovering browser extensions via web accessible resources, pp 329–336 48. Starov O, Nikiforakis N (2017) XHOUND: quantifying the fingerprintability of browser extensions 49. Mulazzani M, Reschl P, Huber M (2013) Fast and reliable browser identification with Javascript engine fingerprinting 50. 
Nikiforakis N, Kapravelos A, Joosen W, Kruegel C, Piessens F, Vigna G (2010) Cookieless monster: exploring the ecosystem of web-based device fingerprinting 51. Schwarz M, Lackner F, Gruss D (2019) JavaScript template attacks: automatically inferring host information for targeted exploits 52. Unger T, Mulazzani M, Frühwirt D, Huber M, Schrittwieser S, Weippl E (2013) SHPF: enhancing HTTP (S) session security with browser fingerprinting (extended preprint) 53. Takei N, Saito T, Takasu K, Yamada T (2015) Web browser fingerprinting using only cascading style sheets. https://doi.org/10.1109/BWCCA.2015.105


54. Fifield D, Egelman S (2015) Font metrics, vol 1, pp. 107–124. https://doi.org/10.1007/978-3662-47854-7 55. Mowery K, Bogenreif D, Yilek S, Shacham H (2011) Fingerprinting information in JavaScript implementations 56. Nakibly G, Shelef G, Yudilevich S (2015) Hardware fingerprinting using HTML5 57. Saito T, Yasuda K, Tanabe K, Takahashi K (2018) Web browser tampering: inspecting CPU features from side-channel information. https://doi.org/10.1007/978-3-319-69811-3 58. Sanchez-Rola I, Santos I, Balzarotti D (2018) Clock around the clock: time-based device fingerprinting, pp 1502–1514 59. Wu Z, Evans N, Kinnunen T, Yamagishi J, Alegre F, Li H (2014) Spoofing and countermeasures for speaker verification: a survey. Speech Commun. https://doi.org/10.1016/j.specom.2014. 10.005 60. Ferrer MA (2018) A biometric attack case based on signature synthesis, pp 26–31 61. Une M, Otsuka A, Imai H (2007) Wolf attack probability: a new security measure in biometric authentication systems, pp 396–406 62. Witkowski M, Kacprzak S, Zelasko P, Kowalczyk K, Gałka J (2017) Audio replay attack detection using high-frequency features audio replay attack detection using high-frequency features. https://doi.org/10.21437/Interspeech.2017-776 63. Adams C, Jourdan GV, Levac JP, Prevost F (2010) Lightweight protection against brute force login attacks on web applications 64. Bursztein E, Martin M, Mitchell JC (2011) Text-based CAPTCHA strengths and weaknesses, pp 125–137 65. Huang C, Ma S, Chen K (2011) Using one-time passwords to prevent password phishing attacks. J Netw Comput Appl 34(4):1292–1301. https://doi.org/10.1016/j.jnca.2011.02.004 66. Taylor P, Wedman S, Tetmeyer A, Saiedian H, Wedman S (2013) An analytical study of web application session management mechanisms and HTTP session hijacking attacks, pp 37–41. https://doi.org/10.1080/19393555.2013.783952 67. Siddiqi A, Ph D (2019) Adversarial security attacks and perturbations on machine learning and deep learning methods 68. Mahadi NA, Mohamed MA, Mohamad AI, Makhtar M, Kadir MFA, Mamat M (2018) A survey of machine learning techniques for behavioral-based biometric user authentication. Recent Adv Cryptogr Netw Secur. https://doi.org/10.5772/intechopen.76685 69. Yuan X, He P, Zhu Q, Li X (2019) Adversarial examples: attacks and defenses for deep learning. IEEE Trans Neural Netw Learn Syst PP:1–20. https://doi.org/10.1109/TNNLS.2018.2886017 70. Ren K, Zheng T, Qin Z, Liu X (2020) Adversarial attacks and defenses in deep learning. Engineering 6(3):346–360. https://doi.org/10.1016/j.eng.2019.12.012 71. Tramèr F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P (2018) Ensemble adversarial Training: attacks and defenses, pp 1–22 72. Xie C, Zhang Z, Yuille AL, Wang J (2018) Mitigating adversarial effects through randomization, pp 1–16 73. Liao F, Liang M, Dong Y, Pang T, Hu X (2017) Defense against adversarial attacks using high-level representation guided denoiser, pp 1778–1787

An Overview on Security Challenges in Cloud, Fog, and Edge Computing Deep Rahul Shah, Dev Ajay Dhawan, and Vijayetha Thoday

Abstract Innovative advancements in cloud computing frameworks offer cooperative services to end clients and to medium and large organizations. As more and more data of individuals and organizations are placed in the cloud, there is a growing concern about the security of that data. Clients are apprehensive about moving their business to the cloud, despite its popularity; security concerns are perhaps the biggest hindrance to cloud computing's expansion, and worries about information security and data protection continue to influence the market. Clients must understand the risk of data breaches in the cloud environment. Fog computing brings cloud computing storage, networking, and computing capabilities to the edge, and security and safety are among the most pressing concerns for fog computing systems. Edge computing has been a tremendous help to lightweight devices in completing complicated tasks in a timely manner; yet its hasty development has resulted in general neglect of the safety risks in edge computing platforms and their enabled applications. In this study, we compare and explain each computing paradigm and examine the security issues that the cloud, fog, and edge computing paradigms face. Keywords Fog computing · Edge computing · Cloud computing · Internet of things

1 Introduction Cloud computing has established itself as a formidable force in the field of information technology. Cloud computing is regarded as one of the most important components for data storage, security, access, and cost stability. Due to its centralized structure, however, cloud computing is unable to meet requirements for high variety, location awareness, and low latency. As a result, Cisco proposed a promising idea known as fog computing to address these issues. D. R. Shah (B) · D. A. Dhawan · V. Thoday NMIMS Mukesh Patel School of Technology Management and Engineering, Mumbai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_29


To provide the greater functionality that applications need, fog computing connects the IoT and the cloud. Because the term fog implies a cloud close to the ground, data and computation in fog computing are located close to the end user. Fog computing removes some non-essential or incorrect data and consolidates the critical coordinated data within the required time frame. Edge computing is involved every day in various instruments: cellphones, tablets, robots, and the smart vehicles used in the automotive and manufacturing industries. Edge computing also appears in healthcare IoT and clinical monitoring devices. The delay in processing information is significantly lowered, as edge computing brings storage and computing capabilities directly to clients; from the other side, to alleviate workload pressure, activities and data can still be executed on the cloud server. With these qualities, edge computing has been growing steadily in recent years. Cloud, fog, and edge computing are among the most important and extensively used computing paradigms in organizations today, and they may carry sensitive and private information of millions of users; it is therefore very important to be abreast of all the security challenges one can face in each of these paradigms, for timely recovery of data and protection from data breaches. This study examines the various security challenges currently confronting cloud, fog, and edge computing. Furthermore, this paper compares cloud, fog, and edge computing and then discusses security issues in each.

2 Comparison Between Cloud, Fog, and Edge Computing 2.1 Cloud Computing Cloud computing is usually considered to be a cutting-edge computing model. Customers can use cloud computing to move their own apps and software to the cloud. Cloud computing is highly centralized, with each server only partially under the end user's control; this can interfere with operation and makes cloud computing unsuitable for executing certain applications. Edge computing is used to circumvent these constraints of cloud computing.

2.2 Fog Computing To serve IoT applications, fog computing combines cloud and edge computing. Both edge and fog computing bring cloud computing services closer to the end user. Fog computing nodes can be deployed in virtually any location, including factories, railroad tracks, and roadways. Fogs, like edge computing, include servers or nodes as well as end client devices. Switches, set-top boxes, surveillance cameras, and other devices are examples of fog nodes.

2.3 Edge Computing Edge computing's main purpose is to move the computing facility closer to the end user; all information processing takes place at the edge of the network. An edge network is made up of end devices and edge devices that work together. Edge devices include line switches, set-top boxes, gateways, bridges, base stations, wireless access points, and other items. The majority of edge computing calculations can be performed locally rather than in the cloud. Edge computing can take three forms: mobile edge computing, fog computing, and cloudlets [1]. A comparison between cloud, fog, and edge computing based on factors such as node devices, deployment, hardware, internode communication, latency, jitter, scalability, local awareness, mobility, location, and sharing population is presented in Table 1.

Table 1 Comparison between cloud, fog, and edge computing

Factor | Cloud computing | Fog computing | Edge computing
Node devices | Large server | Server running in base stations | Routers, switches, access points, gateways
Deployment | Core network | Near edge, edge | Network edge
Hardware | Server, user devices | Server, end devices | Heterogeneous server
Internode communication | Not applicable | Supported | Partial
Latency and jitter | High | Low | Low
Scalability | Average | High | High
Local awareness | Not applicable | Yes | Yes
Mobility | Not applicable | Yes | Yes
Location | Far away from center | Between devices and data center | Radio area network
Sharing population | Large | Small | Medium
Applications | E-commerce, online data storage, education | Smart grids, self-driven vehicles | Smart grid, predictive maintenance, gaming


3 Security Challenges in Cloud Computing As cloud computing involves a range of methodologies, it is exposed to a number of security threats, and security remains an essential concern across these structures. The following are among the major security concerns in cloud computing.

3.1 Data Breaches With advancing technology, an enormous amount of information is stored in cloud servers, which makes them a target for hackers. The more data exposed, the greater the damage to society and to clients. Exposure of a personal profile would be a relatively ordinary breach; breaches involving health data, trade secrets, or intellectual property rights, however, would bring far greater destruction. While cloud computing providers put security measures in place to protect their infrastructure, it is the clients' responsibility to get their data onto the cloud securely. Information should be verified and encrypted in multiple dimensions so that only the most trusted clients have access to it.
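To make the client-side responsibility above concrete, the snippet below sketches encrypting data before upload so that the provider only ever stores ciphertext. It is a minimal sketch using the `cryptography` package's Fernet recipe; key management and sharing, the genuinely hard part in practice, are out of scope here.

```python
from cryptography.fernet import Fernet

# The key stays with the client (or a key-management service);
# only the ciphertext is handed to the cloud provider.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: jane@example.com, ...")
# ... upload `ciphertext` to cloud storage ...

# Only holders of the key can recover the plaintext, so a breach
# of the storage provider exposes unreadable data.
plaintext = f.decrypt(ciphertext)
```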

3.2 Compromised Credentials and Broken Authentication Data breaches and other attacks often result from lax validation, weak passwords, or poor key and certificate management. Sometimes not only organizations but also individual users neglect to remove access after a task is finished: consider a Gmail account used at a public access point such as an Internet café, where forgetting to log out after use exposes one's private data. Multifactor verification, such as one-time passwords (OTPs), mobile verification, and security questions, makes it increasingly challenging for an attacker to log in using stolen credentials.
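As a sketch of the one-time-password factor mentioned above, the function below implements the basic TOTP construction (an HMAC over a time counter, in the style of RFC 6238) using only the Python standard library; a production system would additionally handle secret provisioning, clock drift, and rate limiting.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC-SHA1 over the current
    30-second time step, truncated to a short decimal code."""
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-secret-provisioned-at-enrollment"))
```

Because the code is derived from the current time step, a stolen value expires within seconds, which is what makes it useful as a second factor alongside a password.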

3.3 Hacked Interfaces and APIs Today, every cloud service provides APIs, which are used to handle cloud service management, orchestration, and monitoring. Weak interfaces and APIs can open the door to security issues affecting confidentiality, integrity, availability, and accountability. APIs and interfaces are the most exposed components of the framework, because they can be accessed via the open Web [2].

3.4 Data Locations When clients use cloud services, they will most likely have no idea where their information goes or where it is stored. Service providers should be investigated to determine whether they are able to store and process information with appropriate mediation, and whether they can follow the local security requirements that apply to their clients [3].

3.5 The APT Parasite An advanced persistent threat (APT) is a systematic hacking operation directed at a certain organization by a single person or a group. Malware is used in this attack cycle to introduce faults (viruses, bugs, and implants) into the framework. Consider the Stuxnet computer worm, which targeted the computer equipment used by Iran's nuclear program [4]. Because it employed malware code that propagates itself, via security holes, to all PCs linked to a network, it was classed as an APT. Direct attacks and USB devices preloaded with malicious code are two recent developments.

3.6 Cloud Service Abuses Cloud service abuse was identified as one of the most serious threats in 2013 [5], and the problem is still present. The basic concept behind cloud service abuse is that attackers employ Web-based services to decipher and deploy various codes in order to disrupt the cloud environment. When this happens, organizations might face issues such as shutdown of PCs or erasure of important information. To avoid this problem, one needs to maintain track of the resources, analyze basic facts, assess the dangers and weaknesses, and finally fix the problem with layered defense safeguards.

4 Security Challenges in Fog Computing Since fog computing is a promising development, cloud management extends to the network's edge. Fog computing, like cloud computing, delivers data processing and storage to end users. Fog computing is currently in its initial phases of development and poses the following security risks.

4.1 Authentication Since fog computing supports a great many end users via front-line fog nodes, authentication has become the primary security issue at the various levels of gateways. Messages and content must be authenticated in fog computing. According to Dong and Zhou [6], without proper authentication a man-in-the-middle attack could be mounted. They propose authentication based on public-key encryption together with decoy technology.

4.2 Malicious Attack Fog computing employs a variety of edge devices, so the data gathered must be handled with care, and it is difficult to account for malicious edge devices in a fog computing environment. Fog computing is vulnerable to malicious attacks which, if left unaddressed, can have a significant impact on the framework's performance. Denial of service (DoS) is one such malicious attack: the attacker connects a large number of devices to the IoT ecosystem and floods the administrator with requests for unlimited processing and storage [7].

4.3 Data Protection The volume of unprocessed data created by the Internet of things grows in lockstep with the number of IoT devices, and this information requires protection at both the processing and transmission levels. Processing information on IoT devices is unexpectedly difficult due to resource constraints [8]. When an IoT device captures information, it communicates it to a neighboring fog node; the data are then partitioned and directed to several fog nodes for processing. While handling the data, one must ensure their integrity.

4.4 Privacy Fog computing relies on the computing power of peripheral nodes to reduce the overall burden on the data center. Fog computing makes privacy protection more difficult, because fog nodes near end clients can collect sensitive information about the characteristics and usage of utilities such as core networks. In addition, fog nodes are distributed over a large area, which complicates centralized control. Edge nodes that are not effectively protected can become the entry point for an intruder into the organization [9].

4.5 Confidentiality When it comes to data protection in fog computing, there are two factors to consider: secure data storage and secure communication. For storage and processing, IoT devices send data to the nearest fog node, and unauthorized alterations may jeopardize it. As a result, fog computing faces a new challenge in the development of a secure framework that ensures the integrity of the IoT data handled by fog nodes. It is vital to secure the fog computing framework from eavesdroppers and malicious attackers while monitoring the communication between fog nodes themselves and between connected IoT devices and fog nodes, in order to assure the framework's reliability.
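One minimal building block for the integrity requirement above is a message authentication code on every reading an IoT device sends to its fog node. The sketch below assumes a pre-shared key between the device and the node; a real deployment would also need key distribution, replay protection (e.g., sequence numbers), and encryption for confidentiality.

```python
import hmac, hashlib, json

PRESHARED_KEY = b"device-42-key"  # provisioned out of band (assumption)

def sign_reading(reading: dict) -> dict:
    """Device side: attach an HMAC tag so the fog node can detect
    any tampering with the reading in transit."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(PRESHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_reading(message: dict) -> bool:
    """Fog-node side: recompute the tag and compare in constant time."""
    expected = hmac.new(PRESHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"sensor": "temp-7", "value": 23.4, "seq": 1001})
assert verify_reading(msg)  # any modified payload or tag fails here
```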

5 Security Challenges in Edge Computing This part attempts to describe the central difficulties in edge computing and to briefly address their critical security outcomes and impacts. For resource-constrained end devices, it is hard to hold a huge volume of data or to guarantee these devices' security. The security challenges faced in edge computing are:

5.1 Privacy and Security of Data Edge computing can be defined as a computing structure that spans multiple trust domains, where traditional information encryption and other conventional techniques are ineffective on their own. A lack of viable tools is one of the barriers to achieving information security and protection in edge computing: most tools for managing edge computing's various information assets are still insufficient. As a result, designing data encryption procedures and approval channels is especially important, and the ambiguity of the computation should be considered at the same time [10].


5.2 Access Control Because of the outsourcing inherent in edge computing, any malicious customer without an authorized identity could abuse the resources in the edge or core infrastructure if no powerful verification mechanisms are in place there. Adjacent edge devices communicate to access or exchange their content with one another; consequently, if attackers can gain access to one non-secured edge device, it becomes possible to control the neighboring nodes. This creates a critical security issue for protected access: for instance, the control system of the virtualization resources of the cloud or of edge servers can be accessed, abused, and modified if attackers obtain such privileges on edge machines.

5.3 Attack Mitigation Edge servers generally furnish edge clients with facilities that are considered error-prone in terms of security. Edge devices, much like the mobile cloud, cannot acquire the total network traffic needed for IoT-DDoS detection. Besides, a customer may have limited insight into a device's working condition, such as whether it has been shut down or hacked. Thus, if an attack happens on an edge device, most customers would not be equipped to notice it [11].

5.4 Detection for Anomalies The objective of anomaly detection is not only to identify abnormal statistics precisely, but also to minimize false positives by rapidly adapting to new developments in the observed data. If an anomaly is not managed appropriately, its impact can propagate from one edge node to all the other edge nodes, thereby diminishing the consistency of the entire edge computing structure. In addition, once the consequences of the anomaly have spread, it is difficult to find the true cause of its presence, leading to increased maintenance costs and delays in recovery [12].
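As a toy version of the adaptive anomaly detection described above, the sketch below keeps a sliding window of recent readings on an edge node and flags values that deviate strongly from the window statistics, so the baseline keeps adjusting to new trends in the data. The window size and threshold are illustrative assumptions.

```python
from collections import deque
import statistics

class RollingDetector:
    """Flag values far from the recent window's mean; the window
    slides forward, so the baseline adapts to new trends."""

    def __init__(self, window: int = 50, threshold: float = 4.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        anomalous = False
        if len(self.values) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.values)
            std = statistics.pstdev(self.values) or 1e-9
            anomalous = abs(x - mean) / std > self.threshold
        # Only fold non-anomalous points into the baseline, so a burst
        # of bad readings does not poison the statistics and propagate
        # a wrong baseline to the rest of the edge structure.
        if not anomalous:
            self.values.append(x)
        return anomalous
```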

6 Conclusion In conclusion, cloud computing is a novel invention built on the distributed computing paradigm. Although it has not yet reached its full potential, the concept's future depends heavily on the software industry. Fog computing is a good complement to cloud computing, since it extends cloud services to end users. Fog computing's characteristics, such as adaptability, proximity to end customers, responsiveness, heterogeneity, and sustained use, make it a meaningful platform for the Internet of things. The design and implementation of this technology may raise a number of security concerns. Having assessed the state of edge computing research, specialists acknowledge the value of these devices on the one hand, and the problems and complexity associated with them on the other. All things considered, there are still flaws and open challenges in how secure these devices are, and a significant amount of future development is needed in this area. This paper provides an overview of the different security concerns that cloud, fog, and edge computing confront. In the future, the research could be expanded to include more computing paradigms.

References 1. Dolui K, Datta SK (2017) Comparison of edge computing implementations: fog computing, cloudlet and mobile edge computing. In: 2017 global internet of things summit (GIoTS), pp 1–6 2. Kumar SN, Vajpayee A (2016) A survey on secure cloud: security and privacy in cloud computing. Am J Syst Softw 4(1):14–26 3. Dong X, Yu J, Luo Y, Chen Y, Xue G, Li M (2013) Achieving secure and efficient data collaboration in cloud computing. In: 2013 IEEE/ACM 21st international symposium on quality of service (IWQoS), pp 1–6. https://doi.org/10.1109/IWQoS.2013.6550281 4. Kuyoro SO, Ibikunie F, Awodele O (2011) Cloud computing security issues and challenges. Int J Comput Netw (IJCN) 3:247–255 5. King NJ, Raja VT (2012) Protecting the privacy and security of sensitive customer data in the cloud. Comput law Secur Rev 28:308–319 6. Dong MT, Zhou X (2016) Fog computing: comprehensive approach for security data theft attack using elliptic curve cryptography and decoy technology. Open Access Library J 3(09):1 7. Rios R, Roman R, Onieva JA, Lopez J (2017) From smog to fog: a security perspective. In: 2017 second international conference on fog and mobile edge computing (FMEC). IEEE, pp 56–61 8. Alrawais A, Alhothaily A, Hu C, Cheng X (2017) Fog computing for the internet of things: security and privacy issues. IEEE Internet Comput 21(2):34–42 9. Ashi Z, Al-Fawa’reh M, Al-Fayoumi M (2020) Fog computing: security challenges and countermeasures. Int J Comput Appl 175(15):30–36 10. Cao K, Liu Y, Meng G, Sun Q (2020) An overview on edge computing research. IEEE Access 8:85714–85728. https://doi.org/10.1109/ACCESS.2020.2991734 11. Xiao Y, Jia Y, Liu C, Cheng X, Yu J, Lv W (2019) Edge computing security: state of the art and challenges. Proc IEEE 107:1608–1631 12. Sun Z, Zhang X, Wang T, Wang Z (2020) Edge computing in internet of things: a novel sensing-data reconstruction algorithm under intelligent-migration strategy. IEEE Access 8:50696–50708. https://doi.org/10.1109/ACCESS.2020.2979874

An Efficient Deep Learning-Based Hybrid Architecture for Hate Speech Detection in Social Media Nilanjan Nath, Jossy P. George, Athishay Kesan, and Andrea Rodrigues

Abstract Social media has become an integral part of life as users are spending a significant amount of time networking online. Two primary reasons for its increasing popularity are ease of access and freedom of speech. People can express themselves without worrying about consequences. Due to this lack of restriction, however, cases of cyberbullying and hate speech are increasing on social media. Twitter and Facebook receive over a million posts daily, and manual filtration of this enormous volume is a tedious task. This paper proposes a deep learning-based hybrid architecture (CNN + LSTM) to identify hate speech using Stanford's GloVe, a pre-trained word embedding. The model has been tested under different parameters and compared with several state-of-the-art models. The proposed framework has outperformed existing models and achieved the best accuracy. Keywords Hate speech · Detection · Hybrid model · CNN · LSTM

N. Nath (B) · J. P. George · A. Kesan · A. Rodrigues CHRIST (Deemed to be University), Bengaluru, India e-mail: [email protected] J. P. George e-mail: [email protected] A. Kesan e-mail: [email protected] A. Rodrigues e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_30

1 Introduction The twenty-first century has seen a rapid and massive growth in social media users, and the number is increasing every day. This growth certainly has benefits in terms of ease of communication, business opportunities, and networking, but there is a downside too. Due to the absence of proper restrictions, people exploit this freedom to bully others and use hurtful speech. Such language and comments can make users emotionally vulnerable and may lead to anxiety, depression, or self-harm. There is


a need for timely identification of problematic posts so that they can be taken down immediately and proper action can be taken against the abuser as well. The use of hate speech has become one of the prime ways of spreading hatred over the Internet. There is, however, disagreement regarding the definition of hate speech, as its meaning depends primarily on the context of the said phrase and its subjective interpretation. On any given day, millions of posts are shared over social media platforms. Due to the enormous volume, filtering these posts manually is nearly impossible. At the same time, the lack of a proper automated filtration system is a problem for social media giants like Twitter and Facebook, which are also likely to face legal issues as a consequence of taking no action against hateful posts. The potential risk has given impetus to research work in this field. Initially, there was a trend of using machine learning models, but these models failed to perform well when the size of the dataset was increased. Deep learning architectures like CNN, LSTM, GRU, and BiLSTM, on the other hand, have proven to be consistent and better performers [1]. Deep learning frameworks may have outperformed machine learning models when it comes to hate speech detection, but none of the existing systems is fully accurate. To improve accuracy further, this paper proposes a hybrid architecture (CNN + LSTM). The pre-processing of the text data has been refined as much as possible to save computational time and resources. The effect of pre-trained word embeddings is investigated using Stanford's GloVe pre-embedding. The proposed framework has achieved a good accuracy score on an imbalanced dataset and outperformed the existing models. This section contains the introduction; Sect. 2 discusses existing research; the proposed model is explained in Sect. 3 and its algorithm in Sect. 3.1; the performance analysis is discussed in Sect. 4; and finally, Sect. 5 concludes the paper.

2 Literature Review Sreelakshmi et al. [2] proposed a machine learning model for detecting hate speech written in Hindi and English. On a dataset of 10,000 samples acquired from various sources, they examined three distinct models: linear SVM, SVM-RBF, and random forest, comparing Word2Vec, Doc2Vec, and fastText features across the three. According to the performance measures, fastText features with the SVM-RBF classifier provided the best feature representation. Roy et al. [3] used a deep convolutional neural network (DCNN) to create an automated system, working on a dataset from kaggle.com. Machine learning-based classifiers including LR, RF, NB, SVM, DT, GB, and KNN were used as baseline models, with SVM outperforming the others; CNN, LSTM, C-LSTM, and DCNN were the deep learning models utilized. The DCNN combined with a tenfold cross-validation approach yielded the best results, outperforming the other models with precision, recall, and F1-score values of 0.97, 0.88, and 0.92, respectively. Oriola and Kotzé [4] compiled a corpus of 21,350 English tweets from South African discourses on Twitter. On top of several ML methods such as LR, SVM, RF, and GB, they investigated three optimized machine learning schemes: hyperparameter optimization, ensembles, and multi-tier meta-learning. According to the performance table, multi-tier meta-learning models surpassed all other models in terms of consistency and balanced classification performance, achieving an overall accuracy of 0.671. De Souza et al. [5] used Twitter data to research automated identification of foul language. They applied various pre-processing approaches to prepare a dataset of 24,783 tweets. The first trials used two machine learning algorithms, linear SVM and Naive Bayes; Naive Bayes performed better, with 92% accuracy, followed by linear SVM with 90% accuracy. Oriola and Kotzé [6] used support vector machines with distinct n-gram characteristics to detect harmful South African tweets, testing several SVM kernels. A hybrid SVM kernel combining unigrams and bigrams with character n-grams of lengths 3 to 7 achieved the best results, with an accuracy of 90.95%; however, class imbalance hampered the models' performance. To detect hate speech, Setyadi et al. [7] used a sample of tweets and an artificial neural network optimized with the backpropagation algorithm. They tested their model by varying the train-test split ratio, the number of epochs, and the learning rate. The entire process yielded an average precision of 80.664%, recall of 90.07%, and accuracy of 89.47%. A convolutional neural network was employed by Elouali et al. [8] to detect hate speech in multilingual tweets, experimenting with different convolutional layers, dense layers, and filters and using character-level representation. Two versions of the dataset were used: the original edition had five languages, whereas the second had seven. The best accuracy reached was 0.8893 on the first dataset and 0.8300 on the second. Themeli et al. [9] constructed their own dataset from two separate sources. They used Naive Bayes, random forest, logistic regression, K-nearest neighbor, and artificial neural networks, with different sets of features such as BoW, GloVe, and NGG. BoW was found to be the most effective feature, followed by GloVe and NGG. According to the micro and macro F-measures, a combination of all features with logistic regression or a 3-layer neural network classifier produced the greatest results in terms of hate speech identification.


3 Methodology In this work, a novel method is employed to classify hate speech on social media, implemented using a hybrid model of CNN and LSTM. Tweets are taken as the input. Tweets are text data, and it is well known that text data include many unnecessary characters like # and @ as well as stopwords that do not provide much information. Such unwanted data need to be eliminated to reduce computational complexity as well as computational time. As shown in Fig. 1, word-level tokenization has been used for the process. Tokenization creates tokens of different lengths; to make all the token sequences the same length, padding is employed. Machine learning and deep learning models cannot work directly with text, which is why the text data are mapped to numeric form with the help of Stanford's GloVe pre-embedding. Finally, these data are passed to the models, which are further optimized to classify hateful tweets properly and more accurately.

b.

Dataset Description This paper’s dataset is collected from kaggle.com. There are 31,962 tweets in total, with 29,720 tweets being non-hate speech and 2242 tweets using hateful language. Pre-processing Dealing with text data is challenging, but pre-processing the data properly and converting it to its numeric form before feeding it to the model makes the process easier. The pre-processing has been done in the following steps: i.

Text pre-processing Text data contain several characters like punctuations, stopwords, and special characters like @ and # which do not add any extra information to the model. In this problem, we are dealing with tweets, a form of text data where we need to remove these unnecessary characters. In this text

Fig. 1 Block diagram of the proposed methodology

An Efficient Deep Learning-Based Hybrid Architecture …

ii.

iii.

iv.

351

pre-processing step, all the letters will be converted to lowercase, and then, the unnecessary characters will be removed from the dataset. Tokenization Tokens are one of the main building blocks of natural language processing. The primary objective of a token is to create vocabulary. In this case, word-level tokenization is used in which each unique token will have a unique word_index in a sequence of words. To encounter the problem of unknown words, the oov_token technique is used. Padding Tokenization creates different sequences for each tweet. As the length of tweets varies, the length of sequences will also vary. To maintain the same length for each tweet in the dataset, padding has been used. GloVe pre-embedding The main advantage of pre-embeddings is that they have the ability to capture the semantic and syntactic meaning of words. Stanford’s GloVe pre-embedding, an unsupervised learning algorithm, is used to in this case to obtain the vector representation of words. It maps each word in the corpus to a higher-dimensional space, where words with similar semantic meaning will be close to each other. A 100-dimensional space is used in this experiment.
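As a concrete illustration, the following is a minimal Keras sketch of pre-processing steps i-iv. It is not the authors' code: raw_tweets, the maximum sequence length, and the GloVe file name are assumed placeholders.

```python
import re
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def clean_tweet(text):
    """Lowercase and strip handles, hashtags, URLs, and non-letter characters."""
    text = text.lower()
    text = re.sub(r"@\w+|#\w+|http\S+", " ", text)   # remove @, # and URLs
    text = re.sub(r"[^a-z\s]", " ", text)            # keep letters only
    return re.sub(r"\s+", " ", text).strip()

tweets = [clean_tweet(t) for t in raw_tweets]        # raw_tweets: assumed list of strings

# Word-level tokenization with an out-of-vocabulary token for unknown words
tokenizer = Tokenizer(oov_token="<OOV>")
tokenizer.fit_on_texts(tweets)
sequences = tokenizer.texts_to_sequences(tweets)

# Pad all sequences to the same length
max_len = 50                                         # assumed maximum tweet length
X = pad_sequences(sequences, maxlen=max_len, padding="post")

# Build the embedding matrix from 100-d GloVe vectors
embeddings = {}
with open("glove.6B.100d.txt", encoding="utf8") as f:  # assumed local GloVe file
    for line in f:
        values = line.split()
        embeddings[values[0]] = np.asarray(values[1:], dtype="float32")

vocab_size = len(tokenizer.word_index) + 1
embedding_matrix = np.zeros((vocab_size, 100))
for word, i in tokenizer.word_index.items():
    if word in embeddings:
        embedding_matrix[i] = embeddings[word]
```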

3.1 Hybrid Model
In this study, three alternative models are compared, and a hybrid model combining CNN and LSTM is proposed.

a. Combination of CNN and LSTM
The hybrid model is built from a combination of CNN and LSTM. As shown in Fig. 2, the tweets are first taken as input. Then, as described in the proposed methodology, the text data are pre-processed and mapped to numeric form. The data are then transferred to the CNN layer: the convolution layer extracts features from the text input, and a max-pooling layer on top reduces the dimension of the data while retaining only the key characteristics. The retrieved features are then passed to the LSTM layer, whose memory unit and three gates store only the useful sequential information of the text input. The output of this layer is fed into a fully connected neural network that classifies the tweets. Because this is a binary classification problem, the output layer uses a sigmoid activation function.

Fig. 2 CNN + LSTM architecture diagram

b. Combination of CNN and GRU
The CNN + GRU hybrid model has a similar design to the CNN + LSTM model described in section a. The main difference is that instead of an LSTM layer, a GRU layer keeps the text data's sequential information before passing it on to the fully connected layer for classification. Compared to LSTM, the GRU layer uses one fewer gate, making it faster and more computationally efficient (Fig. 3).

Fig. 3 CNN + GRU architecture diagram
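The following is a minimal Keras sketch of such a CNN + LSTM architecture, assuming the vocab_size, embedding_matrix, and max_len values from the pre-processing sketch above. The exact layer sizes are illustrative, as the paper does not specify them.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                     LSTM, Dense, Dropout)

model = Sequential([
    # Frozen GloVe embeddings map word indices to 100-d vectors
    Embedding(vocab_size, 100, weights=[embedding_matrix],
              input_length=max_len, trainable=False),
    # Convolution layer extracts local n-gram features from the text
    Conv1D(filters=64, kernel_size=3, activation="relu"),
    # Max-pooling reduces dimension, keeping only the key characteristics
    MaxPooling1D(pool_size=2),
    # LSTM layer keeps the useful sequential information
    LSTM(64),
    Dropout(0.5),
    # Fully connected classifier; sigmoid output for binary classification
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Swapping the LSTM layer for a GRU layer (tensorflow.keras.layers.GRU) yields the CNN + GRU variant described in section b.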

4 Experimental Results and Analysis
The models were compared on the basis of testing accuracy and validation loss. Each model was run for 50 epochs; however, neural networks are notorious for overfitting, so dropout layers, early stopping, and a slow learning rate were employed to counter this problem. Table 1 presents the comparison between the training and testing accuracy of each model. It can be seen clearly in Table 1 that the training accuracy of the CNN model is the best across all three experimented models. The testing accuracy, however, is the deciding factor for this study, as it is calculated on unseen data; in terms of testing accuracy, the hybrid model of CNN and LSTM outperformed the other two. Table 2 lists the model loss during the training and testing phases for each model; the loss of each model was minimized as much as possible.
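A hedged sketch of these overfitting countermeasures is given below (dropout is already part of the model sketch above; here a slow learning rate and early stopping are added). X_train, X_test, y_train, and y_test are assumed to come from a prior train-test split; the hyperparameter values are illustrative.

```python
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Recompile with a slow learning rate, and stop training once the
# validation loss stops improving, restoring the best weights seen
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=50, batch_size=64,
                    callbacks=[early_stop])
```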

Table 1 Model accuracies

Model name    Training accuracy    Testing accuracy
CNN           0.9724               0.9515
CNN + LSTM    0.9614               0.9535
CNN + GRU     0.9659               0.9528

Table 2 Model loss

Model name    Training loss    Testing loss
CNN           0.0821           0.1437
CNN + LSTM    0.1023           0.1357
CNN + GRU     0.1118           0.1455


In Table 2, it can be seen that although the training loss is least for the CNN, the testing loss is least for the hybrid model of CNN and LSTM. In this paper, model accuracy and model loss are selected as performance metrics. Model accuracy plots help to track the difference between training and testing accuracy; if the gap between these two accuracies is large, the model is overfitting. To prevent that, the early stopping technique is used here. The model loss plots track the training and testing loss, both of which should be low. Accuracy and loss plots over training and testing help to track the performance of any model. Figure 4 portrays the difference between training and testing accuracy and loss for the CNN model. In Fig. 5, the plots compare training and testing accuracy and loss for the hybrid model of CNN and LSTM; the graphs clearly show that this model performs better than the CNN model. Figure 6 shows the same comparison for the CNN + GRU model. The graphs indicate that the hybrid model of CNN and GRU performs better than the CNN model in terms of accuracy and loss but fails to outperform the CNN + LSTM model in either case. The comparative analysis in Figs. 4, 5, and 6 helps to monitor the performance of all three models more efficiently.

Fig. 4 Model accuracy and loss plots for CNN

Fig. 5 Model accuracy and loss plots for CNN + LSTM


Fig. 6 Model accuracy and loss plots for CNN + GRU

The hybrid model of CNN and LSTM has outperformed the other two models, achieving the highest test accuracy and the lowest test loss.

5 Conclusion
The need for accurate detection of hate speech is growing every day. Users can publish freely on social media without worrying about the consequences because there are fewer restrictions; as a result, hate crimes are on the rise. Failure to detect hate speech accurately and in a timely manner can lead to poor mental health. This paper makes an advancement toward the identification of hateful tweets. The suggested CNN-LSTM hybrid model surpassed the other models evaluated, attaining an accuracy of 95.35%, which is higher than that of the previous works compared in this study.

References
1. Kapil P, Ekbal A, Das D (2020) Investigating deep learning approaches for hate speech detection in social media. ArXiv abs/2005.14690
2. Sreelakshmi K, Premjith B, Soman KP (2020) Detection of hate speech text in Hindi-English code-mixed data. Procedia Comput Sci 171:737–744
3. Roy PK, Tripathy AK, Das TK, Gao X (2020) A framework for hate speech detection using deep convolutional neural network. IEEE Access 8:204951–204962
4. Oriola O, Kotzé E (2020) Evaluating machine learning techniques for detecting offensive and hate speech in South African tweets. IEEE Access 8:21496–21509
5. De Souza GA, Da Costa-Abreu M (2020) Automatic offensive language detection from Twitter data using machine learning and feature selection of metadata. In: 2020 international joint conference on neural networks (IJCNN), pp 1–6
6. Oriola O, Kotzé E (2019) Automatic detection of toxic South African tweets using support vector machines with N-gram features. In: 2019 6th international conference on soft computing & machine intelligence (ISCMI), pp 126–130
7. Setyadi NA, Nasrun M, Setianingsih C (2018) Text analysis for hate speech detection using backpropagation neural network. In: 2018 international conference on control, electronics, renewable energy and communications (ICCEREC), pp 159–165
8. Elouali A, Elberrichi Z, Elouali N (2020) Hate speech detection on multilingual Twitter using convolutional neural networks. Rev Intell Artif 34:81–88
9. Themeli C, Giannakopoulos G, Pittaras N (2021) A study of text representations in hate speech detection. ArXiv abs/2102.04521

Application of Machine Learning Algorithms to Real-Time Indian Railways Data for Delay Prediction V. Asha and Heena Gupta

Abstract Railways have always been one of the fastest and most affordable means of transport for the common man in India. Due to various reasons, such as lack of infrastructure and the high rate of growth of the Indian population, trains in India get delayed, causing inconvenience to the general public. Moreover, festivals cause overcrowding, resulting in further delay. This paper therefore aims to analyze real-time data collected from the Indian Railways regional office and understand the predictors causing the delay. The paper also aims to predict whether a train will be delayed using k-nearest neighbor, decision trees, logistic regression, and random forest. Keywords Indian railways · Delay prediction · Real-time data · Exploratory data analysis · Machine learning

1 Introduction
The railways are an important means of transport for any Indian commuter. They are among the most economical and accessible methods for long-distance travel. In India, rail operations are handled by the Government of India as Indian Railways. The rail network in India is the fourth largest in the world and carries millions of passengers each day. But train delays cause hassles for people, and overcrowding during festivals further contributes to delay. If train delays during festivals can be predicted, additional trains or infrastructure can be added in advance, leading to more convenient travel.
Hence, this paper aims to process the raw data obtained from Indian Railways and derive the necessary graphs to understand the predictors. Train delay is also predicted. The paper is organized into sections: previous works done on similar topics are

V. Asha New Horizon College of Engineering, Bengaluru, India H. Gupta (B) Visvesvaraya Technological University, Belgaum, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_31


discussed first; then, the technical details and experiment details are presented, and the results are explained.

1.1 Objectives
• The paper aims to transform the raw data using data preprocessing techniques so that it can be readily used by machine learning models and produce effective results.
• The paper also aims to perform exploratory data analysis on the data to determine the right set of predictors.
• Visualization through graphs and charts helps to understand the large dataset in a concise manner.
• The prediction of train delay will lead to convenience for commuters.

2 Literature Survey
Earlier works on the topic explain the various algorithms and machine learning techniques applied. Various factors contributing to train delays, such as bad weather conditions and single-track delays, are discussed in [1]; initial delays lead to subsequent delays at other stations along a train's route, and the DBSCAN algorithm was applied to understand the propagation of delays. A hybrid model based on support vector regression and a Kalman filter to improve train movement prediction is discussed in [2]; these models were trained using offline data, updated with real-time information, and compared with other machine learning models. The paper [3] explains an ensemble model for delay prediction in trains; an ensemble model combines several other models, so random forest and kernel regression models were applied, and sensitivity to hyperparameters was also checked. The quality of service and the current delay of operational trains largely affected the prediction model. A bi-level random forest model for predicting train delays is proposed in [4]: at the primary level, a delay increase or decrease is predicted, and at the secondary level this result is quantified; a comparison with other machine learning algorithms was also carried out. In [5], the authors discuss how different machine learning models can be used to predict train delays; the backpropagation feed-forward network yielded the best results with the least error for the train delay prediction system. A delay at one station consequently results in delays at other stations as well [6]; hence, the authors propose a critical point search algorithm to find the primary delays.


Deep learning models are used to predict the delay, and the information obtained can be used at the operational level to control rail services. In [7], a Bayesian network model is used to understand the factors affecting train delay, incorporating domain knowledge; the performance of the model was evaluated using five-fold cross-validation. An LSTM technique for train delay prediction is explained in [8]; the model is found to have a better prediction effect than random forest and artificial neural network models, and various factors affecting delay are discussed. The authors in [9] discuss how a Bayesian network can be used to accurately estimate the delays for hypothetical stops and to correctly ascertain the delay interactions between stops; delay estimates for 'trip-destination-time' sets were recorded, and delay propagation patterns were studied. A hybrid of two neural network models, a fully connected neural network (FCNN) and long short-term memory (LSTM), is discussed in [10]; according to the authors, delays are affected by both operational and non-operational factors, and historical data together with the sequence of train operations helped them derive accurate results. Various machine learning algorithms, namely SVM, random forest, and Naïve Bayes, are applied for crime prediction in [11]. Infrastructure discrepancies can also be checked with certain pattern imaging techniques [12]; these can help address the infrastructure gaps that cause train delays [13].

3 Technical Details
Data preprocessing includes various steps to transform the data so that it can be easily analyzed by the algorithms to generate accurate, precise, and unbiased results. There are four major steps in data preprocessing:
• Data Cleaning: This helps to remove noisy or erroneous data.
• Data Integration: This step merges data from various sources, bringing them into a common format.
• Data Transformation: This involves normalization techniques so that one standard is followed, or generalization and aggregation techniques.
• Data Reduction: The database size can be too large to analyze, so a reduced representation is often used for analysis purposes. This can be done through techniques such as sampling or dimensionality reduction. For example, if train_no and train_name have a one-to-one relationship, then the data is redundant and can bias the results; hence the redundant attribute is removed.
Various machine learning algorithms for classification were applied, since the dataset has a labeled attribute indicating whether the train is delayed or not.


• K-Nearest Neighbor: A classification algorithm in which a new case is assigned to the category to which it is most similar.
• Logistic Regression: Used for solving classification problems; a new case is mapped to '0' or '1' using a logistic function. The logistic function is a sigmoid, where values above a threshold are marked '1' and values below it are marked '0'.
• Decision Trees: Decision trees are based on conditions leading to a leaf node, which holds the class to be predicted. The attribute on which each condition is based is selected using Gain Ratio, Information Gain, or Gini Index.
• Random Forest: This technique comes under ensemble learning, where several classifiers are combined to improve the performance of the model. Several decision trees are formed, and the majority class predicted across the trees is taken as the final result.
The Receiver Operating Characteristic (ROC) curve helps to ascertain the goodness of a particular model and is widely used as a statistical performance measure. The true positive rate is plotted against the false positive rate at various threshold values; the greater the Area Under the Curve (AUC), the better the model. A code sketch comparing these classifiers is given below.
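As a rough illustration of how these four classifiers can be compared on accuracy and AUC with scikit-learn; X and y stand for the preprocessed predictor matrix and the 0/1 delayed label and are assumptions here, not the authors' variable names.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# X: preprocessed predictor matrix, y: 0/1 delayed label (assumed prepared)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "k-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Trees": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    proba = clf.predict_proba(X_test)[:, 1]   # scores for the ROC curve
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.2f}, "
          f"AUC={roc_auc_score(y_test, proba):.2f}")
```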

4 Experiment
Data Collection
Real-time data was obtained from the Southern Division of the Indian Railway Department after various rounds of approvals from the Manager, the Senior Manager, and the Head of Railways, Bengaluru Division. The obtained data consisted of the daily commute of only the delayed trains for the years 2018 and 2019 (Fig. 1).
Data Preprocessing
The raw data as such was not fit for analysis since it did not include the arrival and departure times. Hence, extensive data cleaning was performed to include all necessary information. The actual timetable for the trains was obtained from the official Indian Railways website and merged with the obtained data, adding the scheduled arrival and departure times. For the final delay value, the scheduled time and the actual arrival time were compared, and the target column was marked 0 or 1 to indicate whether the train was delayed.

Fig. 1 Process flow


The fields are:
• Train_No - Indicates the train number
• Train Name - Displays the name of the given train
• From - The station from where the train originates
• To - The destination of the train
• Start Date - Date of the start of the journey
• Day - Day of the week, like Monday, Tuesday, etc.
• Sch Dep - The time of departure from the origin station as per the train timetable
• Sch Arr - The time of arrival at the destination station as per the train timetable
• SchArr Date - The date when the train is supposed to arrive at the destination
• Delayed Time - By how much time the train has arrived later than the scheduled time
• Act Arrival - The exact time when the train arrived
• Remarks - The reasons for delay
• Festival Name - Name of the major festivals celebrated in India
• Festival Month - Month of the year
• Delayed - Indicates whether the train has been delayed or not (1 = Yes, 0 = No)

Dimensionality reduction is performed for Train_no and Train_name, since the two attributes have a one-to-one correlation; hence, the Train_name attribute is removed.

Visualization
The dataset is visually analyzed to understand the influential attributes (an illustrative plotting sketch follows below).
• Count Plot: The count plot counts the number of occurrences of each value of an attribute. Here, the count plot for the 'from' attribute shows that trains originating from Bangalore are mainly considered.
• Box Plot: The box plot shows the distribution of data around the median and quartiles and is an effective way to identify outliers. Here, the box plot of delay in minutes against the train name shows which trains are delayed the most and the least.
• Categorical Plot: The categorical plot shows the data category-wise. Here, the frequency of delay for various trains is shown (Fig. 2).

Machine Learning Algorithms
The data was divided into training and test sets, and the algorithms were implemented using Python on the Jupyter platform. Initially, the k-nearest neighbor algorithm was applied; the ROC curve shows the performance of the model (Fig. 3). The logistic regression algorithm was then applied, giving an accuracy of 82%. For the same dataset, the decision trees algorithm was applied, producing a tree with condition nodes and result nodes; its ROC curve shows that the model did not fare well with the data (Fig. 4).
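A minimal seaborn sketch of the three plots described in the visualization step above; df and the column names follow the field list given earlier but are otherwise illustrative, not the authors' code.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# df: the preprocessed railway DataFrame (assumed); column names are illustrative
fig, axes = plt.subplots(1, 3, figsize=(18, 4))

# Count plot: number of records per originating station
sns.countplot(data=df, x="From", ax=axes[0])

# Box plot: distribution of delay (minutes) per train, exposing outliers
sns.boxplot(data=df, x="Train_No", y="Delayed Time", ax=axes[1])

# Categorical count of delayed vs. non-delayed records per train
sns.countplot(data=df, x="Train_No", hue="Delayed", ax=axes[2])

plt.tight_layout()
plt.show()
```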

Fig. 2 Visualization

Fig. 3 ROC curve for KNN algorithm

Fig. 4 ROC curve for decision trees algorithm


Fig. 5 Comparison of ROC curve for decision trees and RF algorithm

Table 1 Accuracy of the applied algorithms

Algorithm             Accuracy (%)
k-Nearest Neighbor    92
Logistic Regression   82
Decision trees        78
Random Forest         96

The last algorithm applied was Random Forest, which had the best accuracy among all the algorithms applied. Its ROC curve is shown in comparison with the ROC curve for decision trees (Fig. 5).

5 Results and Discussion
The preprocessing techniques prepared the dataset for applying machine learning algorithms for analysis purposes. The determined predictors also helped to forecast the right outcomes, thus helping to predict train delay. Various machine learning algorithms, namely KNN, logistic regression, decision trees, and random forest, were applied to study the problem (Table 1). The random forest technique showed the best accuracy in predicting train delay.

6 Conclusion
In conclusion, this study combines data from various sources to form a complete railways dataset, setting the right platform for applying machine learning algorithms. The raw data did not have inclusive information, and


combining data from various datasets made it complete for applying machine learning algorithms. The algorithms were applied to predict train delay. Neural networks can be further applied to this dataset to check the performance.

References
1. Wang P, Zhang Q (2019) Train delay analysis and prediction based on big data fusion. Trans Saf Environ 1:79–88
2. Huang P, Wen C, Fu L, Peng Q, Li Z (2019) A hybrid model to improve the train running time prediction ability during high-speed railway disruptions. Saf Sci 122
3. Nair R et al (2019) An ensemble prediction model for train delays. Transp Res Part C Emerg Technol 104:196–209
4. Nabian MA, Alemazkoor N, Meidani H (2019) Predicting near-term train schedule performance and delay using bi-level random forests. Transp Res Rec 2673:564–573
5. Arshad M, Ahmed M (2019) Prediction of train delay in Indian railways through machine learning techniques. Int J Comput Sci Eng 7:405–411
6. Wu J, Zhou L, Cai C, Dong J, Shen GS (2019) Towards a general prediction system for the primary delay in urban railways. In: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp 3482–3487
7. Huang P et al (2019) A Bayesian network model to predict the effects of interruptions on train operations. Transp Res Part C Emerg Technol 114:338–358
8. Wen C, Mou W, Huang P, Li Z (2019) A predictive model of train delays on a railway line. J Forecast, 470–488
9. Ulak MB, Yazici A, Zhang Y (2020) Analyzing network-wide patterns of rail transit delays using Bayesian network learning. Transp Res Part C Emerg Technol 119
10. Huang P et al (2020) Modeling train operation as sequences: a study of delay prediction with operation and weather data. Transp Res Part E Logist Transp Rev 141
11. Kshatri SS, Singh D, Narain B, Bhatia S, Quasim MT, Sinha GR (2021) An empirical analysis of machine learning algorithms for crime prediction using stacked generalization: an ensemble approach. IEEE Access 9
12. Asha V, Bhajantri NU, Nagabhushan P (2011) Automatic detection of defects on periodically patterned textures. J Intell Syst 20(3):279–303
13. Asha V, Bhajantri NU, Nagabhushan P (2011) GLCM-based chi-square histogram distance for automatic detection of defects on patterned textures. IJCVR 2(4):302–313

Computer Assisted Unsupervised Extraction and Validation Technique for Brain Images from MRI S. Vijayalakshmi, T. Genish, and S. P. Gayathri

Abstract Magnetic Resonance Imaging (MRI) of the human brain is a developing field in medical research because it assists in studying brain anomalies. To identify and analyze brain anomalies, the research requires brain extraction. Brain extraction is a significant clinical image-handling method for quick diagnosis with clinical observation and quantitative assessment. Automated extraction of the brain from MRI is challenging due to the similar pixel intensity information of various regions such as the skull, sub-head, and neck tissues. This paper presents a fully automated extraction of the brain area from MRI. The steps involved in the method include contrast limiting using histograms, background suppression using average filtering, a pixel region-growing method based on pixel intensity similarity, and filling of discontinuities inside the brain region. Twenty volumes of brain slices are utilized in this research. The outcome achieved by this method is validated by comparison with manually extracted slices. The test results confirm that this strategy can effectively segment the brain from MRI. Keywords Brain anomalies · Brain extraction · Contrast limited · Average filtering · Pixel intensity

S. Vijayalakshmi Department of Data Science, Christ—Deemed to be University, Pune, India T. Genish (B) School of Computing Science, KPR College of Arts Science and Research, Coimbatore, India e-mail: [email protected] S. P. Gayathri Department of Computer Science and Applications, The Gandhigram Rural Institute–Deemed to be University, Gandhigram, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_32


1 Introduction
Magnetic Resonance (MR) images [1] elucidate the specific anatomical structure of the adult brain and give volume measurements that are useful for analyzing brain-related anomalies [2–4]. MRI is a non-invasive and non-hazardous technique. Several automated methods for extracting the fetal brain [5, 6], adult brain [7], and brain substructures [8, 9] have been reported in the literature. Area-based techniques give the brain region as a collection of connected pixel information; these regions may also include parts of non-brain tissues. Quick investigation of brain structure requires accurate extraction of the brain from MRI. Existing brain extraction techniques [10] have their own benefits and drawbacks; hence, no single extraction strategy can be concluded to be ideal. In this paper, an automated brain extraction technique with a sequence of automated processes is proposed. A flowchart of the proposed method is given in Fig. 1.

Fig. 1 The steps involved in the proposed strategy

[Fig. 1 flowchart: Start → Input slice → Upgrade image contrast to identify brain border → Filter to remove degrading pixels → Stopping criteria selection → Pixel intensity similarity → Discontinuity filling inside brain mask → Extracted brain region → End]


2 Methods
The proposed strategy starts with a pre-processing step that incorporates contrast enhancement to improve the differentiation of the brain boundary, which helps to identify the brain border easily; an additional filter is applied to remove pixels that degrade the image. The technique then continues with stopping-criteria selection for region growing by comparing pixel intensity similarity.
Step 1: The human brain consists of different connected tissues such as white matter (WM) and gray matter (GM), and some non-brain portions possess an intensity similar to GM. Due to contrast limitations, brain tissues can appear with overlapping WM and GM intensities that are close to each other. In order to distinguish the brain region from these non-brain regions, the histogram of the input MRI slices is analyzed. The Contrast Limited Adaptive Histogram Equalization (CLAHE) method enhances the image contrast by remapping intensities to produce an improved image B from the input A:

B(i, j) = G(A(i, j))   (1)

Step 2: The skull-cerebrospinal fluid (CSF) interface needs to be flattened in order to accomplish the brain extraction. For this reason, a window is moved across the image B and the mean intensity is calculated; this iterative process is repeated for the entire image. The output image of this step is obtained by substituting the mean intensity value at the corresponding pixel of image B.
Step 3: The best possible threshold TH is found for the image B using the Otsu technique [11]. TH is the threshold that maximizes the between-class variance σ_B²(t):

TH = argmax_{0 ≤ t ≤ N−1} σ_B²(t)   (2)

The computed threshold TH is utilized as the halting condition in the subsequent brain-pixel connecting process.
Step 4: In MRI head scans, the brain appears around the center of the image and is the largest part of the head MRI. Consequently, the pixel at the middle position, obtained by halving the image dimensions, is chosen to initiate the process of finding similarity with surrounding pixels. The neighboring pixel values are compared with the middle value, and if the similarity condition is satisfied, the same procedure continues with those neighbors. The result of this step is the binary brain mask Bm, as shown in Fig. 2.
Step 5: Low-intensity pixels such as CSF, in comparison with nearby brain intensity values, are removed during the pixel-intensity similarity connecting process, creating openings within the binary brain image Bm. Such openings look like


Fig. 2 a input head MRI slice, b upgraded image, c blurred slice, d binary brain image, e discontinuity filled, f extracted adult brain

holes and thus need to be filled by applying a flood-filling process, from which the complete brain mask Fm is obtained as:

Fm(i, j) = 255, if Bm(i, j) lies inside a hole (zero region); 0, otherwise   (3)

Fm is the complete binary brain image, which is used to extract the adult brain area from the original MRI slice.
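The five steps above can be prototyped with standard image libraries. The sketch below is a minimal approximation under stated assumptions, not the authors' exact implementation: it uses OpenCV's CLAHE and Otsu threshold, approximates the center-seeded region growing with a connected-component selection, and fills holes with SciPy.

```python
import cv2
import numpy as np
from scipy.ndimage import binary_fill_holes

def extract_brain(slice_img):
    """Sketch of the pipeline: CLAHE -> mean filter -> Otsu threshold ->
    centre-seeded component selection (approximating region growing) ->
    hole filling. slice_img is assumed to be a 2-D uint8 MRI slice."""
    # Step 1: contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(slice_img.astype(np.uint8))

    # Step 2: average filtering to flatten the skull-CSF interface
    blurred = cv2.blur(enhanced, (5, 5))

    # Step 3: Otsu threshold as the halting condition
    th, binary = cv2.threshold(blurred, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 4: keep the connected component containing the image centre
    # (assumes the brain covers the central pixel, as stated in the paper)
    n, labels = cv2.connectedComponents(binary)
    cy, cx = slice_img.shape[0] // 2, slice_img.shape[1] // 2
    mask = (labels == labels[cy, cx]).astype(np.uint8)

    # Step 5: fill holes left by CSF to obtain the complete brain mask Fm
    mask = binary_fill_holes(mask).astype(np.uint8)
    return slice_img * mask
```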

2.1 Extraction Evaluation Metrics
To assess the performance of the proposed method, the Jaccard coefficient (Jc) [12], Dice similarity index (Dc) [13], sensitivity (Sn), and specificity (Sp) [14] are computed between the results obtained through the manual and the automated methods:

Jc(A, B) = |A ∩ B| / |A ∪ B|   (4)

Dc = 2Jc / (Jc + 1)   (5)

Sn = TP / (TP + FN)   (6)

Sp = TN / (TN + FP)   (7)
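A minimal NumPy sketch of how these four metrics can be computed from binary masks is given below; it assumes both masks are boolean arrays of the same shape and uses the standard TP/FP/FN/TN definitions.

```python
import numpy as np

def evaluation_metrics(auto_mask, gt_mask):
    """Jaccard, Dice, sensitivity and specificity between the automated
    mask and the manual ground truth (both boolean arrays)."""
    tp = np.logical_and(auto_mask, gt_mask).sum()
    fp = np.logical_and(auto_mask, ~gt_mask).sum()
    fn = np.logical_and(~auto_mask, gt_mask).sum()
    tn = np.logical_and(~auto_mask, ~gt_mask).sum()

    jc = tp / (tp + fp + fn)   # |A ∩ B| / |A ∪ B|
    dc = 2 * jc / (jc + 1)     # Dice computed from Jaccard
    sn = tp / (tp + fn)        # sensitivity
    sp = tn / (tn + fp)        # specificity
    return jc, dc, sn, sp
```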

Materials: T1-W MRI volumes are collected from the image web source IBSR [15]. The slices are 3.0 mm thick and 256 × 256 pixels in size. All MRI volumes are available along with the manually extracted ground-truth images at IBSR.

3 Results and Discussion
The performance of the proposed technique is assessed by testing it on the collected datasets, which are bundled with ground-truth brain images. For visual comparison of the results of the proposed technique with the gold-standard slices and an existing method, a randomly selected segmented brain portion is shown in Fig. 3. From Fig. 3, it is concluded that BET fails to separate the brain where the neck region is present; hence, BET under-segments the brain region from MRI. By comparing the outcomes with the ground-truth images, the specialists consider the outcomes acquired from the proposed automated technique to be in good agreement with the manual results. For a quantitative review of the performance of the proposed technique, the prominent metrics Jc, Dc, Sn, and Sp are computed between the gold standard and the results of the proposed strategy; the recorded values are given in Table 1. Table 1 shows that, out of 20 volumes, the value of Dc for almost all volumes ranges from 0.90 to 0.95, and for the remaining volumes it is 0.86 and 0.87. The average values of Sn and Sp are 0.9933 and 0.9327, respectively. In order to analyze the strength of the proposed method, the quantitative measures obtained by BET are shown in Table 2 alongside the values of the proposed procedure. From Table 2, it is noted that the technique produces a value of 0.9933 for Sn and 0.9327 for Sp. It is also noted that BET produces a greater FPRate and FNRate than the proposed technique, which indicates that BET includes additional irrelevant pixels while the proposed method produces accurate values. For superior performance, the TMRate ought to be as small as reasonably possible; the technique gives the lowest misclassification rate, suggesting that the strategy is a better automated technique for brain extraction.


Fig. 3 Visual comparison of results obtained by ground truth, BET, and the proposed method. Row 1 illustrates the randomly selected input slices, row 2 gives the gold-standard slices, row 3 gives BET, and row 4 shows the proposed method

4 Conclusion
This research paper proposes a simple and automated strategy to extract the adult brain from MRI. Test values reveal that the proposed strategy performs admirably and furnishes values that agree well with the ground-truth images. The outcomes are also practically better than those acquired by the popular existing technique.


Table 1 Recorded values obtained by the proposed technique

Volume no   Jc        Dc        Sn        Sp        FPRate    FNRate    PrAn      TMRate
1_24        0.9027    0.9597    0.9954    0.9405    0.0308    0.0094    99.2519   0.0803
2_4         0.9094    0.9518    0.9953    0.9596    0.0558    0.0003    99.2736   0.0961
4_8         0.9055    0.95      0.9948    0.9417    0.0745    0.0082    99.106    0.1027
5_8         0.8971    0.9054    0.9918    0.9404    0.0829    0.0095    99.0309   0.1125
6_10        0.9173    0.9566    0.9924    0.9409    0.0378    0.0029    99.1296   0.0869
7_8         0.9193    0.9575    0.995     0.944     0.05      0.0059    99.2421   0.086
8_4         0.79      0.8657    0.9456    0.9525    0.3408    0.0044    97.3528   0.3882
11_3        0.8909    0.9004    0.9962    0.9064    0.0174    0.0035    98.5875   0.1109
12_3        0.8792    0.9153    0.9982    0.8865    0.0082    0.0034    98.2237   0.1217
13_3        0.9128    0.9542    0.9986    0.9186    0.0064    0.0013    98.6064   0.0877
15_3        0.9166    0.9563    0.9926    0.9496    0.0396    0.0003    99.1317   0.0899
16_3        0.9106    0.9438    0.9969    0.9443    0.037     0.0026    98.2007   0.0727
17_3        0.9128    0.9448    0.997     0.9576    0.0265    0.0023    98.2046   0.0388
100_23      0.9094    0.9053    0.9946    0.9401    0.1171    0.0008    99.1291   0.157
110_3       0.9137    0.9543    0.9989    0.9003    0.0071    0.0005    98.9062   0.0868
111_2       0.9198    0.9577    0.9982    0.9103    0.0114    0.0006    98.9585   0.0811
112_2       0.9029    0.9596    0.9991    0.908     0.0055    0.0009    99.134    0.0775
191_3       0.9172    0.9472    0.9975    0.9572    0.0218    0.0007    99.371    0.0345
202_3       0.8259    0.9143    0.9969    0.9406    0.1077    0.0003    98.5239   0.137
205_3       0.8195    0.8707    0.9915    0.9154    0.2839    0.0005    97.7987   0.3685
Average     0.89363   0.93353   0.99333   0.93273   0.06811   0.00292   98.7581   0.12084

Table 2 Comparison of values obtained by standard segmentation metrics for BET and the proposed technique

Measures   BET      Proposed
Jc         0.7622   0.8936
Dc         0.8619   0.9335
Sn         0.9982   0.9933
Sp         0.9155   0.9327
FPRate     0.8780   0.0681

References
1. Dhawan AP (2003) Medical image analysis. John Wiley Publications and IEEE Press
2. Barkovich AJ, Millen KJ, Dobyns WB (2009) A developmental and genetic classification for midbrain-hindbrain malformations. Brain: A Journal of Neurology 132:3199–3230
3. McIlwain H, Bacherlard HS (1985) Biochemistry and the central nervous system, 5th edn. Churchill Livingstone, Edinburgh
4. Bankman IN (2000) Handbook of medical imaging, processing and analysis. Academic Press, London
5. Gayathri SP, Siva Shankar R, Somasundaram (2020) Fetal brain segmentation using improved maximum entropy threshold. Int J Innovative Technology and Exploring Engineering 9:1805–1812
6. Somasundaram K, Gayathri SP, Rajeswaran R, Dighe M (2018) Fetal brain extraction from magnetic resonance image (MRI) of human fetus. The Imaging Science J 66:133–138
7. Hwang J, Han Y, Park H (2011) Skull-stripping method for brain MRI using a 3D level set with a speedup operator. J Magn Reson Imaging 34:445–456
8. Genish T, Prathapchandran K, Gayathri SP (2019) An approach to segment the hippocampus from T2-weighted MRI of human head scans for the diagnosis of Alzheimer's disease using Fuzzy C-means clustering. Adv Algebra Analysis 1:333–342
9. Smith SM (2002) Fast robust automated brain extraction (BET). Hum Brain Mapp 17:143–155
10. Zu YS, Guang HY, Jing ZL (2002) Automated histogram-based brain segmentation in T1-weighted three-dimensional magnetic resonance head images. Neuroimage 17:1587–1598
11. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 9:62–66
12. Jaccard P (1912) The distribution of flora in the alpine zone. New Phytol 11:37–50
13. Dice L (1945) Measures of the amount of ecologic association between species. Ecology 26:297–302
14. Altman DG, Bland JM (1994) Statistics notes: diagnostic tests 1: sensitivity and specificity. BMJ 308:1552
15. International Brain Segmentation Repository, Center for Morphometric Analysis, Massachusetts General Hospital, CNY-6, Building 149, 13th Street, Charlestown, MA 02129, USA

A Hybrid Feature Selection for Improving Prediction Performance with a Brain Stroke Case Study D. Ushasree, A. V. Praveen Krishna, Ch. Mallikarjuna Rao, and D. V. Lalita Parameswari

Abstract In the contemporary era, artificial intelligence (AI) is making strides into every conceivable field. With these advancements, machine learning (ML) has found applications in the healthcare domain. Particularly for the diagnosis of diseases with a data-driven approach, ML algorithms are capable of learning from training data and making predictions. Many supervised ML algorithms exist with varied capabilities; however, they rely on the quality of training data, and unless that quality is ensured, they tend to deliver mediocre performance. To overcome this problem, feature engineering or feature selection methods came into existence. From the literature, it is understood that feature selection plays a crucial role in improving the performance of prediction models. In this paper, a hybrid feature selection algorithm is proposed to leverage the performance of machine learning models in brain stroke detection. The algorithm is named Hybrid Measures Approach for Feature Engineering (HMA-FE). It returns the best features that contribute toward the prediction of class labels. A prototype application is built to demonstrate the utility of the proposed framework and the underlying algorithms. The performance of prediction models is evaluated with and without feature engineering, and the empirical results show the significant impact of the proposed feature engineering on various brain stroke prediction models. The proposed framework adds value to Clinical Decision Support Systems (CDSS) used in healthcare units by supporting brain stroke diagnosis. Keywords Feature selection · Brain stroke detection · Machine learning models · Classification

D. Ushasree (B) · A. V. P. Krishna Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India e-mail: [email protected] A. V. P. Krishna e-mail: [email protected] Ch. M. Rao Department of CSE, GRIET, Hyderabad, India D. V. L. Parameswari Department of CSE, GNITS, Hyderabad, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 S. Shukla et al. (eds.), Data Science and Security, Lecture Notes in Networks and Systems 462, https://doi.org/10.1007/978-981-19-2211-4_33


1 Introduction
Advancements in artificial intelligence (AI)-based approaches have paved the way for improving the quality of diagnosis in the healthcare domain. Data-driven approaches that consider patients' vitals can be used in clinical decision support systems (CDSS). Supervised machine learning models in particular are widely used to detect various diseases in the healthcare industry, and brain stroke detection is one such application of machine learning (ML) algorithms. An important advantage of ML algorithms is that they can exploit historical data and the ground truth underlying the data. They suffer from mediocre performance, however, when the quality of training data is inadequate: due to redundant and irrelevant features, the algorithms take more time and their performance deteriorates. In order to overcome these issues, feature engineering or feature selection approaches came into existence. They are broadly classified into three categories, known as filter approaches, wrapper approaches, and hybrid approaches. Filter approaches are based on the relevance of features, correlating them with a dependent variable; wrapper approaches are based on finding the usefulness of features by applying a training model. The former is much faster than the latter. In this paper, a hybrid-measures-based method that comes under filter approaches is followed.
Many researchers have contributed to feature engineering, defining different approaches or measures to determine the best features. Liu et al. proposed a hybrid feature selection that combines phenotypic features and image features; the method is meant to improve prediction models for neuropsychiatric disorders. Leamy et al. focused on stroke detection and studied EEG features and the recovery of patients under BCI-mediated neurorehabilitation therapy. Kuhn et al. explored predictive models and the importance of feature engineering. Tasmin et al. explored different feature engineering methods such as tree-based feature selection, random forest, extra-tree classifiers, feature set generation, and classifier-based models.

1.1 Problem Definition Many supervised ML algorithms came into existence with varied capabilities. However, they do rely on quality of training data. Unless quality of training data is ensured, they tend to result in mediocre performance. To overcome this problem, feature engineering or feature selection methods came into existence. From the literature, it is understood that feature selection plays crucial role in improving performance of prediction models.


1.2 Motivation
Feature engineering with a hybrid approach could improve brain stroke prediction performance. This will have an impact on AI-based CDSSs used in real-world healthcare applications: when detection accuracy is improved, it adds to the Quality of Service (QoS) in healthcare units. This is the motivation behind this research.

1.3 Contribution
In this paper, a hybrid feature selection algorithm is proposed to leverage the performance of machine learning models in brain stroke detection. Our contributions are as follows.
1. An algorithm known as Hybrid Measures Approach for Feature Engineering (HMA-FE) is defined, based on a hybrid measure to identify the importance of features. It gives the features that contribute most efficiently to the prediction of class labels for brain stroke detection.
2. A prototype application is built to demonstrate the utility of the proposed framework and the underlying algorithms. The performance of prediction models is evaluated with and without feature engineering.

1.4 Organization of the Paper
The remainder of the paper is structured as follows. Section 2 reviews the literature on different aspects of machine learning for brain stroke detection. Section 3 presents the proposed framework and the underlying algorithm for efficient brain stroke detection. Section 4 presents the performance evaluation. Section 5 concludes the paper and gives directions for future work.

2 Related Work
This section reviews the literature on different brain stroke methods and feature selection approaches. Liu et al. [1] proposed a hybrid feature selection that combines phenotypic features and image features; the method is meant to improve prediction models for neuropsychiatric disorders. Katz et al. [2] proposed a methodology for comprehending the scale of prehospital stroke severity. Leamy et al. [3] focused on stroke detection and studied EEG features and the recovery of patients under BCI-mediated neurorehabilitation therapy; in terms of recovery of lost motor control, their research could help in improving patient recovery.


Vetten et al. [4] investigated side effects associated with stroke; in particular, they studied how acute corticospinal tract Wallerian degeneration results in poor motor outcome. Kuhn et al. [5] explored predictive models and the importance of feature engineering. Pathanjali et al. [6] studied different machine learning methods for ischemic stroke detection. Buck et al. [7] explored ischemic stroke detection methods and further investigated the relation between brain stroke and the development of neutrophilia in patients. Kamel et al. [8] studied the after-effects of brain stroke in patients, particularly cardiac monitoring; they did this in order to identify atrial fibrillation in such patients and analyzed the cost-effectiveness of their method. West et al. [9] focused on cryptogenic stroke and the frequency of migraine and patent foramen ovale in patients. Soltanpour et al. [10] proposed a methodology for automatic segmentation of ischemic stroke lesions with the help of CT perfusion maps, using a deep learning model named MultiRes U-Net. Tasmin et al. [11] explored different feature engineering methods such as tree-based feature selection, random forest, extra-tree classifiers, feature set generation, and classifier-based models. Lazer et al. [12] investigated the re-emergence of stroke deficits with respect to the Midazolam challenge. Tsivgoulis et al. [13] proposed a method to understand the mechanisms to prevent a second stroke by using cardiac rhythm monitoring. Parsons et al. [14] opined that thrombolysis is one of the approaches to mitigate the effects of stroke; to validate their study, they used diffusion- and perfusion-weighted MRI. From the literature, it is understood that feature selection plays a crucial role in improving the performance of prediction models. In this paper, a hybrid feature selection algorithm is proposed to leverage the performance of machine learning models in brain stroke detection.

3 Proposed Framework
The proposed methodology for brain stroke prediction is shown in Fig. 1. It has different mechanisms and underlying algorithms for brain stroke detection. The framework provides a functional flow that takes a brain stroke dataset as input and detects the stroke probability of patients; the algorithms enable efficient detection of stroke with a data-driven approach using supervised machine learning techniques. The brain stroke dataset is subjected to preprocessing, where the data is split into a training set (80%) and a testing set (20%). In the testing set, the class label is removed and used as ground truth, since the prediction models need to predict the class label. If all the features in the training data are used for learning, performance may deteriorate due to redundant and irrelevant features. In order to overcome this and improve the efficiency of feature selection or feature engineering, an algorithm named Hybrid Measures Approach for Feature Engineering (HMA-FE) is proposed, which makes use of two measures in combination: entropy and information gain. Entropy is a measure of the uncertainty related to a given random variable, while information gain computes the amount of change in entropy:


Fig. 1 Methodology for stroke prediction

H(X) = − Σ_{x∈X} p(x) log p(x)   (1)

H(Y) = − Σ_{y∈Y} p(y) log p(y)   (2)

As shown in Eqs. 1 and 2, both H(X) and H(Y) are associated with the entropy measure. They are used to find the information gain as in Eq. 3:

Gain = H(Y) − H(Y|X)   (3)

There is a hybrid measure known as symmetric uncertainty that combines both entropy and gain as in Eq. 4:

SU = 2 × Gain / (H(X) + H(Y))   (4)
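To make the measure concrete, the sketch below computes entropy, information gain, and symmetric uncertainty for discrete features with pandas. It is a minimal illustration, not the authors' implementation; the DataFrame df and the class column name "stroke" are assumed placeholders.

```python
import numpy as np
import pandas as pd

def entropy(series):
    """H(X) = -sum p(x) log2 p(x) over the values of a discrete feature."""
    p = series.value_counts(normalize=True)
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * Gain / (H(X) + H(Y)), with Gain = H(Y) - H(Y|X)."""
    h_x, h_y = entropy(x), entropy(y)
    # Conditional entropy H(Y|X): weighted entropy of y within each x group
    h_y_given_x = sum((len(g) / len(y)) * entropy(g)
                      for _, g in y.groupby(x))
    gain = h_y - h_y_given_x
    return 2 * gain / (h_x + h_y) if (h_x + h_y) > 0 else 0.0

# Rank the features of a brain stroke DataFrame df against the class label
# (column name "stroke" is illustrative):
# scores = {c: symmetric_uncertainty(df[c], df["stroke"])
#           for c in df.columns if c != "stroke"}
```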

This measure is finally used to determine the importance of a feature in the given brain stroke dataset. Since feature engineering involves finding the importance of different features and choosing the best contributing ones, this measure assumes significance. It is used in the proposed algorithm named Hybrid Measures Approach for Feature Engineering (HMA-FE).


An algorithm named Hybrid Measures Approach for Feature Engineering (HMA-FE) is proposed and implemented for the identification of the best features that contribute to the prediction of brain stroke.

Algorithm 1: Hybrid Measures Approach for Feature Engineering (HMA-FE)
Input: Brain stroke dataset D, importance threshold th
Output: Selected features F
1. Start
2. Initialize a map M for holding SU values
3. F ← Extract All Features