Expert Clouds and Applications: Proceedings of ICOECA 2021 (Lecture Notes in Networks and Systems, 209). ISBN 981162125X, 9789811621253

This book features original papers from the First International Conference on Expert Clouds and Applications (ICOECA 2021), organized by GITAM School of Technology, GITAM University, Bengaluru, India, from February 18 to 19, 2021.


English · 744 pages [703] · 2021


Table of contents:
Preface
Acknowledgements
Contents
Editors and Contributors
Minimizing Energy Through Task Allocation Using Rao-2 Algorithm in Fog Assisted Cloud Environment
1 Introduction
2 Related Work
3 Optimization Algorithms
3.1 Rao-2 Algorithm
3.2 MaxmaxUtil
3.3 ECTC
4 Problem Formulation
4.1 Task Scheduling in Fog Environment
4.2 Energy Consumption Model
5 An Illustration
6 Simulation Results
7 Conclusion
References
Sensitivity Context Aware Privacy Preserving Disease Prediction
1 Introduction
2 Related Work
3 The Proposed System
4 Sensitivity Context-Aware Privacy-Preserving Disease Prediction
5 Results and Discussions
6 Conclusion
References
Designing a Smart Speaking System for Voiceless Community
1 Introduction
2 Related Works
3 Methodology
3.1 System Design
3.2 Dataset Process
3.3 Hardware Design
4 Experimental Results and Discussion
4.1 Communication from Common Person to Dumb Person
4.2 Output Analysis Graph
5 Conclusion
References
ANNs for Automatic Speech Recognition—A Survey
1 Introduction
2 Overview of Artificial Neural Network
3 Survey of ANN for Speech Recognition
4 Applications of ANN for ASR
5 Conclusions
References
Cybersecurity in the Age of the Internet of Things: An Assessment of the Users’ Privacy and Data Security
1 Introduction
2 Review of Related Work
3 Findings and Discussion
4 Conclusion
References
Application of Artificial Intelligence in New Product Development: Innovative Cases of Crowdsourcing
1 Introduction
2 Review of Related Work
3 Findings and Discussion
4 Conclusion
References
The Effect of the Topology Adaptation on Search Performance in Overlay Network
1 Introduction
2 Overlay Network
3 Related Work
4 System Model of Work
5 Simulation Result and Analysis
5.1 Simulation Setting
6 Conclusion and Future Work
References
Flanker Task-Based VHDR Dataset Analysis for Error Rate Prediction
1 Introduction
2 Related Works
3 System Design and Implementation
3.1 Data Acquisition
3.2 Loading Data
3.3 Creating an Approach
4 Result Analysis
4.1 Offline Analysis
4.2 Online Analysis
5 Conclusion
6 Future Work
References
Integrating University Computing Laboratories with AWS for Better Resource Utilization
1 Introduction
2 Literature Survey
3 Integration Model
4 Implementations
5 Conclusion
References
IoT-Based Control of Dosa-Making Robot
1 Introduction
2 Problem Statement
3 Related Works
4 Wireless Protocols Used for Controlling IoT Devices
5 System Architecture
6 Design and Implementation
7 IoT-Based Control
8 Conclusion
References
Classification of Idiomatic Sentences Using AWD-LSTM
1 Introduction
2 Background
2.1 ULMFiT
2.2 AWD-LSTM
2.3 Fast AI
2.4 Algorithm for Proposed Work
3 Literature Survey
3.1 Works on Idiom Recognition
3.2 Works on Text Classification
4 The Proposed Transfer Learning Model Experimental Setup
4.1 Experimental Setup and Analysis
4.2 Building Language Model
4.3 Language Model Fine-Tuning
4.4 Building Classifier Model
5 Experimental Results
6 Conclusion and Future Work
References
Developing an IoT-Based Data Analytics System for Predicting Soil Nutrient Degradation Level
1 Introduction
2 Literature Review
3 Proposed Method
3.1 IoT Devices
3.2 Feature Reduction Using Principal Component Analysis (PCA)
3.3 ML Algorithm for Prediction
4 Linear Regression (LR)
5 Decision Tree (DT)
6 Random Forest (RF)
7 Conclusion
References
A Survey on Cloud Resources Allocation Using Multi-agent System
1 Introduction
2 Motivation
3 Cloud Definition
3.1 Types of Services in Cloud
3.2 Deployment Models in Cloud
4 Multi-agent System (MAS)
5 Literature Survey
6 Summary
7 Conclusions
References
IoT-Based Smart Helmet for Riders
1 Introduction
2 Literature Survey
3 Purpose
4 Scope and Objective
5 Module Description
5.1 Helmet Side
5.2 Bike Side
6 Components
6.1 Hardware Requirements
6.2 Software Requirements
7 Implementation of Proposed Methodology
8 Result
References
Collision Avoidance in Vehicles Using Ultrasonic Sensor
1 Introduction
2 Existing Modules
2.1 Central Acceleration Sensor
2.2 Recognition of Vehicles by Using Wireless Network Sensor
2.3 Automobile Collision Avoidance System
2.4 An Obstacle Detection and Tracking System Using a 2D Laser Sensor
2.5 Intelligent Mechatronic Braking System
3 Purpose
4 Specific Objective
5 Project Justification
6 Project Scope
7 Hardware Components
7.1 Arduino Uno
7.2 Ultrasonic Sensors
7.3 Servo Motor
7.4 LCD Display
7.5 Buzzer
8 Methodology
8.1 Central Control Module
8.2 Obstacle Sensing Unit
8.3 Driver Circuit
8.4 Warning System
9 Working Principle
10 Result
11 Conclusion
References
Privacy Challenges and Enhanced Protection in Blockchain Using Erasable Ledger Mechanism
1 Introduction
2 Privacy Challenges in Blockchain
2.1 Transaction Privacy Challenge
2.2 Identity Privacy Challenge
3 Existing Privacy-Preserving Mechanisms for Blockchain
3.1 Coin Mixing Services
3.2 Zero-Knowledge Proof Mechanism
3.3 Ring Signature
3.4 Homomorphic Encryption
3.5 Hidden Address
3.6 Trusted Execution Environment
3.7 Secure Multi-party Computation
4 Applications of Existing Privacy Mechanisms
4.1 Mixcoin
4.2 Blindcoin
4.3 Dash
4.4 CoinShuffle
4.5 TumbleBit
4.6 Zerocoin
4.7 Zerocash
4.8 CryptoNote
5 Summary of Existing Blockchain Privacy Mechanisms
6 Discussion on Limitations of Existing Privacy Mechanisms
7 Proposed Method for Efficient Privacy Enhancement
8 Future Research Directions
9 Conclusion
References
Data Privacy and Security Issues in HR Analytics: Challenges and the Road Ahead
1 Introduction
2 Related Work
3 Findings and Discussion
4 Conclusion
References
Narrow Band Internet of Things as Future Short Range Communication Tool
1 Introduction
2 NB-IoT
3 Applications
4 Existing Technology
5 Challenges
5.1 Low Device Cost
5.2 Long Battery Life
5.3 Achieving Lightweight Protocols
6 Application
7 Simulator
8 Algorithms
9 Conclusion
References
Lightweight Logic Obfuscation in Combinational Circuits for Improved Security—An Analysis
1 Introduction
2 Literature Survey
2.1 Design for Security Methods
3 Lightweight Obfuscation Approach
3.1 Fault Impact-Based Critical Node Identification
4 Results and Analysis
4.1 Area and Power Analysis
4.2 Output Corruption
5 Conclusion
References
Analysis of Machine Learning Data Security in the Internet of Things (IoT) Circumstance
1 Introduction
2 Literature Review
3 Implementation of Machine Learning IoT Security
3.1 A Challenge of Data Security
3.2 Analysis of ML with Data Security
3.3 Analysis of ML with Data Security
4 Conclusion
References
Convergence of Artificial Intelligence in IoT Network for the Smart City—Waste Management System
1 Introduction
2 Solid Waste Management with IoT Technology
3 Logistics and Collection of Waste
4 Introduction of Smart Bins
5 Vehicles
6 Research Gap
7 Conclusion and Future Scope
References
Energy Aware Load Balancing Algorithm for Upgraded Effectiveness in Green Cloud Computing
1 Introduction
2 Related Work
3 Green Cloud Computing
4 Load Balancing
5 Existing Algorithm
5.1 One-sided Random Sampling
5.2 Bumble Bee Foraging
6 Proposed Algorithm
7 Conclusion
References
Review on Health and Productivity Analysis in Soil Moisture Parameters
1 Introduction
1.1 Soil Based on Texture
1.2 Soil Based on Color
2 Materials and Methods
3 Results and Discussion
4 Conclusion
References
Soft Computing-Based Optimization of pH Control System of Sugar Mill
1 Introduction
2 Genetic Algorithm
3 Simulated Annealing Technique
4 PI Controller Design and Optimization Using GA and SA
5 Novelty
6 Conclusion
References
A Comparative Analysis of Various Data Mining Techniques to Predict Heart Disease
1 Introduction
2 Literature Review
3 Data Mining Techniques
4 Proposed Methodology
5 Results and Discussions
6 Conclusion
References
Performance Comparison of Various Controllers in Different SDN Topologies
1 Introduction
2 Related Work
3 Background
3.1 Mininet
3.2 Topologies
3.3 Round-Trip Time
3.4 SDN Controllers
4 Implementation
4.1 Test Bed Description
4.2 System Requirement
5 Result and Analysis
6 Conclusion and Future Scope
References
Preprocessing of Datasets Using Sequential and Parallel Approach: A Comparison
1 Introduction
2 Background Theory
2.1 Data Preprocessing (DPP)
2.2 Message Passing Interface
3 Research Methodology
3.1 Data Preprocessing Using Sequential Approach
3.2 Data Preprocessing Using Parallel Approach
4 Results and Analysis
4.1 Dataset Used
4.2 Running Environment
4.3 Results
5 Conclusion and Future Enhancement
References
Blockchain Technology and Academic Certificate Authenticity—A Review
1 Introduction
2 Literature Review
2.1 What is Blockchain?
2.2 How Does Blockchain Work
2.3 What are the Main Advantages of Blockchain?
2.4 Key Weakness Related with Blockchain Technology
2.5 Blockchain Application in Academic Certificate Authenticity
3 Related Works
3.1 RQ1: What are the Challenges Today to Treat the Fake Degree Certificates Problem?
3.2 RQ2: What are the Solutions and Applications Suggested with Blockchain Technology for Educational System?
3.3 RQ3: How to Inspect and Find Innovative Ways to Propose a Replacement Technology?
4 Technical Challenges
5 Conclusion
References
Word Significance Analysis in Documents for Information Retrieval by LSA and TF-IDF using Kubeflow
1 Introduction
2 Literature Review
3 Implementation of the System
3.1 Term Frequency (TF)
3.2 Inverse Document Frequency (IDF)
3.3 Term Frequency–Inverse Document Frequency
3.4 Probabilistic Frequency
3.5 LSA with Dimensionality Reduction
4 Results
5 Conclusions
References
A Detailed Survey on Deep Learning Techniques for Real-Time Image Classification, Recognition and Analysis
1 Introduction
2 Literature Survey
2.1 Recognition of Face with the Help of CNN
2.2 Single-Shot Detector for Object Detection
3 Related Work
3.1 Deep Learning Techniques
4 Crime Detection with Deep Learning Techniques
4.1 Various Malwares
5 Conclusion and Future Scope
References
Pole Line Fault Detector with Sophisticated Mobile Application
1 Introduction
2 Literature Survey
3 Methodology
3.1 Current and Potential Transformers
3.2 Bridge Circuit
3.3 Controller
3.4 Liquid Crystal Display and Buzzer
3.5 Lora Module
3.6 ESP
3.7 GPS
4 Working
5 Results and Discussion
6 Graphical Representation
7 Merits and Demerits
8 Conclusion and Future Scope
References
Learning of Advanced Telecommunication Computing Architecture (ATCA)-Based Femto Gateway Framework
1 Introduction
2 Literature Survey
3 Methodology
3.1 3G Femtocell Network Architecture
3.2 Small Cell Gateway on ATCA Platform
3.3 ATCAv2
3.4 ATCAv2 Shelf (Front)
3.5 ATCAv2 Shelf (Rear)
3.6 Fabric and Base Dual Star Topology
3.7 Dual Star Topology
3.8 Bono Blades (Application Blades)
3.9 Malban Blades (Switching Blades)
4 4G Gateway
4.1 Directory Services
5 Conclusion
References
Infected Inflation and Symptoms Without the Impact of COVID-19 with AHP Calculation Method
1 Introduction
2 Materials and Methods
3 Discussion
4 Database and User Interface Print Screen
5 Conclusion
References
Smartphone Application Using Fintech in Jakarta Transportation for Shopping in the Marketplace
1 Introduction
2 Literature Review
3 Proposed Systems
4 Conclusion
References
Secured Student Portal Using Cloud
1 Introduction
1.1 DES Algorithm
2 Literature Survey
2.1 Automatic Attendance Recording System Using Mobile Telephone
2.2 Data Encryption Performance Based on Blowfish
2.3 Security Analysis of Blowfish Algorithm
2.4 Transactions on Cloud Computing
3 Theoretical Analysis
3.1 Possible Attacks on DES
3.2 Enhanced Blowfish Algorithm
3.3 Working
3.4 Pseudocode of Blowfish Algorithm
4 Experimental Analysis
5 Conclusion
References
Expert System for Determining Welding Wire Specification Using Naïve Bayes Classifier
1 Introduction
2 Research Method
3 Result and Discussion
4 Design and Implementation
5 Conclusion
References
Analysis of Market Behavior Using Popular Digital Design Technical Indicators and Neural Network
1 Introduction
2 Literature Review
3 Methodology
4 Results and Discussions
4.1 Data Analysis
5 Experimental Analysis
6 Conclusion
References
Distributed Multimodal Aspective on Topic Model Using Sentiment Analysis for Recognition of Public Health Surveillance
1 Introduction
2 Related Works
3 Proposed Methodology
3.1 Data Collection
3.2 Preprocessing
3.3 Similarity Metrics
3.4 Hadoop Distributed Latent Dirichlet Allocation (HdiLDA)
3.5 HdinNMF
3.6 HdiPLSA
4 Experimental Analyses and Discussion
4.1 Evaluation Performance
5 Conclusion and Future Scope
References
RETRACTED CHAPTER: Cluster-Based Multi-context Trust-Aware Routing for Internet of Things
1 Introduction
2 Literature Survey
3 Network Model
4 Trust Evaluation Scheme
4.1 Communication Trust
4.2 Nobility Trust
4.3 Data-Related Trust
4.4 Overall Trust (OT)
4.5 Routing Process
5 Experimental Evaluations
5.1 Simulation Environment
5.2 Simulation Results
6 Conclusion
References
Energy-Efficient Cluster-Based Trust-Aware Routing for Internet of Things
1 Introduction
2 Literature Survey
3 Network Model
4 Trust Evaluation Scheme
4.1 Communication Trust
4.2 Nobility Trust
4.3 Data-Related Trust
4.4 Overall Trust (OT)
4.5 Routing Process
5 Experimental Evaluations
5.1 Simulation Environment
5.2 Simulation Results
6 Conclusion
References
Toward Intelligent and Rush-Free Errands Using an Intelligent Chariot
1 Introduction
2 Literature Survey
3 Methodology
4 Results
5 Conclusion
References
NOMA-Based LPWA Networks
1 Introduction
2 System Model
2.1 Algorithm
3 Numerical Results
4 Conclusions
References
Copy Move Forgery Detection by Using Integration of SLIC and SIFT
1 Introduction
2 Literature Review
3 Proposed System
3.1 Pre-processing
3.2 SLIC Segmentation
3.3 Feature Extraction and Description
3.4 Thresholding
4 Simulation Result and Analysis
4.1 Dataset
4.2 Evaluation Metrics
4.3 Results
5 Conclusion
References
Nonlinear Autoregressive Exogenous ANN Algorithm-Based Predicting of COVID-19 Pandemic in Tamil Nadu
1 Introduction
2 Modeling of ANN
2.1 Feed-Forward Neural Network (FFNN)
2.2 Cascaded Feed-Forward Neural Network
2.3 Nonlinear Autoregressive Exogenous (NARX)-ANN Model
3 Design of ANN for COVID-19 Prediction
4 Result and Discussions
5 Conclusion
References
Detecting Image Similarity Using SIFT
1 Introduction
2 Related Work
3 Procedure
3.1 Pixel Difference
4 SIFT Algorithm
4.1 Extrema Detection
4.2 Keypoint Selection
4.3 Keypoint Descriptors
4.4 Keypoint Matching
5 Pseudocode
6 Results
7 Conclusion
References
A Secure Key Agreement Framework for Cloud Computing Using ECC
1 Introduction
1.1 Motivation and Contribution
1.2 Organization of the Paper
2 Preliminaries
2.1 Background of Elliptic Curve Group
3 The Proposed Framework
3.1 Initialization Phase
3.2 User Registration Phase
3.3 Authentication and Key Agreement Phase
4 Security Analysis
4.1 Session Key Security
4.2 Message Authentication
4.3 Replay Attack
4.4 Man in the Middle Attack
4.5 Known Session-Specific Attack
4.6 Key Freshness
4.7 Known Key Secrecy
5 Performance Analysis
6 Conclusion and Future Direction
References
Web-Based Application for Freelance Tailor
1 Introduction
2 Previous Research
3 The Proposed Model
4 Conclusion
References
Image Retrieval Using Local Majority Intensity Patterns
1 Introduction
2 Related Study
2.1 Local Binary Pattern
2.2 Local Ternary Pattern
3 Proposed Approach
3.1 Construct the Histogram
4 Results and Discussions
4.1 Precision and Recall
4.2 F-Measure
5 Future Work and Conclusion
References
A Comprehensive Survey of NOMA-Based Cooperative Communication Studies for 5G Implementation
1 Introduction
2 Overview of Cooperative Communication
2.1 Analysis of NOMA in the Context of 5G
2.2 Cooperative Communication Structure
2.3 Types of Cooperative Communication (CC)
3 Survey of Existing Study on NOMA-Based Cooperative Communication Technique
3.1 NOMA-Based Cooperative Communication Technique Based on V2X Multicasting
3.2 NOMA-Based Multicast for High Reliability in 5G Networks
3.3 Cooperative Relaying in 5G Networks
4 Design Observation and Identified Research Gaps
4.1 Key Design Observation from the Survey
4.2 Identifying Key Research Areas from NOMA-Based Cooperative Communication
5 Conclusion
References
Analytical Study on Load Balancing Algorithms in Cloud Computing
1 Introduction
2 Cloud Service Models
2.1 Software as a Service (SaaS)
2.2 Platform as a Service (PaaS)
2.3 Infrastructure as a Service (IaaS)
3 Load Balancing
3.1 Load Balancing Types
3.2 Challenges
3.3 Load Balancing Metrics
3.4 Degree of Imbalance
4 Existing Load Balancing Algorithms
4.1 An Adaptive Firefly Load Balancing Algorithm
4.2 Bio-inspired Load Balancing Algorithm
4.3 Dynamic Load Balancing Algorithm
4.4 Throttled Load Balancing Algorithm
4.5 Hybridization of the Meta-heuristic Load Balancing Algorithm
4.6 Improved Dynamic Load Balancing Algorithm Based on Least-Connection Scheduling
4.7 Load Balancing Algorithm for Allocation of Resources
4.8 Water Flow-Like Load Balancing Algorithm
4.9 First Fit Decreasing Method
5 Conclusion
References
Smart Driving Assistance Using Arduino and Proteus Design Tool
1 Introduction
1.1 CAN Protocol
2 Proposed System
3 Hardware Components
3.1 Arduino UNO
3.2 LM 35
3.3 LDR Photoresistor
3.4 Gas Sensor
3.5 MCP2551 Transceiver
3.6 Beeper
3.7 LCD
3.8 Ultrasonic Sensor
3.9 Infrared Sensor
4 Flow Diagram of the System
5 Experimental Results
6 Conclusion
7 Future Scope
References
Fog Computing—Characteristics, Challenges and Job Scheduling Survey
1 Introduction
2 Related Works
3 Fog Computing Characteristics
3.1 Heterogeneity Support
3.2 Geographical Distribution
3.3 Decentralization
3.4 Mobility Support
3.5 Proximity to Users
3.6 Real-Time Interaction
3.7 Edge Location
3.8 Low Latency
4 Challenges of Fog Computing
4.1 Models for Programming
4.2 Reliability and Security
4.3 Management of Resources
4.4 Scheduling the Jobs
4.5 Heterogeneity
4.6 Incentives
4.7 Standardization
4.8 Fog Servers
4.9 Consumption of Energy
5 Applications
5.1 Smart Traffic Control System
5.2 Video Streaming Systems
5.3 The Pressure of Water in Dams
5.4 Health Care
5.5 Mobile Big Data Analytics
5.6 Connected Car
6 State of the Art
6.1 Allocation of Resources and Scheduling in Fog Computing
6.2 Fault Tolerance in Fog Computing
6.3 Mobile Computing Based on Fog
6.4 Tools for Simulating in Fog Computing
7 Job Scheduling Approaches
8 Conclusion
References
A Review on Techniques of Radiation Dose Reduction in Radiography
1 Introduction
2 Literature Survey
2.1 Model-Based Deep Medical Imaging
2.2 Model-Based Iterative CT Image Reconstruction on GPUs
2.3 Feasibility of Low-Dose Computed Tomography with Testicular Cancer Victims
2.4 High Completion Model-Based Image Restoration
2.5 Model-Based Iterative Image Restoration
2.6 Nonlinear Diffusion Method
2.7 Dose Index and Dose Length Product Level
3 Discussion
4 Conclusion and Future Scope
References
Application of NLP for Information Extraction from Unstructured Documents
1 Introduction
2 Literature Review
3 Methods
3.1 Custom SpaCy Pipeline
3.2 Document Categorization
3.3 Job Vacancy Information Parsing
4 Conclusion and Future Work
References
Scoring of Resume and Job Description Using Word2vec and Matching Them Using Gale–Shapley Algorithm
1 Introduction
2 Literature Review
3 Methods
3.1 Scoring
3.2 Matching Using Gale–Shapley Algorithm
4 Conclusion and Future Works
References
Author Index

Lecture Notes in Networks and Systems 209

I. Jeena Jacob · Francisco M. Gonzalez-Longatt · Selvanayaki Kolandapalayam Shanmugam · Ivan Izonin, Editors

Expert Clouds and Applications
Proceedings of ICOECA 2021

Lecture Notes in Networks and Systems Volume 209

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems, and others.

Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure, which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them.

Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/15179

I. Jeena Jacob · Francisco M. Gonzalez-Longatt · Selvanayaki Kolandapalayam Shanmugam · Ivan Izonin, Editors

Expert Clouds and Applications
Proceedings of ICOECA 2021

Editors

I. Jeena Jacob, Department of Computer Science and Engineering, GITAM School of Technology, GITAM University, Bengaluru, India
Francisco M. Gonzalez-Longatt, Electrical Power Engineering, University of South-Eastern Norway, Notodden, Norway
Selvanayaki Kolandapalayam Shanmugam, Department of Mathematics and Computer Science, Concordia University Chicago, River Forest, IL, USA
Ivan Izonin, Department of Publishing Information Technologies, Lviv Polytechnic National University, Lviv, Ukraine

ISSN 2367-3370 · ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-981-16-2125-3 · ISBN 978-981-16-2126-0 (eBook)
https://doi.org/10.1007/978-981-16-2126-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022, corrected publication 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

We are honored to dedicate the proceedings of ICOECA 2021 to all its participants, organizers, and editors.

Preface

It is a great privilege for us to present the proceedings of the First International Conference on Expert Clouds and Applications (ICOECA 2021) to the readers, delegates, and authors of the conference event. We greatly hope that all readers will find it useful and resourceful for their future research endeavors.

The First International Conference on Expert Clouds and Applications (ICOECA 2021) was held at GITAM School of Technology, GITAM University, Bengaluru, India, from February 18 to 19, 2021, with the aim of providing a platform for researchers, academicians, and industrialists to discuss state-of-the-art research opportunities, challenges, and issues in intelligent computing applications. The revolutionizing scope and rapid development of computing technologies will also create new research questions and challenges, which in turn result in the need to create and share new research ideas and encourage significant awareness in this futuristic research domain.

The proceedings of ICOECA 2021 shines a spotlight on creating a new research landscape for intelligent computing applications. The support received and the research enthusiasm have truly exceeded our expectations, which made us all the more satisfied and delighted to present these proceedings. The response from researchers, both from India and overseas, was overwhelming: we received 221 manuscripts from prestigious universities and institutions across the globe, of which 55 were shortlisted based on the reviewing outcomes and conference capacity constraints.

We would like to express our deep gratitude and commendation to the entire conference review team, who helped us select the high-quality research works included in the ICOECA 2021 proceedings published by Springer. We would also like to extend our appreciation to the organizing committee members for their continual support. We are pleased to thank Springer for publishing the proceedings of ICOECA 2021 and maximizing the reach of the research manuscripts across the globe.


Finally, we wish all the authors and participants of the conference a grand success in their future research endeavors.

Dr. I. Jeena Jacob, Associate Professor, Department of Computer Science and Engineering, GITAM School of Technology, GITAM University, Bengaluru, India
Dr. Francisco M. Gonzalez-Longatt, Professor, Electrical Power Engineering, University of South-Eastern Norway, Notodden, Norway
Dr. Selvanayaki Kolandapalayam Shanmugam, Associate Professor, Department of Mathematics and Computer Science, Concordia University Chicago, River Forest, IL, USA
Dr. Ivan Izonin, Department of Publishing Information Technologies, Lviv Polytechnic National University, Lviv, Ukraine

Acknowledgements

We wish to express our gratitude and appreciation to our beloved President Sri M. Sri Bharath and dynamic Chancellor Prof. K. Ramakrishna Rao for their constant encouragement and guidance in successfully organizing the first conference in this series. They have strongly encouraged us in conducting the conferences at GITAM School of Technology, GITAM University, Bengaluru. We would also like to thank Vice-Chancellor Prof. K. Siva Ramakrishna, Pro-Vice-Chancellor of the Bengaluru campus Prof. D. S. Rao, and all the board of directors for their perpetual support during the conduct of the First ICOECA 2021, from which this proceedings book has evolved into existence.

We would like to thank all our review board members, who assured research novelty and quality from the initial to the final selection phase of the conference. We are especially thankful to our eminent speakers, reviewers, and guest editors. Furthermore, we acknowledge all the session chairs of the conference event for their seamless contribution in evaluating the oral presentations of the conference participants. We would also like to mention the hard work put in by the authors to revise and update their manuscripts according to the review comments to meet the conference and publication standards. We would like to thank Springer for their consistent and timely assistance throughout the publication process.


Contents

Minimizing Energy Through Task Allocation Using Rao-2 Algorithm in Fog Assisted Cloud Environment (p. 1)
Lalbihari Barik, Sudhansu Shekhar Patra, Shalini Kumari, Anmol Panda, and Rabindra Kumar Barik

Sensitivity Context Aware Privacy Preserving Disease Prediction (p. 11)
A. N. Ramya Shree, P. Kiran, N. Mohith, and M. K. Kavya

Designing a Smart Speaking System for Voiceless Community (p. 21)
Saravanan Alagarsamy, R. Raja Subramanian, Praveen Kumar Bobba, Pradeep Jonnadula, and Sanath Reddy Devarapalli

ANNs for Automatic Speech Recognition—A Survey (p. 35)
Bhuvaneshwari Jolad and Rajashri Khanai

Cybersecurity in the Age of the Internet of Things: An Assessment of the Users’ Privacy and Data Security (p. 49)
Srirang K. Jha and S. Sanjay Kumar

Application of Artificial Intelligence in New Product Development: Innovative Cases of Crowdsourcing (p. 57)
Srirang K. Jha and Sanchita Bansal

The Effect of the Topology Adaptation on Search Performance in Overlay Network (p. 65)
Muntasir Al-Asfoor and Mohammed Hamzah Abed

Flanker Task-Based VHDR Dataset Analysis for Error Rate Prediction (p. 75)
Rajesh Kannan Megalingam, Sankardas Kariparambil Sudheesh, and Vamsy Vivek Gedela

Integrating University Computing Laboratories with AWS for Better Resource Utilization (p. 87)
Kailash Chandra Bandhu and Ashok Bhansali

IoT-Based Control of Dosa-Making Robot (p. 97)
Rajesh Kannan Megalingam, Hema Teja Anirudh Babu Dasari, Sriram Ghali, and Venkata Sai Yashwanth Avvari

Classification of Idiomatic Sentences Using AWD-LSTM (p. 113)
J. Briskilal and C. N. Subalalitha

Developing an IoT-Based Data Analytics System for Predicting Soil Nutrient Degradation Level (p. 125)
G. Najeeb Ahmed and S. Kamalakkannan

A Survey on Cloud Resources Allocation Using Multi-agent System (p. 139)
Fouad Jowda and Muntasir Al-Asfoor

IoT-Based Smart Helmet for Riders (p. 153)
N. Bhuvaneswary, K. Hima Bindu, M. Vasundhara, J. Chaithanya, and M. Venkatabhanu

Collision Avoidance in Vehicles Using Ultrasonic Sensor (p. 169)
N. Bhuvaneswary, V. Jayapriya, V. Mounika, and S. Pravallika

Privacy Challenges and Enhanced Protection in Blockchain Using Erasable Ledger Mechanism (p. 183)
M. Mohideen AbdulKader and S. Ganesh Kumar

Data Privacy and Security Issues in HR Analytics: Challenges and the Road Ahead (p. 199)
Shweta Jha

Narrow Band Internet of Things as Future Short Range Communication Tool (p. 207)
T. Senthil and P. C. Vijay Ganesh

Lightweight Logic Obfuscation in Combinational Circuits for Improved Security—An Analysis (p. 215)
N. Mohankumar, M. Jayakumar, and M. Nirmala Devi

Analysis of Machine Learning Data Security in the Internet of Things (IoT) Circumstance (p. 227)
B. Barani Sundaram, Amit Pandey, Aschalew Tirulo Abiko, Janga Vijaykumar, Umang Rastogi, Adola Haile Genale, and P. Karthika

Convergence of Artificial Intelligence in IoT Network for the Smart City—Waste Management System (p. 237)
Mohamed Ishaque Nasreen Banu and Stanley Metilda Florence

Energy Aware Load Balancing Algorithm for Upgraded Effectiveness in Green Cloud Computing (p. 247)
V. Malathi and V. Kavitha

Review on Health and Productivity Analysis in Soil Moisture Parameters (p. 261)
M. Meenakshi and R. Naresh

Soft Computing-Based Optimization of pH Control System of Sugar Mill (p. 271)
Sandeep Kumar Sunori, Pushpa Bhakuni Negi, Amit Mittal, Bhawana, Pratul Goyal, and Pradeep Kumar Juneja

A Comparative Analysis of Various Data Mining Techniques to Predict Heart Disease (p. 283)
Keerti Shrivastava and Varsha Jotwani

Performance Comparison of Various Controllers in Different SDN Topologies (p. 297)
B. Keerthana, Mamatha Balachandra, Harishchandra Hebbar, and Balachandra Muniyal

Preprocessing of Datasets Using Sequential and Parallel Approach: A Comparison (p. 311)
Shwetha Rai, M. Geetha, and Preetham Kumar

Blockchain Technology and Academic Certificate Authenticity—A Review (p. 321)
K. Kumutha and S. Jayalakshmi

Word Significance Analysis in Documents for Information Retrieval by LSA and TF-IDF using Kubeflow (p. 335)
Aseem Patil

A Detailed Survey on Deep Learning Techniques for Real-Time Image Classification, Recognition and Analysis (p. 349)
K. Kishore Kumar and H. Venkateswerareddy

Pole Line Fault Detector with Sophisticated Mobile Application (p. 361)
K. N. Thirukkuralkani, K. Abarna, M. Monisha, and A. Niveda

Learning of Advanced Telecommunication Computing Architecture (ATCA)-Based Femto Gateway Framework (p. 375)
P. Sudarsanam, G. V. Dwarakanatha, R. Anand, Hecate Shah, and C. S. Jayashree

Infected Inflation and Symptoms Without the Impact of COVID-19 with AHP Calculation Method (p. 393)
Nizirwan Anwar, Ahmad Holidin, Galang Andika, and Harco Leslie Hendric Spits Warnars

Smartphone Application Using Fintech in Jakarta Transportation for Shopping in the Marketplace (p. 403)
Diana Teresia Spits Warnars, Ersa Andhini Mardika, Adrian Randy Pratama, M. Naufal Mua’azi, Erick, and Harco Leslie Hendric Spits Warnars

Secured Student Portal Using Cloud (p. 413)
Sunanda Nalajala, Gopalam Nagasri Thanvi, Damarla Kanthi Kiran, Bhimireddy Pranitha, Tummeti Rachana, and N. Laxmi

Expert System for Determining Welding Wire Specification Using Naïve Bayes Classifier (p. 431)
Didin Silahudin, Leonel Leslie Heny Spits Warnars, and Harco Leslie Hendric Spits Warnars

Analysis of Market Behavior Using Popular Digital Design Technical Indicators and Neural Network (p. 445)
Jossy George, Akhil M. Nair, and S. Yathish

Distributed Multimodal Aspective on Topic Model Using Sentiment Analysis for Recognition of Public Health Surveillance (p. 459)
Yerragudipadu Subbarayudu and Alladi Sureshbabu

RETRACTED CHAPTER: Cluster-Based Multi-context Trust-Aware Routing for Internet of Things (p. 477)
Sowmya Gali and N. Venkatram

Energy-Efficient Cluster-Based Trust-Aware Routing for Internet of Things (p. 493)
Sowmya Gali and N. Venkatram

Toward Intelligent and Rush-Free Errands Using an Intelligent Chariot (p. 511)
N. J. Avinash, Hrishikesh R. Patkar, P. Sreenidhi, Sowmya Bhat, Renita Pinto, and H. Rama Moorthy

NOMA-Based LPWA Networks (p. 523)
Gunjan Gupta and Robert Van Zyl

Copy Move Forgery Detection by Using Integration of SLIC and SIFT (p. 531)
Kavita Rathi and Parvinder Singh

Nonlinear Autoregressive Exogenous ANN Algorithm-Based Predicting of COVID-19 Pandemic in Tamil Nadu (p. 545)
M. Venkateshkumar, A. G. Sreedevi, S. A. Lakshmanan, and K. R. Yogesh kumar

Detecting Image Similarity Using SIFT (p. 561)
Kurra Hima Sri, Guttikonda Tulasi Manasa, Guntaka Greeshmanth Reddy, Shahana Bano, and Vempati Biswas Trinadh

A Secure Key Agreement Framework for Cloud Computing Using ECC (p. 577)
Adesh Kumari, M. Yahya Abbasi, and Mansaf Alam

Web-Based Application for Freelance Tailor (p. 585)
Diana Teresia Spits Warnars, Muhammad Lutfan Nugraha, and Harco Leslie Hendric Spits Warnars

Image Retrieval Using Local Majority Intensity Patterns (p. 601)
Suresh Kumar Kanaparthi and U. S. N. Raju

A Comprehensive Survey of NOMA-Based Cooperative Communication Studies for 5G Implementation (p. 619)
Mario Ligwa and Vipin Balyan

Analytical Study on Load Balancing Algorithms in Cloud Computing (p. 631)
Manisha Pai, S. Rajarajeswari, D. P. Akarsha, and S. D. Ashwini

Smart Driving Assistance Using Arduino and Proteus Design Tool (p. 647)
N. Shwetha, L. Niranjan, V. Chidanandan, and N. Sangeetha

Fog Computing—Characteristics, Challenges and Job Scheduling Survey (p. 665)
K. Nagashri, S. Rajarajeswari, Iqra Maryam Imran, and Nanda Devi Shetty

A Review on Techniques of Radiation Dose Reduction in Radiography (p. 681)
B. N. Shama and H. M. Savitha

Application of NLP for Information Extraction from Unstructured Documents (p. 695)
Shushanta Pudasaini, Subarna Shakya, Sagar Lamichhane, Sajjan Adhikari, Aakash Tamang, and Sujan Adhikari

Scoring of Resume and Job Description Using Word2vec and Matching Them Using Gale–Shapley Algorithm (p. 705)
Shushanta Pudasaini, Subarna Shakya, Sagar Lamichhane, Sajjan Adhikari, Aakash Tamang, and Sujan Adhikari

Author Index (p. 715)

Editors and Contributors

About the Editors

I. Jeena Jacob is working as Professor in the Computer Science and Engineering Department at GITAM University, Bangalore, India. She actively participates in the development of the research field by conducting international conferences, workshops, and seminars. She has published many articles in refereed journals. She has guest-edited an issue of the International Journal of Mobile Learning and Organization. Her research interests include mobile learning and computing.

Francisco M. Gonzalez-Longatt is currently Full Professor in electrical power engineering at Institutt for elektro, IT og kybernetikk, Universitetet i Sørøst-Norge, Norway. His academic qualifications include first class in Electrical Engineering of Instituto Universitario Politécnico de la Fuerza Armada Nacional, Venezuela (1994), Master of Business Administration (Honors) of Universidad Bicentenaria de Aragua, Venezuela (1999), Ph.D. in Electrical Power Engineering from the Universidad Central de Venezuela (2008), Postgraduate Certificate in Higher Education Professional Practice from Coventry University (2013), and Diploma in Leadership and Management (ILM Level 3), Loughborough University (2018). He is Vice-President of the Venezuelan Wind Energy Association, Fellow of the Higher Education Academy (UK), Senior Member of the IEEE, Member of The Institution of Engineering and Technology—The IET (UK), and Member of the International Council on Large Electric Systems—CIGRE. His research interests include innovative (operation/control) schemes to optimize the performance of future energy systems.

Selvanayaki Kolandapalayam Shanmugam is currently working as Associate Professor in Computer Science at Concordia University Chicago, USA. She has over 15 years of experience lecturing on theoretical subjects and on experimental and instructional procedures for laboratory subjects. She has presented research articles at national and international conferences and in journals. Her research interests include image processing, video processing, soft computing techniques, intelligent computing, Web application development, object-oriented programming languages such as C++ and Java, scripting languages such as VBScript and JavaScript, data science, algorithms, data warehousing and data mining, neural networks, genetic algorithms, software engineering, software project management, software quality assurance, enterprise resource planning, information systems, and database management systems.

Ivan Izonin graduated from Lviv Polytechnic National University in 2011 (M.Sc. in Computer Sciences) and Ivan Franko National University of Lviv in 2012 (M.Sc. in Economic Cybernetics). He got his Ph.D. in Artificial Intelligence in 2016. He is currently working as Assistant at the Publishing Information Technologies Department, Lviv Polytechnic National University, Lviv, Ukraine. He has published more than 80 publications, including 8 patents for inventions and 1 tutorial. His major research interests are computational intelligence, high-speed neural-like systems, and non-iterative machine learning algorithms. Dr. Izonin participated in developing two international Erasmus+ projects, MASTIS and DocHub. He is a Technical Committee Member of several international conferences.

Contributors

K. Abarna, Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
Mohammed Hamzah Abed, College of Computer Science and IT, University of Al-Qadisiyah, Diwaniyah, Iraq
Aschalew Tirulo Abiko, School of Computing and Informatics, Wachemo University, Hosana, Ethiopia
Sajjan Adhikari, Nagarjuna College of Information Technology, Bangalamukhi, Lalitpur, Nepal
Sujan Adhikari, NAMI College, Gokarneshwor, Kathmandu, Nepal
D. P. Akarsha, Ramaiah Institute of Technology, Bangalore, India
Muntasir Al-Asfoor, College of Computer Science and IT, University of Al-Qadisiyah, Al Diwaniyah, Iraq
Saravanan Alagarsamy, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
Mansaf Alam, Department of Computer Science, Jamia Millia Islamia, New Delhi, India
R. Anand, CMR Institute of Technology Bengaluru, Bangalore, India
Galang Andika, Functional Motor Vehicle Inspector, Transportation Department of South Tangerang City, Tangerang, Indonesia

Nizirwan Anwar, Faculty of Computer Science, Esa Unggul University, Jakarta, Indonesia
S. D. Ashwini, Ramaiah Institute of Technology, Bangalore, India
N. J. Avinash, Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India
Venkata Sai Yashwanth Avvari, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
Mamatha Balachandra, Manipal Institute of Technology, Manipal, India
Vipin Balyan, Department of Electrical, Electronics and Computer Engineering, Cape Peninsula University of Technology, Cape Town, South Africa
Kailash Chandra Bandhu, Medi-Caps University, Indore, India
Shahana Bano, Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Sanchita Bansal, University School of Management Studies, Guru Gobind Singh Indra Prastha University, New Delhi, India
B. Barani Sundaram, Bule Hora University, Bule Hora, Ethiopia
Lalbihari Barik, Department of Information Systems, Faculty of Computing and Information Technology in Rabigh, King Abdul Aziz University, Jeddah, Kingdom of Saudi Arabia
Rabindra Kumar Barik, School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
Ashok Bhansali, OP Jindal University, Raigarh, India
Sowmya Bhat, Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India
Bhawana, Graphic Era University, Dehradun, India
N. Bhuvaneswary, Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
Praveen Kumar Bobba, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
J. Briskilal, SRM Institute of Science and Technology, Chengalpattu District, Tamil Nadu, India
J. Chaithanya, Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
V. Chidanandan, Department of Computer Science and Engineering, Dr. Ambedkar Institute of Technology, Bangalore, Karnataka, India

Hema Teja Anirudh Babu Dasari, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
Sanath Reddy Devarapalli, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
G. V. Dwarakanatha, BMS Institute of Technology and Management, Bangalore, India
Erick, Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Sowmya Gali, Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (KLEF), Vaddeswaram, Guntur, Andhra Pradesh, India
S. Ganesh Kumar, SRM Institute of Science and Technology, Kattankulathur, India
Vamsy Vivek Gedela, Department of EE, University of Cincinnati, Cincinnati, OH, USA
M. Geetha, Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
Adola Haile Genale, Department of Information Science, Bule Hora University, Bule Hora, Ethiopia
Jossy George, Department of Computer Science, CHRIST (Deemed to be University), Lavasa, Pune, India
Sriram Ghali, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
Pratul Goyal, Bhimtal Campus, Graphic Era Hill University, Dehradun, India
Gunjan Gupta, Department of Electrical, Electronics and Computer Engineering, French South African Institute of Technology, Cape Peninsula University of Technology, Cape Town, South Africa
Harishchandra Hebbar, School of Information Sciences, Manipal, India
K. Hima Bindu, Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
Ahmad Holidin, Master of Information Technology, University of Raharja, Tangerang, Indonesia
Iqra Maryam Imran, Computer Science and Engineering, M S Ramaiah Institute of Technology, Bengaluru, India
M. Jayakumar, Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India
S. Jayalakshmi, Department of Computer Applications, VISTAS, Chennai, India

V. Jayapriya, Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
C. S. Jayashree, BMS Institute of Technology and Management, Bangalore, India
Shweta Jha, Apeejay School of Management, New Delhi, India
Srirang K. Jha, Apeejay School of Management, New Delhi, India
Bhuvaneshwari Jolad, ECE, KLE Dr. MSSCET, Belagavi, India
Pradeep Jonnadula, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
Varsha Jotwani, Department of Computer Science and Information Technology, Rabindranath Tagore University, Bhopal, M.P., India
Fouad Jowda, College of Computer Science and IT, University of Al-Qadisiyah, Al Diwaniyah, Iraq
Pradeep Kumar Juneja, Graphic Era University, Dehradun, India
S. Kamalakkannan, Department of Computer Science, School of Computing Sciences, Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai, India
Suresh Kumar Kanaparthi, Department of Computer Science and Engineering, National Institute of Technology Warangal, Warangal, India
P. Karthika, Kalasalingam Academy of Research and Education, Krishnankoil, India
V. Kavitha, Department of Computer Applications, Hindusthan College of Arts and Science, Coimbatore, India
M. K. Kavya, Department of Computer Science and Engineering, RNSIT, Bengaluru, India
B. Keerthana, School of Information Sciences, Manipal, India
Rajashri Khanai, Department of Electronics and Communication, KLE Dr. MSSCET, Belagavi, India
Damarla Kanthi Kiran, Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
P. Kiran, Department of Computer Science and Engineering, RNSIT, Bengaluru, India
K. Kishore Kumar, Department of CSE, Vardhaman College of Engineering, Hyderabad, India

Preetham Kumar, Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
S. Sanjay Kumar, University School of Management Studies, Guru Gobind Singh Indra Prastha University, New Delhi, India
Adesh Kumari, Department of Mathematics, Jamia Millia Islamia, New Delhi, India
Shalini Kumari, School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India
K. Kumutha, Tagore College of Arts and Science, VISTAS, Chennai, India
S. A. Lakshmanan, Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, India
Sagar Lamichhane, Herald College Kathmandu, Hadigaun Marg, Kathmandu, Nepal
N. Laxmi, Department of Electronics and Communication Engineering, Guru Nanak Institute of Technology, Ibrahimpatnam, R.R., India
Mario Ligwa, Department of Electrical, Electronics and Computer Engineering, Cape Peninsula University of Technology, Cape Town, South Africa
V. Malathi, Department of Computer Applications, Hindusthan College of Arts and Science, Coimbatore, India
Guttikonda Tulasi Manasa, Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Ersa Andhini Mardika, Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia
M. Meenakshi, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
Rajesh Kannan Megalingam, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
Stanley Metilda Florence, SRM Institute of Science and Technology, Kattankulathur, India
Amit Mittal, Bhimtal Campus, Graphic Era Hill University, Dehradun, India
N. Mohankumar, Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India
M. Mohideen AbdulKader, SRM Institute of Science and Technology, Kattankulathur, India

N. Mohith, Department of Computer Science and Engineering, RNSIT, Bengaluru, India
M. Monisha, Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
H. Rama Moorthy, Department of Cloud Technologies & Data Science, iNurture Education Solutions, Srinivas University, Mangalore, India
V. Mounika, Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
M. Naufal Mua’azi, Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Balachandra Muniyal, Manipal Institute of Technology, Manipal, India
K. Nagashri, Computer Science and Engineering, M S Ramaiah Institute of Technology, Bengaluru, India
Akhil M. Nair, Department of Computer Science, CHRIST (Deemed to be University), Lavasa, Pune, India
G. Najeeb Ahmed, Department of Computer Science, School of Computing Sciences, Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai, India
Sunanda Nalajala, Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
R. Naresh, Associate Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
Mohamed Ishaque Nasreen Banu, SRM Institute of Science and Technology, Kattankulathur, India
Pushpa Bhakuni Negi, Bhimtal Campus, Graphic Era Hill University, Dehradun, India
L. Niranjan, Department of Computer Science and Engineering, HKBK College of Engineering, Bangalore, Karnataka, India
M. Nirmala Devi, Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India
A. Niveda, Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
Muhammad Lutfan Nugraha, Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia
Manisha Pai, Ramaiah Institute of Technology, Bangalore, India

Anmol Panda Department of Computer Engineering, IIIT, Bhubaneswar, Odisha, India Amit Pandey College of Informatics, CSRSCD Bule Hora University, Bule Hora, Ethiopia Aseem Patil Department of Electronics Engineering, Vishwakarma Institute of Technology, Pune, India Hrishikesh R. Patkar Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India Sudhansu Shekhar Patra School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India Renita Pinto Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India Bhimireddy Pranitha Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Adrian Randy Pratama Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia S. Pravallika Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India Shushanta Pudasaini Advanced College of Engineering and Management, Lalitpur, Nepal Tummeti Rachana Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Shwetha Rai Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India S. Rajarajeswari Computer Science and Engineering, M S Ramaiah Institute of Technology, Bengaluru, India U. S. N. Raju Department of Computer Science and Engineering, National Institute of Technology Warangal, Warangal, India A. N. Ramya Shree Department of Computer Science and Engineering, RNSIT, Bengaluru, India Umang Rastogi MIET, Meerut, Uttar Pradesh, India Kavita Rathi Faculty, CSED, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonipat, Haryana, India Guntaka Greeshmanth Reddy Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India


N. Sangeetha Department of Electronics and Communication Engineering, Dr. Ambedkar Institute of Technology, Bangalore, Karnataka, India H. M. Savitha Department of Electronics and Communication Engineering, St Joseph Engineering College, Vamanjoor, Mangaluru, India T. Senthil Kalasalingam Academy of Research and Higher Education, Krishnankoil, Tamil Nadu, India Hecate Shah Nokia Solutions and Network, Bangalore, India Subarna Shakya Institute of Engineering, Tribhuvan University, Pulchowk, Lalitpur, Nepal B. N. Shama Department of Electronics and Communication Engineering, St Joseph Engineering College, Vamanjoor, Mangaluru, India Nanda Devi Shetty Computer Science and Engineering, M S Ramaiah Institute of Technology, Bengaluru, India Keerti Shrivastava Department of Computer Science and Information Technology, Rabindranath Tagore University, Bhopal, M.P, India N. Shwetha Department of Electronics and Communication Engineering, Dr. Ambedkar Institute of Technology, Bangalore, Karnataka, India Didin Silahudin Architecture Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia Parvinder Singh Faculty, CSED, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Sonipat, Haryana, India A. G. Sreedevi Department of Computer Science Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, India P. Sreenidhi Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India Kurra Hima Sri Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India C. N. Subalalitha SRM Institute of Science and Technology, Chengalpattu District, Tamil Nadu, India Yerragudipadu Subbarayudu Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur, Andhra Pradesh, India R. Raja Subramanian Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India P. Sudarsanam BMS Institute of Technology and Management, Bangalore, India Sankardas Kariparambil Sudheesh Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India


Sandeep Kumar Sunori Bhimtal Campus, Graphic Era Hill University, Dehradun, India Alladi Sureshbabu Computer Science and Engineering, JNTUA College of Engineering, Anantapur, Andhra Pradesh, India Aakash Tamang Patan College for Professional Studies, Kupondole, Patan, Nepal Gopalam Nagasri Thanvi Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India K. N. Thirukkuralkani Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India Vempati Biswas Trinadh College of Arts and Sciences, Georgia State University, Atlanta, GA, USA Robert Van Zyl Department of Electrical, Electronics and Computer Engineering, French South African Institute of Technology, Cape Peninsula University of Technology, Cape Town, South Africa M. Vasundhara Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India M. Venkatabhanu Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India M. Venkateshkumar Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, India H. Venkateswerareddy Department of CSE, Vardhaman College of Engineering, Hyderabad, India N. Venkatram Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (KLEF), Vaddeswaram, Guntur, Andhra Pradesh, India P. C. Vijay Ganesh St. Joseph Engineering College, Mangalore, Karnataka, India Janga Vijaykumar Department of Information Technology, College of Informatics, Bule Hora University, Bule Hora, Ethiopia Diana Teresia Spits Warnars Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia Harco Leslie Hendric Spits Warnars Computer Science Department, BINUS Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia Leonel Leslie Heny Spits Warnars Architecture Department, Faculty of Engineering, Bina Nusantara University, Jakarta, Indonesia


M. Yahya Abbasi Department of Mathematics, Jamia Millia Islamia, New Delhi, India S. Yathish Department of Computer Science, CHRIST (Deemed to be University), Lavasa Pune, India K. R. Yogesh kumar Center for Wireless Networks & Applications (WNA), Amrita VishwaVidyapeetham, Amritapuri, India

Minimizing Energy Through Task Allocation Using Rao-2 Algorithm in Fog Assisted Cloud Environment Lalbihari Barik , Sudhansu Shekhar Patra , Shalini Kumari , Anmol Panda , and Rabindra Kumar Barik

Abstract Nowadays, the fog-assisted cloud environment is a dominant field in the computational world, providing computational capabilities through virtualized services. The fog centers that promise their clients edge computing services contain many computational nodes, which consume a large amount of energy. Transmitting all the data to the cloud and back causes high latency and requires high network bandwidth. In industrial IoT applications, a considerable amount of energy is required in the fog layer, an area that cloud service providers are encouraged to manage. Task scheduling is an important factor contributing to the energy consumption in fog servers. In this paper, Rao-2, a metaphor-less and parameter-less algorithm, is implemented for scheduling the tasks in the fog center, conserving energy while achieving the QoS. Keywords Task scheduling · Energy optimization · Rao-2 algorithm · Fog computing · IIoT

1 Introduction Fog computing, which sits between the cloud server layer and the end users, is a paradigm that offers services to specific users at the edge networks [1, 2]. The fog devices are resource-constrained compared with the cloud servers, and the geographically scattered, distributed nature of the fog computing framework helps in offering consistent services over a wide area. Further, with the fog computing framework, several producers and service providers offer their services at L. Barik Department of Information Systems, Faculty of Computing and Information Technology in Rabigh, King Abdul Aziz University, Jeddah, Kingdom of Saudi Arabia S. S. Patra (B) · S. Kumari · R. K. Barik School of Computer Applications, KIIT Deemed to be University, Bhubaneswar, India A. Panda Department of Computer Engineering, IIIT, Bhubaneswar, Odisha, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_1


reasonable rates. Another advantage of the fog computing environment is the physical location of the devices: being closer to the users than the cloud servers, they reduce operational latency considerably [3, 4]. Fog computing is an emergent architectural framework for providing the computing, control, storage, and networking capabilities needed to realize IoT [5]. Due to the growth of data centers, energy consumption is rising, and the CO2 emission of data centers is growing exponentially [6, 7]. Task scheduling in a fog server is a complex and challenging NP-complete problem, and inefficient task scheduling results in higher energy consumption. We implement Rao-2, a metaphor-less and parameter-less algorithm, to minimize the energy consumption. The rest of the paper is organized as follows. Section 2 discusses earlier work in this direction. Section 3 describes the Rao-2 algorithm. Section 4 presents the problem formulation and the energy model. Section 5 gives an illustration, and Sect. 6 shows the simulation results. Section 7 concludes the paper with future directions.

2 Related Work Task allocation in a heterogeneous distributed system assigns the tasks to the VMs of the cloud system. Efficient utilization of resources by the tasks plays an important role in the efficacy of the system: it maximizes performance, minimizes error probabilities, and improves the load balance of the system. The problem of allocating tasks to VMs is NP-hard, so obtaining an optimal solution is impractical, and suboptimal algorithms are used in much of the literature. In recent years, task scheduling has drawn the attention of several researchers, since efficient scheduling algorithms enable seamless processing of jobs and have an immediate impact on energy saving and resource utilization [6–9]. An energy-efficient technique has been recommended for conserving energy between the IoT layer and the fog computing gateway [6, 10]. Meta-heuristic algorithms play an essential role in solving task scheduling problems in fog servers [11]. Earlier work examined system performance that hinges on a robust queue length to scale the VMs and enhance the QoS parameters of the system [12], proposed meta-heuristics for energy-efficient task consolidation in the cloud environment [13, 14], and devised algorithms for profit maximization through energy conservation and spot-allocation quality-guaranteed services in the fog-assisted cloud environment [12, 15]. Task consolidation has also been applied to save energy by reducing the number of unexploited nano data centers in the fog computing ecosystem and to increase CPU utilization [16, 17].


3 Optimization Algorithms 3.1 Rao-2 Algorithm Many population-based meta-heuristic algorithms have been proposed in recent years. Many of them are built on metaphors of natural phenomena, often the behavior of animals such as fishes, ants, and insects, or of cultures and musical instruments. Some of the newly proposed algorithms have found no takers, have not been applied in any field, and seem to be dying out, while others have achieved some degree of success. Rao developed a metaphor-less optimization algorithm which is also parameter-less and helps to solve complex problems. Let f(x) be a function to be optimized, either minimized or maximized [18]. At the ith iteration, let the numbers of design variables and candidate solutions be m and n, respectively, and let (f(x))_best and (f(x))_worst be the best and worst candidates among all candidate solutions. During the ith iteration, the value of the jth variable for candidate k is modified as shown in Eq. (1).

X'_{j,k,i} = X_{j,k,i} + r_{1,j,i} (X_{j,best,i} - X_{j,worst,i}) + r_{2,j,i} (|X_{j,k,i} \text{ or } X_{j,l,i}| - |X_{j,l,i} \text{ or } X_{j,k,i}|)    (1)

Here, X_{j,best,i} and X_{j,worst,i} are the jth variable of the best and the worst candidate during iteration i, and X'_{j,k,i} is the new value of X_{j,k,i}. The two random variables in the range [0, 1] for the jth variable during the ith iteration are denoted by r_{1,j,i} and r_{2,j,i}. The term "X_{j,k,i} or X_{j,l,i}" indicates that candidate solution k is compared with a randomly picked candidate solution l and information is exchanged depending on their fitness values: if the fitness of the kth solution is better than that of the lth solution, "X_{j,k,i} or X_{j,l,i}" takes the value X_{j,k,i} and "X_{j,l,i} or X_{j,k,i}" takes the value X_{j,l,i}; otherwise, the assignments are reversed. Figure 1 shows the flow diagram of this metaphor-less, parameter-less algorithm. The algorithm is implemented in this article to optimize the energy consumption in the fog center.
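For concreteness, the update rule of Eq. (1) can be sketched in Python, the language used for the paper's simulations. This is a minimal illustrative implementation under the generic Rao-2 description above, not the authors' code; all function and variable names are our own, and the greedy acceptance step follows the usual Rao-algorithm convention.

```python
import random

def rao2_step(population, fitness):
    """One Rao-2 iteration; population is a list of candidate vectors and
    fitness maps a candidate to a cost (lower is better)."""
    scores = [fitness(x) for x in population]
    best = population[scores.index(min(scores))]
    worst = population[scores.index(max(scores))]
    next_pop = []
    for k, cand in enumerate(population):
        # Random interacting candidate l != k, as in Eq. (1)
        l = random.choice([i for i in range(len(population)) if i != k])
        # "X_{j,k,i} or X_{j,l,i}" resolves by fitness: the fitter candidate
        # supplies the first |.| term and the poorer one the second
        fitter, poorer = ((cand, population[l]) if scores[k] < scores[l]
                          else (population[l], cand))
        trial = [x + random.random() * (b - w) + random.random() * (abs(f) - abs(p))
                 for x, b, w, f, p in zip(cand, best, worst, fitter, poorer)]
        # Greedy acceptance: keep the trial only if it improves the candidate
        next_pop.append(trial if fitness(trial) < scores[k] else cand)
    return next_pop

# Example: minimize the sphere function in three variables
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(100):
    pop = rao2_step(pop, lambda x: sum(v * v for v in x))
```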

Fig. 1 Flow chart of Rao-2 algorithm

3.2 MaxMaxUtil The MaxMaxUtil algorithm [13] collects all tasks at a particular instant of time and creates a task queue data structure. The task that needs the maximum CPU utilization is scheduled to the VM that is currently utilizing the maximum CPU, taking care that, after the inclusion of the new task, the CPU utilization of the VM does not exceed a defined threshold, which may be 100%. A task may have to wait in the queue if no such VM is available.
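A compact sketch of this policy follows, assuming tasks and VMs are represented as dictionaries with utilization fields (a representation of our own, not from [13]):

```python
def max_max_util(task_queue, vms, threshold=100):
    """MaxMaxUtil: the task demanding the most CPU goes to the currently
    most-utilized VM that can still take it without crossing the threshold."""
    waiting = []
    for task in sorted(task_queue, key=lambda t: t["util"], reverse=True):
        fitting = [vm for vm in vms if vm["util"] + task["util"] <= threshold]
        if not fitting:
            waiting.append(task)  # no VM available: the task waits in the queue
            continue
        target = max(fitting, key=lambda vm: vm["util"])
        target["util"] += task["util"]
        target.setdefault("tasks", []).append(task["id"])
    return waiting

vms = [{"id": 1, "util": 0}, {"id": 2, "util": 0}]
leftover = max_max_util([{"id": "T1", "util": 34}, {"id": "T2", "util": 22}], vms)
```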

3.3 ECTC ECTC implements FCFS; the new task is allocated to the VM that is more energy efficient than the others after the inclusion of the new task [13]. The algorithm computes the change in energy consumption caused by including the new task and allocates the task to the VM in which the increase in energy consumption is the least.

f_{i,j} = (P_{max} - P_{min}) \cdot t_1 - [((P_{max} - P_{min}) \cdot u_i + P_{min}) \cdot t_2 + (P_{max} - P_{min}) \cdot u_i \cdot t_3]

Here, P_{max} and P_{min} are the power consumption at 100% CPU utilization and at 1% CPU utilization, respectively, and t_1, t_2, and t_3 are the new task's total execution time, the new task's execution time alone, and the new task's parallel running time with other tasks, respectively.
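A direct Python transcription of this cost function is sketched below; the placement helper and the example power figures (p_max, p_min) are assumptions for illustration, not values from the paper:

```python
def ectc_energy(p_max, p_min, u_i, t1, t2, t3):
    """Energy term f_{i,j} of ECTC for adding a task to a VM:
    t1 = total execution time of the new task, t2 = time it runs alone,
    t3 = time it overlaps with other tasks; u_i = utilization of the
    co-located tasks (fraction in [0, 1])."""
    return ((p_max - p_min) * t1
            - (((p_max - p_min) * u_i + p_min) * t2
               + (p_max - p_min) * u_i * t3))

def ectc_select(task, vms, p_max=135.0, p_min=105.0):
    """FCFS placement: choose the VM whose energy increase is the least."""
    return min(vms, key=lambda vm: ectc_energy(p_max, p_min, vm["util"] / 100.0,
                                               task["t1"], task["t2"], task["t3"]))
```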

4 Problem Formulation Fog servers sit between the cloud layer and the end-user device layer, as in Fig. 2. The fog layer is a computing platform with a large number of distributed nodes, which are highly virtualized and scalable in nature; as shown in Fig. 2, it contains multiple fog nodes. Hypervisor or Virtual Machine Monitor (VMM) programs allow creating and managing new Virtual Machines (VMs) [13, 15]. Assigning the tasks to the VMs is an NP-complete problem, and our aim is to optimize the energy consumption while assigning them. The following sections describe the task scheduling problem and the energy consumption model in the fog center.

Fig. 2 Architecture of fog and intermediate fog layers where intermediate fog layer creates a logical interface between fog and cloud layers [4]


4.1 Task Scheduling in Fog Environment A number of VMs (VM_1, VM_2, …, VM_n) can run on a fog node; for example, fog node 1 may run the VMs VM_{11}, VM_{12}, …, VM_{1n}. In general, let there be n VMs running in the system. All VMs share the resources of the fog server, such as memory, disk, and CPU cores. These resources can effectively scale up and down as per the SLA, saving infrastructure resources. The task scheduling problem is to schedule m heterogeneous tasks on n heterogeneous VMs [12, 13, 15].

4.2 Energy Consumption Model When a job is submitted, it is divided into tasks, which are then assigned to VMs. The power utilization of the CPU is used for estimating the power consumption, as the CPU is the major component and contributes the maximum variance in terms of the utilization rate. The correlation of CPU utilization with power consumption is shown by Eq. (2).

P_k(u) = (P_{k,max} - P_{k,idle}) \cdot u + P_{k,idle}    (2)

Here, u, P_{k,idle}, and P_{k,max} are the CPU utilization, the power consumption in idle mode, and the power consumption in maximum usage mode, respectively. The total power consumption can be written as:

P = \sum_{k=1}^{n} P_k(u)    (3)

The objective is to

minimize \sum_{k=1}^{n} P_k(u)    (4)

Equation (4) is the overall energy consumption in the fog layer, and the objective of this research is the minimization of the energy consumption in the fog layer.
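The linear power model of Eqs. (2)–(4) is easy to express directly; in the sketch below the idle/peak wattages are placeholder assumptions (typical server figures), not values stated in the paper:

```python
def vm_power(u, p_idle, p_max):
    """Eq. (2): linear CPU power model; u is the utilization in [0, 1]."""
    return (p_max - p_idle) * u + p_idle

def fog_layer_power(utilizations, p_idle=105.0, p_max=135.0):
    """Eqs. (3)-(4): total power over the n VMs, the quantity to minimize."""
    return sum(vm_power(u, p_idle, p_max) for u in utilizations)

print(fog_layer_power([0.34, 0.54, 0.13]))  # three VMs at 34%, 54% and 13%
```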

Table 1 Task table

Task # | Execution time | CPU utilization | Arrival time
T1     | 13             | 34              | 1
T2     | 6              | 22              | 1
T3     | 11             | 32              | 2
T4     | 9              | 13              | 3
T5     | 17             | 31              | 3
T6     | 10             | 25              | 4
T7     | 8              | 17              | 4

Table 2 Task allocation table using Rao-2 algorithm

Task # | Start time | End time | CPU utilization | Machine Id
T1     | 1          | 14       | 34              | 1
T2     | 1          | 7        | 22              | 1
T3     | 2          | 17       | 32              | 2
T4     | 3          | 12       | 13              | 3
T5     | 3          | 20       | 31              | 2
T6     | 4          | 14       | 25              | 3
T7     | 4          | 12       | 17              | 1

5 An Illustration Let us take three VMs V = {V_1, V_2, V_3} and seven independent tasks T = {T_1, …, T_7}. Each task is defined by a four-tuple {Task #, Execution Time, CPU Utilization, Arrival Time}, where the full CPU capacity of a VM is considered to be 100%. The task table is shown in Table 1. The allocation obtained by applying the Rao-2 algorithm to these seven tasks on three VMs is shown in Table 2.

6 Simulation Results The simulation has been done for 400 tasks to study the behavior of the task allocation algorithm. Since it is not possible to obtain workloads and tasks from commercial data center providers, a synthetic dataset of 400 tasks and n VMs is generated randomly using a probabilistic distribution. The simulation is implemented in Python 3.7. Tasks arrive to the system at a rate λ: the inter-arrival interval is taken as 1, and 30 tasks enter the system per unit interval.
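A minimal generator for such a synthetic workload is sketched below; the execution-time and utilization ranges are assumptions chosen to resemble Table 1, since the paper only states that the dataset is drawn randomly from a probabilistic distribution:

```python
import random

def generate_workload(n_tasks=400, per_interval=30, seed=1):
    """Synthetic task set: `per_interval` tasks arrive in each unit interval;
    execution time and CPU utilization are drawn from uniform ranges."""
    rng = random.Random(seed)
    return [{"id": f"T{i + 1}",
             "arrival": i // per_interval + 1,
             "exec_time": rng.randint(5, 20),
             "cpu_util": rng.randint(10, 40)} for i in range(n_tasks)]
```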


The CPU utilization for 400 tasks is shown in Fig. 3: the utilization achieved is on par with state-of-the-art algorithms such as MaxMaxUtil and ECTC. Figure 4 compares the energy consumption against the other algorithms and shows that the energy consumed by Rao-2-based task allocation is lower than that of ECTC and MaxMaxUtil. The simulation is repeated for different numbers of VMs in the fog center.

Fig. 3 CPU utilization for 500 tasks in 10 VMs

Fig. 4 Energy comparison


7 Conclusion Task allocation and CPU utilization are the levers used to minimize the power consumption, as well as the cost incurred, in data centers and fog centers. Rao-2, a metaphor-less and parameter-less meta-heuristic algorithm, has been implemented to generate the optimized solution, and the experimental results show that it performs better than the state-of-the-art algorithms. An analysis of the average-case behavior of the algorithm has been carried out with an ETC matrix. While simulating the algorithm, only independent tasks allocated to the fog centers were considered; the work can be extended to dependent tasks and workflows, which we leave for future work.

References
1. Von Laszewski, G., Wang, L., Younge, A.J., He, X.: Power-aware scheduling of virtual machines in DVFS-enabled clusters. In: IEEE International Conference on Cluster Computing and Workshops, New Orleans, LA (2009)
2. Barik, R.K., Dubey, H., Samaddar, A.B., Gupta, R.D., Ray, P.K.: FogGIS: Fog computing for geospatial big data analytics. In: 2016 IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering (UPCON), pp. 613–618. IEEE (2016)
3. Barik, R.K., Dubey, H., Mankodiya, K.: SOA-FOG: secure service-oriented edge computing architecture for smart health big data analytics. In: 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 477–481. IEEE (2017)
4. Barik, R., Dubey, H., Sasane, S., Misra, C., Constant, N., Mankodiya, K.: Fog2Fog: augmenting scalability in fog computing for health GIS systems. In: 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), pp. 241–242. IEEE (2017)
5. Barik, R.K., Dubey, H., Mankodiya, K., Sasane, S.A., Misra, C.: GeoFog4Health: a fog-based SDI framework for geospatial health big data analysis. J. Ambient Intell. Humanized Comput. 10(2), 551–567 (2019)
6. Jiang, C., Wang, Y., Ou, D., Li, Y., Zhang, J., Wan, J., Luo, B., Shi, W.: Energy efficiency comparison of hypervisors. Sustain. Comput. Inform. Syst. 22, 311–321 (2019)
7. Rao, R.: Rao algorithms: three metaphor-less simple algorithms for solving optimization problems. Int. J. Ind. Eng. Comput. 11(1), 107–130 (2020)
8. Zhang, X., Wu, T., Chen, M., Wei, T., Zhou, J., Hu, S., Buyya, R.: Energy-aware virtual machine allocation for cloud with resource reservation. J. Syst. Softw. 147, 147–161 (2019)
9. Sharma, Y., Si, W., Sun, D., Javadi, B.: Failure-aware energy-efficient VM consolidation in cloud computing systems. Future Gener. Comput. Syst. 94, 620–633 (2018)
10. Beloglazov, A., Buyya, R.: Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurrency Comput. Pract. Experience 24(13), 1397–1420 (2012)
11. Gourisaria, M.K., Patra, S.S., Khilar, P.M.: Minimizing energy consumption by task consolidation in cloud centers with optimized resource utilization. Int. J. Electr. Comput. Eng. 6(6), 3283 (2016)
12. Rout, S., Patra, S.S., Mohanty, J.R., Barik, R.K., Lenka, R.K.: Energy aware task consolidation in fog computing environment. In: Intelligent Data Engineering and Analytics, pp. 195–205. Springer, Singapore (2020)


13. Patra, S.S.: Energy-efficient task consolidation for cloud data center. Int. J. Cloud Appl. Comput. (IJCAC) 8(1), 117–142 (2018)
14. Horri, A., Mozafari, M.S., Dastghaibyfard, G.: Novel resource allocation algorithms to performance and energy efficiency in cloud computing. J. Supercomput. 69(3), 1445–1461 (2014)
15. Goswami, V., Patra, S.S., Mund, G.B.: Performance analysis of cloud with queue-dependent virtual machines. In: 2012 1st International Conference on Recent Advances in Information Technology, pp. 357–362. IEEE (2012)
16. Bui, D.M., Tu, N.A., Huh, E.N.: Energy efficiency in cloud computing based on mixture power spectral density prediction. J. Supercomput. 1–26 (2020)
17. Mittal, M., Kumar, M., Verma, A., Kaur, I., Kaur, B., Sharma, M., Goyal, L.M.: FEMT: a computational approach for fog elimination using multiple thresholds. Multimedia Tools Appl. (2020). https://doi.org/10.1007/s11042-020-09657-0
18. Patra, S.S., Amodi, S.A., Goswami, V., Barik, R.K.: Profit maximization strategy with spot allocation quality guaranteed service in cloud environment. In: 2020 International Conference on Computer Science, Engineering and Applications (ICCSEA), pp. 1–6. IEEE (2020)

Sensitivity Context Aware Privacy Preserving Disease Prediction A. N. Ramya Shree, P. Kiran, N. Mohith, and M. K. Kavya

Abstract In today's world, healthcare has become one of the prominent real-world domains because it is associated with maintaining an individual's health through disease prevention, early diagnosis, proper treatment, on-time recovery, and relief from disease, sickness, and injury. Diagnosis is treated as the major aspect of healthcare: the right diagnosis of a disease at the right time can save the individual from accidental death, and early-stage detection of chronic disease avoids further damage. Disease prediction helps to determine the existence of disease at an earlier stage using the disease symptoms and the patient's personal details. It is mainly achieved by a category of data analytics called predictive analytics, which makes use of patient disease symptoms and personal details. Patient privacy breach is treated as one of the critical issues in disease prediction because of this usage of personal details. To preserve patient privacy, the data used for prediction is altered using privacy-preserving techniques, but this raises another issue, data utility: to what extent does the modified data remain useful for predictive analytics after the privacy-preserving operations are applied? Hence, there is a trade-off between prediction data privacy and utility. The existing privacy-preserving approaches mainly focus on preserving privacy and cause a loss of data usefulness. Our proposed approach, sensitivity context aware privacy preserving disease prediction, deals with how to preserve the privacy of the data and at the same time retain its usefulness while performing disease prediction. Keywords PPDP · IDP · NLP · ML · QAUE · PII · NCP

A. N. Ramya Shree (B) · P. Kiran · N. Mohith · M. K. Kavya Department of Computer Science and Engineering, RNSIT, Bengaluru 560098, India e-mail: [email protected] P. Kiran e-mail: [email protected] N. Mohith e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_2


1 Introduction Disease prediction is a type of predictive analytics operation in which the disease symptoms and personal details of patients are used. The majority of existing prediction models are used to perform early detection of disease and to predict the reoccurrence, control, and progression of disease. In recent years, the artificial intelligence used to automate real-world tasks has moved toward machine learning models. ML models can learn from real facts, carry out efficient data processing, and yield more accurate results. A variety of research has been conducted on automatically selecting/extracting entity features from large data repositories to improve the accuracy of classification and prediction tasks. ML in health care is treated as a boon for better and more significant patient care: machine learning approaches help to check for the existence of different diseases, efficiently suggest the appropriate diagnosis, and thus support proper treatment of patients. Healthcare organizations yield huge amounts of data on a daily basis, which can be used to predict the existence of disease and the possibility of its occurrence in the future from the patients' disease history. The medical facilities currently available in the healthcare domain need to be upgraded and improved so that they support patient disease diagnosis and the related treatment decisions to a greater extent. ML in the healthcare domain helps to analyze and process huge and very complex datasets, which leads to the generation of useful clinical reports; these reports can be used by physicians to make better decisions and provide the necessary medical services. In our paper, we focus on the patient's privacy protection, which is a key aspect of the entire work [1–3].

2 Related Work Privacy is the right of an individual or organization to decide what details about them are disclosed, and it is subject to regulatory permissions that differ from country to country. ML has become one of the prominent research areas in recent years, and the majority of researchers have studied disease prediction using ML approaches without focusing on individual privacy. Privacy-preserving data publishing (PPDP) emerged as a key research area in which data is subjected to privacy-preserving operations before being published to data analyzers, who then perform analytical operations on the privacy-preserved data. The existing PPDP methods help to preserve individual data privacy, but in the meantime there is a possibility of a lack of data utility, i.e., usefulness of the data after the PPDP operations are applied. The existing methods are categorized as follows: (1) data swapping, where rows of attributes are swapped in such a way that individual sensitive attributes cannot be identified from the resultant tabular data; (2) cryptography, where the data is converted into a coded form so that an individual cannot be identified; (3) randomization, where random noise is added to the original data values such that the resultant data does not reveal an individual's identity; and (4) anonymization, where k-anonymity, l-diversity, and t-closeness offer different procedures to preserve the privacy of an individual. The major issue with these methods is data utility, i.e., to what extent the resultant data is useful for different analytical operations [4–6].

3 The Proposed System The proposed system comprises the different stages involved in model development and its usage for disease prediction, as described in Fig. 1. The first stage is data preprocessing: the data is collected from patients in the form of questionnaires and is unstructured text, so the patient details are subjected to natural language processing (NLP)-based preprocessing, which involves tokenization, stop-words removal, and stemming operations. The outcome of the preprocessing stage is used to build the data frame [7]. The second stage involves IDP-based records categorization, the third stage involves classified records aggregation, the fourth stage involves usage of the aggregated released data for privacy-preserving disease prediction, and the last stage involves disease prediction on test records using the proposed model.

Fig. 1 Proposed system architecture: input patient records → data preprocessing → records categorization → records aggregation → privacy-preserving disease prediction → output predicted patient records
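The first stage can be illustrated with a short Python sketch; the exact tooling is not specified in the paper, so NLTK is an assumption here (its punkt and stopwords data are assumed to be downloaded):

```python
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

def preprocess(text):
    """Stage 1: tokenization, stop-word removal and stemming."""
    stemmer = PorterStemmer()
    stops = set(stopwords.words("english"))
    tokens = word_tokenize(re.sub(r"[^A-Za-z ]", " ", text.lower()))
    return [stemmer.stem(t) for t in tokens if t not in stops]

print(preprocess("Patient reports high fever, dry cough and loss of smell."))
```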

4 Sensitivity Context-Aware Privacy-Preserving Disease Prediction Data is treated as the key ingredient in the world of data analytics. Unstructured data, one of the major categories of real-time data, is data that does not have a predefined data model. Here it is collected from the patients through questionnaires that mainly cover disease symptom details and the patient's unique information, such as name, phone number, email, age, sex, location, and existing disease details, and is stored in documents or records. In this paper, we considered the symptoms of the COVID-19 pandemic for developing the ML-based disease prediction model. A survey was conducted to collect COVID-19-specific symptoms from citizens in different zones with the help of questionnaires, and these are used for model development. COVID-19 affects the human population in different ways: highly infected people may develop mild to severe illness symptoms and recover without hospitalization if they are not suffering from any other disease, and any age group can have mild to severe symptoms. People who suffer from severe medical conditions, for example heart disease, diabetes, lung disorders, or AIDS, have a very high chance of developing severe unanticipated complications due to COVID-19. Some diseases, such as AIDS, cancer, or tuberculosis, are considered sensitive, and the related individual details must be protected from disclosure during COVID-19 disease prediction. COVID-19 symptoms possibly appear from two to fourteen days after exposure to the coronavirus through contact with infected people. The proposed model utilizes these symptoms to build the training set: fever, cold, sneezing, cough with/without sputum, breathing difficulty, fatigue, body aches, loss of taste or smell, mild to severe headache, sore throat, congestion or runny nose, vomiting, nausea or diarrhea, and other details such as whether the person washes hands properly, wears a mask, maintains social distance, has any existing disorder, or roams outside, together with the IDP. When these patient details are utilized in analytics operations like disease prediction, there is the possibility of a major issue called patient privacy breach. In this paper, we propose a sensitivity context awareness (SCA)-based privacy-preserving disease prediction approach in which we add data utility aspects so that the resultant data retains privacy and, at the same time, usefulness [8]. Anonymization is treated as one of the popular and powerful PPDP approaches; it is categorized into two groups, generalization and suppression.


Generalization is a process where the entity's data is modified to a higher conceptual level such that it does not disclose any private information about the entities. Generalization is categorized into complete and partial: complete generalization reduces the data utility to the maximum extent, and partial generalization reduces it to a minimal extent. Suppression is another kind of anonymization in which special symbols replace the entity's private data so that it does not reveal the entity's private information. The entity's personal and disease-specific details are collected and subjected to data preprocessing operations. SCA-based anonymization receives an extra field from patients called IDP, the identity disclosure permission, with value either 0 or 1: value 1 indicates the user is not ready to disclose his/her sensitive details, and value 0 indicates the user is ready to disclose his/her sensitive information. This binary field is used to discriminate sensitive records from non-sensitive records, and only sensitive records are subjected to the quasi-attribute utility enhancement (QAUE) hybrid transformation process [9–11]; the non-sensitive records are not subjected to anonymization. This enhances the data utility and, at the same time, preserves the patients' privacy when their data is utilized and exposed in the analytics process. Before disease prediction, the sensitive records are subjected to anonymization, and the aggregation of both sensitive and non-sensitive records is used for machine-learning-based disease prediction model development. Algorithm 1 describes the sensitivity context-aware privacy-preserving disease prediction model development. The patient attributes are classified into explicit, quasi, sensitive, and non-sensitive groups. The explicit attributes are specific to a patient, like name, email, and phone number; the quasi-attributes, like age and location, indirectly help attackers re-identify an individual patient [12, 13]. The QAUE approach mainly focuses on anonymizing the quasi-attributes. Age is a continuous-valued attribute and is subjected to a binning operation in which the actual values are replaced with the bin average so that the original sensitive age attribute is not disclosed. Similarly, location is a discrete attribute and can be modified using a concept hierarchy, i.e., a tree representation in which leaf values are replaced by their immediate root value; this can be done at different levels, partial or complete. The tf-idf frequency vectorizer technique is used to convert the data frame into the tf-idf matrix: the occurrences of symptom terms are identified and treated as features, and a feature matrix is constructed, which is used as input for the classifier model. The classifier model utilizes the labeled training data frame and predicts the test records [14, 15]. Algorithm 1


Input: Patient records with disease symptoms
Output: Predicted patient records
Start:
1. Preprocess the records using natural language processing modules and remove PII (personally identifiable information).
2. Using the input field Identity Disclosure Permission (IDP), add the target label:
   if IDP == 1: Target_label = sensitive
   else: Target_label = nonsensitive
3. Generate the data frame using the patient attributes and Target_label.
4. For each record: if Target_label == sensitive, anonymize the record using the QAUE hybrid approach.
5. Quasi-attribute hybrid anonymization:
   5.1 Identify the quasi-attributes in the records.
   5.2 For all sensitive rows, for each quasi-attribute in the row:
       5.2.1 If the attribute is continuous: binning-based continuous attribute modification.
       5.2.2 If the attribute is categorical: concept-hierarchy (CH)-based categorical attribute generalization.
       5.2.3 If the attribute is ordinal: lattice generalization.
6. Find and replace the attributes in the sensitive rows, which results in sanitized rows.
7. Convert the records in the data frame into a tf-idf-based vector representation.
8. Use the Naive Bayes, Logistic Regression, and Extremely Randomized Tree classifiers to predict the disease using the prior and posterior probabilities of symptom occurrence.
9. Evaluate the model using the testing records.
Stop
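Steps 1–5 can be sketched in Python as follows. This is a simplified illustration of the QAUE idea under stated assumptions: the bin boundaries follow the ranges given later in the paper, while the location hierarchy, attribute names, and example values are hypothetical.

```python
AGE_BINS = [(1, 20), (21, 40), (41, 60), (61, 100)]
LOCATION_CH = {"Jayanagar": "Karnataka", "Krishnankoil": "Tamil Nadu"}  # toy hierarchy

def bin_value(value, bins=AGE_BINS):
    """Binning-based modification of a continuous quasi-attribute."""
    for low, high in bins:
        if low <= value <= high:
            return f"{low}-{high}"
    return "unknown"

def qaue_anonymize(record):
    """PII is always removed; quasi-attributes are generalized only when
    the record is sensitive (IDP == 1), preserving utility elsewhere."""
    out = {k: v for k, v in record.items() if k not in ("name", "phone", "email")}
    if out.get("idp") == 1:
        out["age"] = bin_value(out["age"])
        out["location"] = LOCATION_CH.get(out["location"], "India")
    return out

print(qaue_anonymize({"name": "X", "age": 37, "location": "Jayanagar",
                      "symptoms": "fever dry cough", "idp": 1}))
```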


5 Results and Discussions The developed model is used to perform privacy-preserving disease prediction on test patient records. The Multinomial Naive Bayes (MNB) model considers the prior and posterior probabilities of word or term occurrences for the COVID-19 positive and COVID-19 negative labels in the data frame. The probabilities that the words occurring in a record belong to the COVID-19 positive and COVID-19 negative classes are calculated, and the class with the highest probability is used to assign label 1 for a COVID-19 positive record or label 0 for a COVID-19 negative record. Logistic regression (LR) is another model used to build the classifier: it estimates the connection between the categorical dependent variable and the independent variables and works with dependent variables of binary values. It treats probability scores in the range 0 to 1 as the predicted values of the dependent variable (the disease symptom terms) and takes the natural log of the odds of the dependent variable to create a modified version of it using the logit function. The extremely randomized tree (ERT) classifier, an ensemble technique, is also used to perform disease prediction and obtained better results than the LR and MNB methods. The evaluation of our disease prediction model is carried out using the confusion matrix, which contains the summary of the classification evaluation measures. The accuracies of the MNB and LR classifiers are compared on the test data: the MNB accuracy score on the testing data is 0.801 and the LR accuracy score is 0.77, so MNB performs better than LR. The ERT classifier performs better than both MNB and LR, and its result is described in Table 1. The classifiers' performance comparison based on the accuracy metric is described in Fig. 2 [16, 17]. PPDP metrics are used to calculate the loss of information from the anonymized data. The normalized certainty penalty (NCP) metric is used to calculate the utility of the anonymized data; the numeric attributes are subjected to QAUE hybrid anonymization. The NCP for a numeric attribute is given in Eq. (1), where RT is a relational table, r_i is the ith attribute of a row, and q_i is the ith quasi-attribute.

NCP_{a_i}(RT) = (r_i - q_i) / |a_i|    (1)
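The classifier comparison described above can be reproduced in outline with scikit-learn; this is a sketch of the general technique (tf-idf features feeding MNB, LR, and extremely randomized trees), not the authors' exact configuration, and the helper name and split ratio are our own choices:

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

def compare_classifiers(texts, labels):
    """Vectorize the (anonymized) symptom texts with tf-idf and report the
    holdout accuracy of the three classifiers used in the paper."""
    X = TfidfVectorizer().fit_transform(texts)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              random_state=0)
    for name, model in [("MNB", MultinomialNB()),
                        ("LR", LogisticRegression(max_iter=1000)),
                        ("ERT", ExtraTreesClassifier(n_estimators=100,
                                                     random_state=0))]:
        model.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, model.predict(X_te)))
```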

Table 1 Classification model evaluation measures

Measure    | Holdout score | Cross-validation score
Accuracy   | 0.857         | 0.818
Recall     | 0.875         | 0.784
Precision  | 0.893         | 0.833
F1 measure | 0.857         | 0.816
Log loss   | 0.524         | 0.559

Fig. 2 Classifier models comparison w.r.t accuracy metric (ERT 0.857, MNB 0.801, LR 0.77)

In the published data, the age attribute is a numeric QA subjected to anonymization. The data frame consists of a total of 200 tuples; among them, 89 tuples are categorized


as sensitive based on IDP, where the age QA value is modified into the binning-defined ranges 1–20, 21–40, 41–60, and 61–100; the age attribute NCP is then equal to 0.78.

NCP_a(RT) = size(u) / |a|    (2)

The location attribute holds the patient's locality name and is generalized to the state he/she belongs to. The number of descendant nodes is one, so size(u) is 1 and the NCP of the origin QA is 0.010, indicating minimal loss for the origin QA. The information loss metric applies different weights to different attributes to penalize more strongly discriminating generalizations, as described in Eq. (3) [18, 19].

Information Loss(Record) = \sum_{vg \in r} w_i \cdot Iloss(value generalization)    (3)

A predefined weight is assigned to every attribute that is subjected to anonymization, and the summation of these values is used to calculate the information loss of the anonymized data in comparison with the original data. Our proposed method resulted in an overall information loss of 44.2%, which mainly depends on the number of sensitive rows present in the data frame [20].
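The two NCP variants are straightforward to compute; the example below is illustrative only (it evaluates one generalized value, not the weighted per-record aggregate the paper reports):

```python
def ncp_numeric(generalized, domain):
    """Numeric NCP: width of the generalized interval over the domain width."""
    return (generalized[1] - generalized[0]) / (domain[1] - domain[0])

def ncp_categorical(descendant_leaves, domain_leaves):
    """Categorical NCP: size(u) / |a| for a concept-hierarchy generalization."""
    return descendant_leaves / domain_leaves

# Age generalized to the 21-40 bin over the 1-100 domain
print(round(ncp_numeric((21, 40), (1, 100)), 3))
```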

6 Conclusion Our proposed method can be used by the data publisher of a healthcare facility to publish data to in-house or third-party data scientists. It preserves patients' privacy by ensuring that their data is used only for the disease prediction purpose. The PII attributes are removed, and the quasi-attributes of the sensitive records are subjected to anonymization. This increases the data utility: the attributes of non-sensitive records are retained as provided by the patients, which in turn increases the efficiency and effectiveness of analytics and is very useful for the healthcare service provider's decision-making process. The proposed approach mainly concentrates on patients' privacy; the privacy of COVID-19 positive victims is crucial and has to be protected from a compromised analyzer or attacker. Patient details are made available only if the patient agrees to disclose them by setting the identity disclosure permission (IDP) value to 0; the patient sets IDP to 1 if he/she is not ready to disclose their identity. The approach thus takes the privacy concern from the data providers, i.e., the patients who supply the data for disease prediction, which leads to privacy-preserved disease prediction. The approach can be further enhanced by applying cryptographic techniques at the data transmission level and by hybrid approaches that perform either anonymization or randomization of the patient attributes.

References
1. Aggarwal, C.C., Philip, S.Y.: A general survey of privacy-preserving data mining models and algorithms. In: Privacy-Preserving Data Mining, pp. 11–52. Springer, Berlin (2008)
2. Rama Satish, K.V., Kavya, N.P.: Hybrid optimization in big data: error detection and data repairing by big data cleaning using CSO-GSA. Communications in Computer and Information Science, vol. 801. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-9059-2_24
3. Rama Satish, K.V., Kavya, N.P.: A framework for big data preprocessing and search optimization using HMGA-ACO: a hierarchical optimization approach. Int. J. Comput. Appl. (2018). https://doi.org/10.1080/1206212X.2017.1417768
4. Chen, S.W.: Confidentiality protection of digital health records in cloud computing. J. Med. Syst. 40(5), 124 (2016)
5. Xiao, X.K., Tao, Y.F.: Personalized privacy preservation. In: Proceedings of the ACM Conference on Management of Data (SIGMOD), pp. 229–240 (2006)
6. Rajman, M., Besançon, R.: Text mining: natural language techniques and text mining applications. In: Data Mining and Reverse Engineering, pp. 50–64. Springer, Boston, MA (1998); Liu, Y.: Fine-tune BERT for extractive summarization (2019). arXiv preprint arXiv:1903.10318
7. Ramya Shree, A.N., Kiran, P., Chhibber, S.: Sensitivity context-aware privacy preserving sentiment analysis. In: Reddy, A., Marla, D., Favorskaya, M.N., Satapathy, S.C. (eds.) Intelligent Manufacturing and Energy Sustainability. Smart Innovation, Systems and Technologies, vol. 213. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-4443-3_39
8. Kiran, P., Kavya, N.P., Sathish Kumar, S.: A novel framework using elliptic curve cryptography for extremely secure transmission in distributed privacy preserving data mining (2012). arXiv:1204.2610
9. Raju, N.V.S.L., Seetaramanath, M.N., Srinivasa Rao, P.: An optimal dynamic KCi-slice model for privacy preserving data publishing of multiple sensitive attributes adopting various sensitivity thresholds. Int. J. Data Sci. 4(4), 320–350 (2019)
10. Shree, R.: Privacy-preserving unstructured data publishing (PPUDP) approach for big data. Int. J. Comput. Appl. 178(28), 4–9 (2019)
11. Fung, B.C.M., Wang, K., Yu, P.S.: Anonymizing classification data for privacy preservation. IEEE Trans. Knowl. Data Eng. 711–725 (2007)
12. Ramya Shree, A.N., Kiran, P.: QAUE—quasi attribute utility enhancement: a hybrid method for PPDP. Int. J. Innov. Technol. Explor. Eng. 9(2S), 330–335 (2019). https://doi.org/10.35940/ijitee.B1087.1292S19


13. Han, J., Kamber, M.: Data Mining Concepts and Techniques, pp. 1–40. China Machine Press, Beijing (2006)
14. Soria, C.: Improving the utility of differentially private data releases via k-anonymity. In: 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (2013)
15. Wu, X., et al.: Privacy preserving data mining research: current status and key issues. In: International Conference on Computational Science. Springer, Berlin, Heidelberg (2007)
16. Patil, N.S., Kiran, P., Kiran, N.P., KM, N.P.: A survey on graph database management techniques for huge unstructured data. Int. J. Electr. Comput. Eng. 8(2), 1140 (2018)
17. Naqvi, M.R., Arfan Jaffar, M., Aslam, M., Shahzad, S.K., Waseem Iqbal, M., Farooq, A.: Importance of big data in precision and personalized medicine. In: 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1–6 (2020). https://doi.org/10.1109/HORA49412.2020.9152842
18. Ramya Shree, A.N., Kiran, P.: Sensitivity context awareness based privacy preserving recommender system (April 27, 2021). Available at SSRN: https://ssrn.com/abstract=3835011 or https://doi.org/10.2139/ssrn.3835011
19. Ramya Shree, A.N., Kiran, P.: Sensitivity context aware privacy preserving text document summarization. In: 2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 1517–1523 (2020). https://doi.org/10.1109/ICECA49313.2020.9297415
20. Kiran, P., Kavya, N.P.: A survey on methods, attacks, and metric for privacy preserving data publishing. Int. J. Comput. Appl. 53 (2012)

Designing a Smart Speaking System for Voiceless Community Saravanan Alagarsamy, R. Raja Subramanian, Praveen Kumar Bobba, Pradeep Jonnadula, and Sanath Reddy Devarapalli

Abstract Voiceless people face many issues in establishing communication with normal people. Usually, normal people are not professionally trained in sign language, so it becomes very hard to establish proper communication, and while traveling with normal people, voiceless people face many challenges in conveying their message. In order to enable simple communication, new products have been designed especially for the voiceless community that give voice to hand motions and gestures. The proposed mechanism utilizes a hand-motion reading system, integrated with flex sensors and hand gestures, along with a speaker unit. The whole framework circuit is driven by a microcontroller, and the entire data is processed and operated with the help of a Raspberry Pi. The system reads a person's hand motions by recognizing different forms of hand movements. The fully automated smart speaking system is intended to help voiceless people communicate with normal people by using a simple wearable device. Keywords Flex sensors · Raspberry Pi 3 · Speaker · Gloves · Battery · Printed circuit board · Accelerometer · Bluetooth module

1 Introduction Nearly 20% of the population faces disability problems, and disabled people face more issues while initiating communication with normal people. In order to make communication effective and easy, a smart speaking system has been designed [1]. Hand signals are used by the members of the voiceless community under the smart speaking system. This process is used to clarify the goal and entire S. Alagarsamy (B) · R. R. Subramanian · P. K. Bobba · P. Jonnadula · S. R. Devarapalli Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Anand Nagar, Krishnankoil, Tamil Nadu, India e-mail: [email protected] R. R. Subramanian e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_3


affirmation of the assessment technique [2]. Similarly, it describes the framework constraints, interfaces, and connections with other outside applications. An endeavor has been made to study the need and motivation for interpreting American sign language, which gives hearing-impaired individuals opportunities in the industry [3]. This work is created for speech-impaired individuals and is profitable for them because it lets them speak with everybody. In this strategy, a glove fitted with sensors is worn by the speech-impaired individual, who makes a gesture; the gesture is captured through the positions of the fingers. The captured signal is then mapped against the previously stored ones, and accordingly the required gesture is recognized from the database [4]. Proceeding in this way, the speech-impaired individual can compose the whole sentence that is required to be conveyed. This sentence is then converted to speech so that it is audible to everybody through the interfaced speaker. With this method, physically impaired people benefit and can communicate with everyone freely, which would be a great achievement for mankind. The product is economical, and the hardware used is also very simple. The restrictions of the regular system are overcome, and speech-impaired individuals can communicate without any problem. Information about all hand motions is prepared beforehand [5]. In India, around 4 million individuals are physically hindered, and it has been hard for voice-impaired individuals to convey their messages to ordinary individuals; there are times when a voiceless individual requires assistance from an ordinary individual or the other way around, yet their way of communicating becomes a barrier [6]. This may be because ordinary people lack education about sign languages, so mute individuals fail to convey their messages to them, and misunderstandings arise between speech-impaired and normal individuals.

2 Related Works Advanced digital gloves provide voice to the voiceless community. Glove-based frameworks are among the most significant endeavors aimed at acquiring hand-movement data [7]. Many speech-impaired people use sign-based communication, but they find it inconvenient to converse with those who do not understand signing; hence there is a need for an electronic device that can translate the signs used for communication into speech, so that communication can occur between mute people and the general public [8]. One detailed design uses an ordinary cloth driving glove fitted with flex sensors along the length of each finger: mute people can use the glove to perform hand signs, which are converted into speech so that common people can understand them. That work gives a guide to building an automated glove, explores the characteristics of the device, and discusses future work; a major objective is to furnish the reader with a basis for understanding glove system technology as used in bioscience [9]. A voice transformation framework has also been proposed for communication. Around 2.78% of people are not able to talk; some people can easily get information from their movements, but the rest cannot understand their way of conveying a message. This framework depends on motion sensors: the messages and all templates are kept in a database, the template database is fed into a microcontroller, and the motion sensor is fixed on the hand [10]. Every action of the motion sensors is sampled and gives a signal to the microcontroller, which matches the movement of the input and produces the corresponding signal; the output of the framework uses the speaker. By appropriately updating the database, the mute can speak like a typical individual using the artificial mouth. The framework additionally incorporates a text-to-speech conversion (TTS) block that renders the matched gestures [11, 12]. Simple methods to implement hand gesture recognition combine principal component analysis with rotation invariance: first, a picture database consisting of four different hand gesture images is created [13, 14], and the method is applied to input test images captured from the sensor device of the framework to find an adequate match from the database, so that the right matches are detected easily; the results were analyzed for accuracy and speed [15]. Recognizing distinctive hand gestures and movements has also been used for communication through a framework of five fundamental stages: palm cropping, skin filtering, edge detection, feature extraction, and classification [16]. First, the hand is detected; palm cropping and skin filtering extract the palm segment of the hand, the extracted images are processed using Canny edge detection to obtain outline pictures of the hand, the palm features are separated using the KL transform, and finally the gesture is recognized and identified using a suitable classifier [14]. This is a straightforward proposal for recognizing distinctive hand movements and signals. A voice conversion framework for differently abled mute individuals has likewise been proposed: many mute people use signing for communication but find it inconvenient to convey messages to others who are not acquainted with hand sign language [17, 18]. These endeavors aim to lower this barrier in communication. The glove is fitted with flex sensors along the thumb and the remaining fingers, and mute people can perform hand movements


Table 1 Summary of existing work

S. No. | Technique used                     | Merits of the technique                                                      | Limitations of the technique
1      | Neural networks [10]               | Simulated the relationship between user and machine                         | The parameters used for training the network are the limitation
2      | Convolutional neural networks [15] | Automatically identifies the recognition and sign of a traffic system       | Reliability of the system can be further enhanced
3      | Sign language [16]                 | Sign language is used to reduce the gap between normal and deaf-dumb people | Size of the device and interaction with the users are the limitations of the method
4      | Gesture recognition method [17]    | Wireless device is used to enable communication between various people      | Translation and usage of gloves are the barriers in this method
5      | Markov's model [19]                | Gesture system can be used for easily understanding sign language           | Only similar types of community can use this technique

Gesture system can use for easily understanding the sign language

Similar types of community can used this technique

with the gloves, and these will be converted into speech so that typical people can understand their intent (Table 1). The limitations of the existing techniques are listed below.

• The cost of designing the equipment is high.
• The existing systems are targeted toward the same communities.
• The proposed devices are targeted toward various communities.

The research significance of the paper is listed below:

(i) The proposed system reduces the communication gap between deaf–dumb people and normal people.
(ii) People who are not able to speak can communicate easily with normal people without any assistance.
(iii) The system helps voice-impaired individuals who are trying to learn new things and empowers them to ask and answer questions.
(iv) The proposed system gives comfort to voice-impaired individuals so that they do not feel isolated.

The novelty of the paper is justified by the following points. In the existing systems, the motions are captured through a camera, and matching is performed using sign language. In the proposed system, the gestures are identified based on the glove movement. The device is very compact to carry to any place. The accuracy of hand motion can be captured better using gloves. The sensor-based technique captures the accurate motion through the flex sensors and transmits the data to the Raspberry Pi. The digital data is used as input for matching purposes.


3 Methodology

Figure 1 shows the block diagram of the glove used for the translation purpose. First, the user gesture is given as input to the flex sensor; then the microcontroller and accelerometer are used for the purpose of translation; and finally, the Bluetooth module is used as the connecting device. The detailed process of the proposed methodology is explained in this section.

3.1 System Design

The main goal of the design is a device that utilizes a hand movement recognition system with motion and flex sensors along with a speaker. A Raspberry Pi model is used to process the entire system. The system contains stored messages that help mute people convey messages [10]. The system detects the hand motions of a person based on different hand movements. The Raspberry Pi processor continuously receives the input sensor data and then processes it. It then checks for the correct matching message for the given set of sensor data. Once it is found in storage, the message is transformed by using text-to-speech processing through an interconnected speaker. This project contains two important modules: one is the flex sensor, which depends

Fig. 1 Block diagram of glove design: User Gesture → Flex Sensor → Micro-controller → Accelerometer → Bluetooth module


on the symbol of the hand motion recognition module, and the other is the voice module. Flex sensors and an accelerometer are fixed on the glove, fitted on the little and index fingers. A flex sensor is used to calculate the angle at which the sign symbols are shown by the hand. An accelerometer within the symbol detection system is used as an angle-sensing device, which detects the angle and degree to which the finger is tilted. The flex sensor is interfaced with the digital ports of the Raspberry Pi model [11] (Fig. 2). The output information from the flex sensor and accelerometer is given to the Raspberry Pi, where it is processed and converted into its matching words or characters. The model compares these readings with the predefined words, the symbols are recognized, and messages are displayed on the LCD. The message output that comes from the sensor-based system is then sent to the sound or voice module [4]. The sound

Fig. 2 System design: Start → Hand Gesture → Sensor Data → Sign Recognized → Speaker and Display unit → End


module consists of several channels in which messages can be recorded. The voice recording and playback module is used for giving audio data to the person, so that the symbol alphabets are stored and obtained in audio format through the speaker.
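To make the matching step concrete, the following is a minimal Python sketch of comparing a live sensor frame against stored gesture templates; the template values, message names, tolerance, and the match_gesture helper are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the gesture-matching step: compare a live sensor
# reading vector against stored templates and return the stored message.
# Template values and the tolerance are illustrative assumptions.

TEMPLATES = {
    "HELP": [512, 498, 730, 725, 510],      # hypothetical flex readings
    "WATER": [700, 690, 512, 500, 505],
    "THANK YOU": [505, 720, 715, 498, 502],
}

def match_gesture(reading, tolerance=40):
    """Return the stored message whose template is closest to the
    reading, or None if no template falls within the tolerance."""
    best_msg, best_dist = None, float("inf")
    for message, template in TEMPLATES.items():
        # Sum of absolute differences across the five finger sensors
        dist = sum(abs(r - t) for r, t in zip(reading, template))
        if dist < best_dist:
            best_msg, best_dist = message, dist
    # Accept only if the average per-sensor deviation is small enough
    return best_msg if best_dist / len(reading) <= tolerance else None

if __name__ == "__main__":
    live = [510, 500, 728, 722, 512]         # simulated sensor frame
    print(match_gesture(live))               # -> "HELP"
```

A simple per-sensor tolerance keeps the check cheap enough to run on every frame; a trained classifier could replace it without changing the surrounding flow.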

3.2 Dataset Process

Because no datasets were available for training the machine with our sensor information, a dataset for training the model was created: the serial-port readings for each sign representing the alphabets and some frequently used words were collected and saved in comma-separated values (CSV) file format. All the collected information was labeled with the corresponding word as its result value, and the collected records were shuffled randomly in order to reduce the variance and to make sure that the model remains general and does not overfit. Readings were taken at various times, conditions, and settings, including different positions of the palm in the air for the same category as well as different bending levels of the fingers for the same category. In the proposed system, the data is captured with the hand gesture method. The captured information is converted into sensor data for machine understanding. Finally, the display unit and speaker are used to output the identified symbols.
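A hedged sketch of this collection step, assuming the sensor values arrive as comma-separated lines over a serial link readable with the pyserial package; the port name, baud rate, sample count, and sign labels are placeholders.

```python
# Sketch of the dataset-collection step: read one line of sensor values
# from the serial port per sample, append the sign label, and save to CSV.
# Port name, baud rate, and sample count are assumptions for illustration.
import csv
import random
import serial  # pyserial

PORT, BAUD, SAMPLES_PER_SIGN = "/dev/ttyUSB0", 9600, 50

def collect(sign_label, rows):
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        for _ in range(SAMPLES_PER_SIGN):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line:                        # e.g. "512,498,730,725,510"
                rows.append(line.split(",") + [sign_label])

rows = []
for sign in ["A", "B", "HELLO", "THANKS"]:   # placeholder sign labels
    input(f"Show sign '{sign}' and press Enter...")
    collect(sign, rows)

random.shuffle(rows)                         # shuffle to reduce ordering bias
with open("gesture_dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["f1", "f2", "f3", "f4", "f5", "label"])
    writer.writerows(rows)
```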

3.3 Hardware Design

3.3.1 Raspberry Pi Model

The Raspberry Pi is an ARM Cortex-based modern development board designed for electronics engineers and developers [9]. It is a single-board computer working on low power. With its fast processing and storage, the Raspberry Pi can be used for performing several functions at the same time (Fig. 3).

3.3.2 Flex Sensor

When the sensor is flexed, the resistance across the sensor increases. The flex sensor's performance depends on the resistivity of its carbon material, which is formed onto a flexible substrate. Bending the sensor deflects its metal plates and thus increases the resistance of the sensor. When a user shows a symbol, this sensor detects the sign symbol, and after it is detected, the symbol is ready for processing (Fig. 4).
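The paper only states that the flex sensor is interfaced to the Raspberry Pi. Since the Pi has no analog inputs, one common arrangement (an assumption here, not stated in the paper) reads the sensor through an MCP3008 ADC using the gpiozero library; the calibration points in bend_angle are illustrative.

```python
# Hedged sketch: reading a flex sensor through an MCP3008 ADC with the
# gpiozero library. The MCP3008 and the angle mapping are assumptions;
# the paper itself only states that the sensor is interfaced to the Pi.
from gpiozero import MCP3008
from time import sleep

flex = MCP3008(channel=0)        # flex sensor divider on ADC channel 0

def bend_angle(raw, flat=0.30, full_bend=0.75, max_deg=90.0):
    """Linearly map the normalized ADC reading (0.0-1.0) to an
    approximate bend angle; calibration points are illustrative."""
    frac = (raw - flat) / (full_bend - flat)
    frac = min(max(frac, 0.0), 1.0)          # clamp to the calibrated span
    return frac * max_deg

while True:
    print(f"bend ~ {bend_angle(flex.value):5.1f} deg")
    sleep(0.2)
```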

Fig. 3 Raspberry Pi model

Fig. 4 Flex sensor


Fig. 5 Accelerometer

3.3.3 Accelerometer

An accelerometer is a small, thin, low-power device used to detect hand motions. Sometimes the user makes unwanted hand movements that are not related to sign language. The accelerometer provides certain voltage outputs. It is capable of quantifying the static acceleration of gravity in tilt-sensing activities or motions, in addition to dynamic acceleration caused by motion, shock, or vibration (Fig. 5).
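As a sensor-agnostic illustration of the tilt sensing described above, the pitch and roll angles can be computed from the raw axis readings when only gravity acts on the device; the numeric readings in the example are invented for demonstration.

```python
# Sketch of tilt sensing from raw accelerometer axes (sensor-agnostic):
# with only gravity acting, the pitch and roll angles follow from the
# ratios of the axis readings. Readings here are in g units.
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from static accelerometer axes."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

# Example: hand tilted forward; values are illustrative, not measured.
print(tilt_angles(0.50, 0.10, 0.86))   # -> roughly (30.0, 5.7)
```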

3.3.4 Bluetooth Module

The Bluetooth module has some integrated sharing functions. It is a device used to share different types of files after connecting to other devices [6]. It has a limited range for communicating with other devices and offers several modes depending on its usage. Bluetooth sends the sign symbols to the Raspberry Pi for checking matched symbols. This Bluetooth link is used to produce speech output from messages stored in the memory of the Raspberry Pi module (Fig. 6).
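A minimal sketch of pushing a recognized symbol over such a Bluetooth link, assuming the module has been bound to an RFCOMM serial device beforehand and is accessed with pyserial; the device path, baud rate, and symbol numbering are assumptions, since the paper does not name the interface.

```python
# Sketch of sending a recognized symbol over a Bluetooth serial link.
# An RFCOMM serial device (/dev/rfcomm0), bound beforehand, is assumed.
import serial  # pyserial

def send_symbol(symbol_id):
    with serial.Serial("/dev/rfcomm0", 9600, timeout=1) as bt:
        bt.write(f"{symbol_id}\n".encode("ascii"))

send_symbol(7)   # the Pi side looks up stored message number 7
```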

3.3.5 LCD Display

An LCD is an electronic display module which utilizes liquid crystal to produce a visible image. The 32-character LCD is a very common module used in DIY projects and electronic circuits. On this LCD, each word and character appears in a pixel matrix. Defined operations, such as initialization and screen management, are shown on the display after it is switched on. After processing through the Raspberry Pi, the messages or characters become visible on the LCD [8] (Fig. 7).


Fig. 6 Bluetooth module

Fig. 7 LCD display

3.3.6 Speaker

The speaker is used to produce sound after a sign is matched with the built-in messages and processed by the Raspberry Pi module. The speaker is connected to the Raspberry Pi module (Fig. 8).

4 Experimental Results and Discussion

This work focuses on designing an electronic system that can be utilized for communication between voiceless individuals and ordinary individuals. The following points give a short outline of the findings from this work. Step 1: The design is more reliable and quicker to respond when contrasted with existing structures, as it utilizes the Raspberry Pi. Step 2: A response time of a few seconds. Step 3: More consolidated and mobile; productive communication between voiceless and typical individuals.


Fig. 8 Speaker

Step 4: Sign language includes various signals and hand movements, and practicing it improves the fine motor skills of differently abled individuals. A mobile application can be built for the efficient use of the system and to make it user-friendly. There are many positive aspects of our project which produce high performance and make the system straightforward to handle. The advantages of the project are:

(i) The response time of our project is very low.
(ii) Our system is fully automated.
(iii) The cost of the system is very low, and it is flexible for the user.

At that point, utilizing Python programming, the real-time image of the sign is captured and compared with the stored information. Python then gives the output to the Raspberry Pi, and that output corresponds to the matched input. From the Raspberry Pi, this output is given to the storage circuit, which has 10 inputs. Each of these input sources has a different message stored. Then, in line with the output received by the Raspberry Pi, the respective message is displayed. There is a sound speaker through which the message can be heard easily (Fig. 9).
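The final speech-output step can be sketched in a few lines of Python; pyttsx3 is an assumed text-to-speech backend here, since the paper does not name a TTS package, and the message table is a placeholder.

```python
# Sketch of the speech-output step: speak the message that matched the
# recognized sign. pyttsx3 is an assumed TTS backend, not named in the paper.
import pyttsx3

STORED_MESSAGES = {1: "I need help", 2: "I am hungry", 3: "Thank you"}

def speak(message_id):
    engine = pyttsx3.init()
    engine.say(STORED_MESSAGES.get(message_id, "Unknown sign"))
    engine.runAndWait()          # blocks until playback finishes

speak(1)
```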

4.1 Communication from Common Person to Dumb Person

For a specific sign, there are two parameters to be known: first, the finger positions, which can be calculated by the compressive sheet, and second, the hand tilt, which can be detected by the accelerometer. The information from the compressive sheet and the accelerometer is sent to the smart glove's Raspberry Pi model, which transmits it to the receiving end using an RF module. The receiving-end Raspberry Pi model then generates a visible and audible word or phrase corresponding to the given symbol. At the receiving end, there is a controller containing memory with a


Fig. 9 Integrated diagram of proposed system

sign language database. The message from the normal person comes as speech or as text. The message from the microphone or the keypad is sent to the receiving-end module, which displays the matching message or phrase as text and sign language images on the LCD.

4.2 Output Analysis Graph

Figure 10 shows the accuracy of the hand gestures with the accuracy in seconds. The proposed system produces accurate results in terms of translation for voiceless individuals. More than 20 gestures are used to verify the effectiveness of the proposed system.

Fig. 10 Output analysis graph (x-axis: different types of hand gestures; y-axis: accuracy in seconds)


5 Conclusion

The proposed strategy is tested on various gestures. It produces stable and good outcomes, enabling the wider community to come and share their thoughts with these physically disabled individuals. The recommended strategy empowers the mute to chat with everyone by utilizing the Raspberry Pi. This electronic talking framework can help speech-impaired individuals converse with ordinary individuals anywhere in the world, and a data glove is finally produced for the voiceless individuals. Now, they do not have to face any kind of issue with their communication. The work proposes a translational gadget for voiceless individuals utilizing glove innovation. The proposed strategy places two flex sensors on a glove to identify the motions of a person. This strategy has its voice output in the regional language and can be utilized as an interpreter to converse with individuals of assorted regions without any problem.

References

1. Aditya, C., Siddharth, T., Karan, K., Priya, G.: Smart glove learning assistant for mute peoples. Int. J. Innov. Res. Comput. Commun. Eng. 5, 1–5 (2017)
2. Archana, S., Khatal, R., Khupase, S., Asati, S.: Hand sign detection for Indian sign language. In: Proceeding in IEEE International Conference on Computer Communication and Informatics, 1–4 (2012)
3. Ayachi, N., Kejriwal, P., Kane, L., Khanna, P.: Analysis of the hand motion trajectories for recognition of air-drawn symbols. In: 2015 Fifth International Conference on Communication Systems and Network Technologies, Gwalior (2015)
4. Brashear, H., Starner, T., Lukowicz, P., Junker, H.: Using multiple sensors for mobile sign language. In: Seventh IEEE International Symposium on Wearable Computers, 1–8 (2003)
5. Horesh, H.: Computer vision based traffic sign sensing for smart transport. J. Innov. Image Process. (JIIP) 1, 11–19 (2019)
6. Joshi, H., Bhati, S., Sharma, M., Matai, A.: Detection of finger motion using flex sensor for helping speech blind. Int. J. Innov. Res. Sci. Eng. 6, 20798–20804 (2017)
7. Li, Y., Jianxun, X., Zhang, X., Wang, K.: Automatic detection of sign language subwords based on portable accelerometer and EMG sensors. In: Proceedings of the 12th International Conference on Multimodal Interfaces, 1–7 (2010)
8. Ravikiran, J., Mahesh, K., Mahishi, S., Dheeraj, R., Sudheender, S., Nitin, V.: Finger detection for sign language. In: Proceedings of the International Multi Conference of Engineers and Computer Scientists, 1, 1–5 (2009)
9. Rebollar, H., Lindeman, K.: A raw materials approach for transfer American signing into sound and words as text by automatic face and hand symbol detection. In: Proceeding in 6th IEEE International Conference, 1–6 (2004)
10. Reda, M.M., Mohammed, N.G.: Sign-voice bidirectional communication system for normal deaf/dumb and blind people based on machine learning. In: 2018 1st International Conference on Computer Applications and Information Security (ICCAIS) (2018)
11. Sneha, T., Sushma, D., Sahana, M.: Hand gesture based dumb and deaf communication using smart gloves. Int. J. Eng. Comput. Sci. 7, 23806–23807 (2018)
12. Joshva Devadas, T., Raja Subramanian, R.: Paradigms for intelligent IoT architecture. In: Peng, S.L., Pal, S., Huang, L. (eds.) Principles of Internet of Things (IoT) Ecosystem: Insight Paradigm. Intelligent Systems Reference Library, 174 (2020)


13. Subramanian, R.R., Babu, B.R., Mamta, K.: Design and evaluation of a hybrid feature descriptor based handwritten character inference technique. In: 2019 IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS) (2019)
14. Subramanian, R.R., Seshadri, K.: Design and evaluation of a hybrid hierarchical feature tree based authorship inference technique. In: Kolhe, M., Trivedi, M., Tiwari, S., Singh, V. (eds.) Advances in Data and Information Sciences. Lecture Notes in Networks and Systems, vol. 39. Springer, Singapore (2019)
15. Suma, V.: Computer vision for human-machine interaction review. J. Trends Comput. Sci. Smart Technol. (TCSST) 1, 131–139 (2019)
16. Tanahashi, M., Ochiai, H.: A new reduced-complexity conditional-mean based MIMO signal detection using symbol distribution approximation technique. IEEE Trans. Signal Process. 59, 5644–5651 (2011)
17. Verma, M.P., Shimi, S.L., Chatterji, S.: Design of smart gloves. Int. J. Eng. Res. Technol. (IJERT) 3, 210–214 (2014)
18. Vutinuntakasame, S., Jaijongrak, V.: An assistive body sensor network glove for speech and hearing problems. In: 2011 International Conference on Body Sensor Networks (2011)
19. Verma, P., Shimi, S.L., Priyadarshani, R.: Design of communication interpreter for deaf and dumb person. Int. J. Sci. Res. (IJSR) 4, 2640–2643 (2015)

ANNs for Automatic Speech Recognition—A Survey

Bhuvaneshwari Jolad and Rajashri Khanai

Abstract The rapid and widespread industrial growth increases the need for industrial automation. Automatic speech recognition (ASR) is an important aspect that has been incorporated into industrial automation by using artificial intelligence. In the last two decades, the artificial neural network (ANN) has attracted significant research attention for speech recognition due to its ability to deal with complex problems. This paper provides a recent survey on the utilization of ANN for various speech recognition applications. The survey focuses on the ANN architecture, methodology, hyperparameters, database, and performance evaluation. Keywords Artificial neural network · Automatic speech recognition · Artificial intelligence · Speech processing

1 Introduction

Speech is considered an efficient, versatile, and natural way of communication. The speech signal is generated by the air obtained from the lungs and shaped by the pharynx, vocal tract, mouth, tongue, and teeth. Depending upon the production and shaping, sound signals are classified into four types, namely fricatives, non-fricatives, nasals, and plosives [1, 2]. Speech recognition systems are greatly affected by articulation, accents, background noise, emotional state, echoes, gender, pronunciation, roughness, pitch, speed, and volume. Speech expresses linguistic, speaker, and environmental information. Linguistic information consists of message and language; speaker information includes psychological, emotional, physiological, and regional characteristics of the voiced component; environmental information depicts the speech production and transmission information. The human auditory

B. Jolad (B) ECE, KLE Dr. MSSCET, Belagavi, India R. Khanai Department of Electronics and Communication, KLE Dr. MSSCET, Belagavi, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_4


Fig. 1 Generalized flow diagram of ASR

system can understand such complex speech information. This human ability has encouraged researchers to build systems that imitate it [3, 4]. ASR can be defined as a self-determining computational algorithm that can convert the speech signal into a control signal or text. ASR systems are further classified into speaker-dependent, speaker-independent, continuous speech, isolated speech, and large, medium, and small vocabulary ASR systems based on the speaker mode, speaking style, and database used [5–7]. An ASR system consists of speech recording, preprocessing, feature extraction, and classification stages, as shown in Fig. 1. The speech preprocessing phase deals with speech enhancement, noise removal, voice activity detection, and speech separation. The feature extraction stage is related to the extraction of distinguishable parameters of the speech signal. Speech features can be prosodic, spectral, voice quality, and Teager energy operator (TEO)-based features. In the past, various feature extraction techniques have been used for speech feature extraction, such as independent component analysis (ICA), principal component analysis (PCA), wavelet transform (WT), linear predictive coding (LPC), mel-frequency cepstrum coefficient (MFCC), relative spectral filtering approach (RASTA), and perceptual linear coding (PLC). For the classification, various machine learning classifiers are used, such as the K-nearest neighbor classifier, vector quantization, linear discriminant analysis (LDA), support vector machine, and artificial neural network. ANN has been steadily used for various speech processing applications. ANN has the capability to work at any phase of a speech recognition system. It can be used for speech enhancement, speech separation, speech feature extraction, and speech classification [8–12]. This paper presents a survey of recent work on ANN for various speech recognition systems. It focuses on the various artificial neural network architectures, learning methods, and hyperparameters of ANN such as the number of layers, learning rate, and iterations. Further, focus is given to the various applications of speech recognition using ANN. The content of the remaining paper is organized as follows: Section 2 provides brief information on the artificial neural network. Further, Section 3 describes the survey of recent work on speech recognition based on ANN. Next, Section 4 provides the applications of ANN for ASR. Finally, Section 5 gives the conclusions of the survey.


2 Overview of Artificial Neural Network

ANN is a computational framework inspired by the biological nervous system. The process of ANN works similarly to the neurons in the brain. The ANN architecture consists of the input layer, hidden layers, and output layer. An ANN is made up of basic and highly interconnected units known as neurons. Each neuron of one layer is connected with all other neurons of the adjacent layer to pass and accept information. The process of ANN consists of data compilation, processing and analysis, selection of the number of hidden layers and hidden neurons, initialization and adjustment of weights/bias, training, testing, and optimization of the network. The hidden layers are chosen on a trial-and-error basis. The weights are adjusted using learning algorithms. The process of weight adjustment is repeated until the error is within the threshold range [13–16]. The structure of an artificial neuron is shown in Fig. 3. Each input is multiplied by the weight value, and the product is given to a summation block along with a bias value. This value is further given to the transfer function to produce the output. Each neuron in the input layer represents a single sample/value from the input data/feature vector. Weights are initialized randomly, but in later iterations they are updated automatically based upon the learning technique. The number of output neurons is the same as the number of output classes of the given problem [17, 18]. For the input signal $X = \{x_1, x_2, x_3, \ldots, x_N\}$ and weight values $W = \{w_1, w_2, w_3, \ldots, w_N\}$, the output of the neuron is given by Eqs. (1) and (2):

$$\text{output} = x_1 w_1 + x_2 w_2 + \cdots + x_N w_N + b \qquad (1)$$

$$\text{output} = \sum_{i=1}^{N} x_i w_i + b \qquad (2)$$

Once the network is trained, the test signal is given to the network as input, and the output is computed. The output neuron having the maximum value gives the output class, as given in Eq. (3):

$$\text{ClassLabel} = \max_{i=1 \text{ to } N} (\text{Output}_i) \qquad (3)$$
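A small NumPy sketch of Eqs. (1)–(3), computing the weighted sum plus bias for each output neuron and taking the class with the maximum response; the dimensions and random weights are placeholders standing in for a trained network.

```python
# Sketch of Eqs. (1)-(3): a single dense layer computing sum(x_i * w_i) + b
# per output neuron, with the class label taken as the argmax. Weights and
# the input are random placeholders; a trained network would supply them.
import numpy as np

rng = np.random.default_rng(0)
N, CLASSES = 13, 4                 # e.g. 13 MFCC features, 4 output classes

x = rng.normal(size=N)             # input feature vector X = {x1..xN}
W = rng.normal(size=(CLASSES, N))  # one weight vector per output neuron
b = rng.normal(size=CLASSES)       # bias per output neuron

output = W @ x + b                 # Eqs. (1)-(2) for all neurons at once
class_label = int(np.argmax(output))   # Eq. (3): neuron with maximum value
print(output, class_label)
```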

Depending upon the data flow, ANNs are broadly classified into feed-forward and feedback neural networks, as shown in Fig. 2. In the feed-forward neural network (FFNN), the signal flows in only one direction, from input toward output, as shown in Fig. 4. FFNNs are again categorized into three types: single-layer perceptron (SLP), multilayer perceptron (MLP), and radial basis function network (RBFN). SLP includes one input layer with no hidden layer and can be used for simple applications, whereas MLP includes multiple hidden layers that can be used to represent complex problems. The RBF network is similar to an FFNN where the radial basis function is used as the activation function.

Fig. 2 Classification of ANNs

Fig. 3 Structure of artificial neuron

Fig. 4 Feed-forward neural network (FFNN)


In the feedback neural network (FBNN), data can flow in bidirectional mode, as shown in Fig. 5. FBNNs are classified into the recurrent neural network (RNN), Kohonen's self-organizing map (KSOM), Hopfield network (HN), and competitive network (CN) [19]. RNN consists of a memory unit implemented by self-looping on the hidden layer neurons. It gives the input to the next time step rather than giving the input to the next layer. This network is used to demonstrate the dynamic behavior of speech in the time domain. HN includes a single layer which consists of one or more fully connected recurrent neurons. It is generally used for optimization and auto-association tasks. KSOM uses competitive learning for training the network. The self-organizing map can be represented by a two-dimensional sheet of processing elements. Each unit has its own weight vector, and learning of the self-organizing map (SOM) depends on the adaptation of these vectors. The processing elements of the KSOM are made competitive in a self-organizing process. Competitive learning is unsupervised learning in which nodes compete to react to a subset of the input data. Competitive learning is based on Hebbian learning, which works by improving the uniqueness of each node in the network. Mostly, it is suitable for discovering clusters in unseen data. Some of the major ANN architectures are shown in Fig. 6. Learning of ANN can be performed using various error-based techniques such as the Widrow−Hoff rule, winner-takes-all, outstar learning rule (Grossberg learning), stochastic gradient descent learning (SGD), back-propagation learning (BP), particle swarm optimization (PSO), genetic algorithm (GA), and simulated annealing (SA). Different unsupervised techniques utilized for the learning of ANN are Hebbian and competitive learning techniques. ANN has the capability to complete a task based on the training data, learning data, and initial experience. ANN can learn independently as it has the facility to learn in unsupervised learning mode. ANN supports parallel computation. Development of an ANN-based system is done via learning instead of

Fig. 5 Feedback neural network (FBNN)


Fig. 6 Structure of ANNs: a Single-layer perceptron. b Radial basis function network. c Multilayer perceptron network. d Recurrent neural network. e Hopfield’s network. f Kohonen’s self-organizing map network


programming. ANN is a nonlinear model that can be used in dynamic environments [20].

3 Survey of ANN for Speech Recognition

Various speech disorders place restrictions on the response of ASR systems. Hence, Nidhyananthan et al. [21] explored ANN for the speech recognition of dysarthric subjects. They used glottal features and voiced and unvoiced stops as the features for the ANN algorithm. ANN helped to increase the distinctiveness of the raw speech features so that they could be classified correctly. Further, Ahammad et al. [22] presented connected Bangla speech recognition using a back-propagation neural network (BPNN) that achieved 98.46% accuracy (for 50 hidden layers and a learning rate of 0.3) for Bangla digit recognition. They suggested using a genetic algorithm for the learning of ANN instead of traditional gradient descent methods. Speaker independence plays an important role in speech emotion recognition (SER). A combination of prosodic features such as pitch, formant frequency, and energy, together with MFCC spectral features and an ANN classifier, can give better performance for SER [23]. To fulfill the need for noise-robust speech recognition, robust and compressed features are required. Gupta et al. [24] presented ANN to compress and increase the robustness of raw MFCC features for noise-robust speech recognition. Their method is trained using the scaled conjugate gradient back-propagation (SCG) algorithm and has shown better results at very low signal-to-noise ratio (SNR) levels. Later, Souissi and Cherif [25] proposed a speech recognition system based on MFCC features and ANN. They used an LDA classifier to improve the discriminatory ability of the MFCC features. Their ANN consists of 250 hidden neurons and is learned using the Bayesian regularization algorithm. Subsequently, Yusnita et al. [26] recommended gender recognition based on linear predictive coding (LPC) and an MLP-based ANN. Their proposed system resulted in a 93.3% gender recognition rate, but its performance is limited because of noisy data. Subsequently, Bajpai et al. [27] implemented ANN for the extraction of isolated words from continuous speech for speech recognition based on MFCC and Euclidean distance matching. They used the scaled conjugate gradient back-propagation (SCG) algorithm for the training of the ANN. It is noted that the performance of SCG is quite superior to the Polak-Ribiere conjugate gradient (CGP), resilient back-propagation (RP), and conjugate gradient with Powell/Beale restarts (CGB) learning algorithms. Further, in [28], a multilayer feed-forward BPNN (FFBPNN) is presented for female voice recognition based on MFCC features. It gave better results for single-speaker training compared to multiple-speaker training. They investigated that the effects of homonyms and noise can degrade the performance of the ANN. Speech recognition system performance is challenged if the dataset consists of too many similar-sounding words. To overcome this problem, Shukla et al. [29] proposed an ANN optimized using the opposition artificial bee colony algorithm (OABC)


that considers multispeaker data for speech recognition. In their work, the ANN is learned using the Levenberg–Marquardt algorithm, and the amplitude modulation spectrogram algorithm is used for speech feature extraction. Afterward, Eljawad et al. [30] described Arabic voice recognition based on a multilayer neural network and a Sugeno-type fuzzy logic algorithm. They used the three-level discrete wavelet transform (DWT) for feature extraction. MLP gave better performance (94.5%) than the fuzzy logic algorithm (77.1%). Hyperparameter selection in a neural network is a critical and time-consuming task. To cope with this problem, Feiye et al. [31] presented the artificial bee colony (ABC) algorithm for the optimization of a feed-forward ANN, which enhances the global search capability and balances the exploration and exploitation of the ANN algorithm. Consequently, Gowda et al. [32] presented MFCC and ANN for controlling an air vehicle. ANN gave a more stable recognition rate than a hidden Markov model (HMM)-based system, with 93% word accuracy. Most speech recognition systems' performance hugely depends upon the accent of the language. To make the speech recognition system robust against accent, Sarma et al. [33] investigated an artificial neural network for spoken digit recognition for the Indian northeastern accent. They used linear prediction coefficients (LPC) and PCA for the feature extraction of speech samples. It is observed that increasing the sample size increases the performance of the digit recognition system. Next, Devi et al. [34] used a multilayer perceptron (MLP) model trained using a genetic algorithm (GA) for automatic speaker recognition. They used the MFCC and PCA algorithms for feature extraction and feature reduction. It resulted in 100% accuracy for 20 testing samples and 150 training samples. Further, Fulya et al. [35] offered a neural network for gender classification based on voice signals. Increasing the number of iterations for the training of the ANN increases the accuracy of the system. ANN gave 81.4% gender recognition accuracy for 100 iterations. Subsequently, Wan et al. [36] presented BPNN for isolated word speech recognition based on mel-frequency cepstral coefficients (MFCC). BPNN has the ability to map complex nonlinear functions. Batzorig et al. [37] developed a neural network model for Mongolian language speech recognition based on MFCC features. The K-means algorithm is used to balance the length of the MFCC features and the speech signal length, as the neural network requires input with a constant length. The Levenberg–Marquardt (LM) back-propagation algorithm is used for the training of the neural network. It is observed that this approach is less sensitive to local search and better suited for small-size databases. Table 1 gives a concise comparative analysis of the various ANNs employed for different speech recognition applications.
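Most of the surveyed systems pair MFCC features with a feed-forward ANN trained by back-propagation. A hedged sketch of that generic pipeline using librosa and scikit-learn follows; neither tool is named in the surveyed papers, and the file names, sampling rate, labels, and hidden-layer size are placeholders.

```python
# Generic MFCC + MLP pipeline in the spirit of the surveyed systems.
# librosa/scikit-learn, file paths, and labels are illustrative assumptions.
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def mfcc_vector(path, n_mfcc=13):
    """Load a clip and average its MFCC frames into one fixed-size vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)           # (n_mfcc,) utterance-level feature

# Placeholder corpus: one WAV per utterance with an integer digit label.
files = ["digit0_a.wav", "digit1_a.wav", "digit0_b.wav", "digit1_b.wav"]
labels = [0, 1, 0, 1]

X = np.stack([mfcc_vector(f) for f in files])
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
clf.fit(X, labels)                     # back-propagation training
print(clf.predict(X[:1]))
```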

4 Applications of ANN for ASR

Because of faster computation and the ability to handle complex problems, ANNs are used for various speech recognition applications, as shown in Fig. 7. Speech recognition plays an imperative role in speaker recognition applications, which can be used

Table 1 Comparative analysis of ANNs for ASR

S. No. | Reference | Application | Methodology | Database | Performance metrics
1 | Ahammad et al. [22] | Bangla digit recognition | MFCC + BPNN | 260 samples | Accuracy (98.46%)
2 | Shaw et al. [23] | Speech emotion recognition | Prosodic and spectral features + ANN | 68 samples | Accuracy (86.87%)
3 | Gupta et al. [24] | Speech recognition | MFCC + ANN | 65 words | Accuracy (99.1%)
4 | Souissi and Cherif [25] | Speech recognition | MFCC-LDA + ANN | Saarbrucken voice database (2225 samples) | Accuracy (87.92%)
5 | Bajpai et al. [27] | Speech recognition | MFCC + ANN + Euclidean distance | 276 samples | Accuracy (96.74%)
6 | Stanley et al. [28] | Speaker recognition | MFCC + FFBPNN | Single speaker—112, Multiple speakers—370 | Single speaker (100%), Multiple speakers (87.20%)
7 | Wan et al. [36] | Isolated word speech recognition | MFCC + BPNN | 4 words (400 samples) | Accuracy (80%)
8 | Sarma et al. [33] | Digit recognition | LPC + PCA + ANN | 50 samples of 10 digits (English) | Accuracy (83.84%)
9 | Shukla et al. [29] | Speech recognition | ANN + OABC | 1080 samples (30 words) | Sensitivity (90.41%), Specificity (99.66%), Accuracy (99.36%)
10 | Eljawad et al. [30] | Speech recognition | DWT + MLP | 250 samples | Accuracy (94.5%)
11 | Devi et al. [34] | Speaker recognition | MFCC + PCA + MLP-GA | Free spoken digit dataset (FSDD) (170 samples) | Accuracy (100%)
12 | Batzorig et al. [37] | Speech recognition | MFCC + K-mean + ANN | 40 words (400 samples) database | Accuracy (99.1%)
13 | Fulya et al. [35] | Gender recognition | MFCC + ANN | TIMIT | Accuracy (81.4%)


Fig. 7 Application of speech recognition

for the correct verification of a person [34]. In this case, speech can be treated as a behavioral biometric trait. Mobile robot navigation [38], robotics [39], and conversational speech recognition [40] help to increase efficient interaction between human and machine, where the machines are automated based on voice commands. The speech signal typifies the physical condition of a person, and hence it can be used for medical assessment, such as speech therapy [41], dysphonic voice classification [42], and disordered speech recognition [43]. Speech represents the psycholinguistics of a person and can forecast behavioral uniqueness, which can be utilized for emotion recognition in call centers [44] and human emotion recognition [45] for person profiling. Human emotion recognition plays a vital role in the forensic analysis of suspects and the monitoring of persons. Speech separation [46] is an important aspect of separating a speaker or word in gathered or mass speech. Nowadays, there is a gigantic escalation in social media use, and a large number of multimedia messages are being posted on the diverse community


media platforms. Neural network techniques have been employed for part-of-speech tagging of social media text [47] and for hate speech classification in social media audio content to enforce social guidelines [48]. Speech recognition can also be used in interactive games [49] to make games more user-friendly.

5 Conclusions

Thus, this paper presents an extensive survey of various artificial neural networks for various speech recognition applications. It is observed that ANN is simple to implement and capable of handling complex nonlinear problems effectively. Its parallel processing capability and unsupervised learning approach can handle complex data efficiently. The main challenges in the design of an ANN are the selection of the number of neurons and the number of layers. Setting a large number of layers may result in overfitting, because the network may not be able to process large amounts of information, and it leads to larger time consumption during training, whereas assigning very few parameters may result in underfitting of the network. Several optimization techniques have been used to optimize the hyperparameters of the ANN. This optimization adds extra computational overhead to the network. Again, the performance of the network is influenced by several factors such as the number of inputs and outputs, error function complexity, noise, network architecture, and training algorithm. Because of the black-box and empirical nature of ANN, its performance can be unpredictable. In the future, it is expected that the performance of ANN will be improved using efficient optimization techniques that can be trained with lower computational demands. There is also a need to construct hybrid neural networks that can attain better performance.

References

1. Rabiner, L., Ronald, S.: Theory and Applications of Digital Speech Processing. Prentice Hall Press (2010)
2. Benesty, J., Sondhi, M.M., Huang, Y. (eds.): Springer Handbook of Speech Processing. Springer, Berlin (2007)
3. Morgan, D.P., Scofield, C.L.: Neural networks and speech processing. In: Neural Networks and Speech Processing, pp. 329–348. Springer, Boston, MA (1991)
4. Juang, B.H., Chen, T.: The past, present, and future of speech processing. IEEE Signal Process. Mag. 15(3), 24–48 (1998)
5. Trentin, E., Gori, M.: A survey of hybrid ANN/HMM models for automatic speech recognition. Neurocomputing 37(1–4), 91–126 (2001)
6. Nakagawa, S.: A survey on automatic speech recognition. IEICE Trans. Inf. Syst. 85(3), 465–486 (2002)
7. Rudnicky, A.I., Hauptmann, A.G., Lee, K.-F.: Survey of current speech technology. Commun. ACM 37(3), 52–57 (1994)
8. Solera-Ureña, R., Padrell-Sendra, J., Martín-Iglesias, D., Gallardo-Antolín, A., Peláez-Moreno, C., Díaz-de-María, F.: SVMs for automatic speech recognition: a survey. In: Progress in Nonlinear Speech Processing, pp. 190–216. Springer, Berlin, Heidelberg (2007)


9. Blyth, B., Piper, H.: Speech recognition—a new dimension in survey research. J. Market Res. Soc. 36(3), 183–204 (1994)
10. Sonawane, A., Inamdar, M.U., Bhangale, K.B.: Sound based human emotion recognition using MFCC & multiple SVM. In: 2017 International Conference on Information, Communication, Instrumentation and Control (ICICIC), pp. 1–4. IEEE (2017)
11. Bhangale, K.B., Titare, P., Pawar, R., Bhavsar, S.: Synthetic speech spoofing detection using MFCC and radial basis function SVM. IOSR J. Eng. (IOSRJEN) 8(6), 55–62 (2018)
12. Jolad, B., Khanai, R.: An art of speech recognition: a review. In: 2019 2nd International Conference on Signal Processing and Communication (ICSPC), pp. 31–35. IEEE (2019)
13. Hassoun, M.H.: Fundamentals of Artificial Neural Networks. MIT Press (1995)
14. Jain, A.K., Mao, J., Mohiuddin, K.M.: Artificial neural networks: a tutorial. Computer 29(3), 31–44 (1996)
15. Mehrotra, K., Mohan, C.K., Ranka, S.: Elements of Artificial Neural Networks. MIT Press (1997)
16. Dawson, C.W., Wilby, R.L.: Hydrological modelling using artificial neural networks. Prog. Phys. Geogr. 25(1), 80–108 (2001)
17. Kamble, B.C.: Speech recognition using artificial neural network—a review. Int. J. Comput. Commun. Instrum. Eng. (IJCCIE) 3(1) (2016)
18. El-Shahat, A.: Introductory chapter: artificial neural networks. In: Advanced Applications for Artificial Neural Networks. IntechOpen (2018)
19. Abiodun, O.I., Jantan, A., Omolara, A.E., Dada, K.V., Mohamed, N.A., Arshad, H.: State-of-the-art in artificial neural network applications: a survey. Heliyon 4(11), 1–41 (2018)
20. El-Shahat, A.: Artificial Neural Network (ANN): Smart & Energy Systems Applications. Scholar Press Publishing, Germany (2014). ISBN 978-3-639-71114-1
21. Nidhyananthan, S.S., Shenbagalakshmi, V., Kumari, R.S.S.: Automatic speech recognition system for persons with speech disorders: artificial neural networks and hidden Markov model. Int. J. Appl. Eng. Res. 10(45), 31920–31924 (2015)
22. Ahammad, K., Rahman, M.M.: Connected Bangla speech recognition using artificial neural network. Int. J. Comput. Appl. 149(9), 38–41 (2016)
23. Shaw, A., Vardhan, R.K., Saxena, S.: Emotion recognition and classification in speech using artificial neural networks. Int. J. Comput. Appl. 145(8), 5–9 (2016)
24. Gupta, S., Bhurchandi, K.M., Keskar, A.G.: An efficient noise-robust automatic speech recognition system using artificial neural networks. In: 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, pp. 1873–1877 (2016)
25. Souissi, N., Cherif, A.: Speech recognition system based on short-term cepstral parameters, feature reduction method and artificial neural networks. In: 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Monastir, pp. 667–671 (2016)
26. Yusnita, M.A., Hafiz, A.M., Fadzilah, M.N., Zulhanip, A.Z., Idris, M.: Automatic gender recognition using linear prediction coefficients and artificial neural network on speech signal. In: 2017 7th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, pp. 372–377 (2017)
27. Bajpai, A., Varshney, U., Dubey, D.: Performance enhancement of automatic speech recognition system using Euclidean distance comparison and artificial neural network. In: 2018 3rd International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Bhimtal, pp. 1–5 (2018)
28. Brucal, S.G.E., Africa, A.D.M., Dadios, E.P.: Female voice recognition using artificial neural networks and MATLAB voicebox toolbox. J. Telecommun. Electron. Comput. Eng. (JTEC) 10(1–4), 133–138 (2018)
29. Shukla, S., Jain, M.: A novel system for effective speech recognition based on artificial neural network and opposition artificial bee colony algorithm. Int. J. Speech Technol. 22(4), 959–969 (2019)
30. Eljawad, L., et al.: Arabic voice recognition using fuzzy logic and neural network. Int. J. Appl. Eng. Res. 14(3), 651–662 (2019). ISSN 0973-4562


31. Xu, F., Pun, C.-M., Li, H., Zhang, Y., Song, Y., Gao, H.: Training feed-forward artificial neural networks with a modified artificial bee colony algorithm. Neurocomputing 416, 69–84 (2019)
32. Gowda, S.M., Rahul, D.K., Anand, A., Veena, S., Durdi, V.B.: Artificial neural network based automatic speech recognition engine for voice controlled micro air vehicles. In: 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), pp. 121–125. IEEE (2019)
33. Sarma, P., et al.: Automatic spoken digit recognition using artificial neural network. Int. J. Sci. Technol. Res. 8, 1400–1404 (2019)
34. Devi, K.J., Thongam, K.: Automatic speaker recognition from speech signal using principal component analysis and artificial neural network. J. Adv. Res. Dyn. Control Syst. 11(4), 2451–2464 (2019)
35. Akdeniz, F., Becerikli, Y.: Performance comparison of support vector machine, K-nearest neighbor, artificial neural networks, and recurrent neural networks in gender recognition from voice signals. In: 2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1–4 (2019)
36. Wan, Z., Dai, L.: Speaker recognition based on BP neural network. In: 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, pp. 225–229 (2020)
37. Batzorig, Z., Bukhtsooj, O., Chensky, A.G., Galbaatar, T.: Speech recognition in Mongolian language using a neural network with pre-processing technique. In: 2020 International Youth Conference on Radio Electronics, Electrical and Power Engineering (REEPE), Moscow, Russia, pp. 1–5 (2020)
38. Patel, P., Doss, A.S.A., PavanKalyan, L., Tarwadi, P.J.: Speech recognition using neural network for mobile robot navigation. Trends Mech. Biomed. Des. 665–676
39. Antunes, A., Pizzuto, G., Cangelosi, A.: Communication with speech and gestures: applications of recurrent neural networks to robot language learning. In: Proceedings of GLU 2017 International Workshop on Grounding Language Understanding, pp. 4–7 (2017)
40. Chan, W., Jaitly, N., Le, Q., Vinyals, O.: Listen, attend and spell: a neural network for large vocabulary conversational speech recognition. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4960–4964. IEEE (2016)
41. Schipor, O., Geman, O., Chiuchisan, I., Covasa, M.: From fuzzy expert system to artificial neural network: application to assisted speech therapy. Artif. Neural Netw. Models Appl. 165 (2016)
42. Teixeira, J.P., Fernandes, P.O., Alves, N.: Vocal acoustic analysis—classification of dysphonic voices with artificial neural networks. Procedia Comput. Sci. 121, 19–26 (2017)
43. Chadha, A.N., Zaveri, M.A., Sarvaiya, J.N.: Isolated word recognition using neural network for disordered speech. In: The Third IASTED International Conference on Telehealth and Assistive Technology (TAT 2016), pp. 5–10 (2016)
44. Petrushin, V.: Emotion in speech: recognition and application to call centers. In: Proceedings of Artificial Neural Networks in Engineering, vol. 710, p. 22 (1999)
45. Hussain, L., Shafi, I., Saeed, S., Abbas, A., Awan, I.A., Nadeem, S.A., Rahman, B.: A radial base neural network approach for emotion recognition in human speech. IJCSNS 17(8), 52 (2017)
46. Perotin, L., Serizel, R., Vincent, E., Guérin, A.: Multichannel speech separation with recurrent neural networks from high-order ambisonics recordings. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 36–40. IEEE (2018)
47. Meftah, S., Semmar, N.: A neural network model for part-of-speech tagging of social media texts. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (2018)
48. Wang, C.: Interpreting neural network hate speech classifiers. In: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pp. 86–92 (2018)
49. Ganzeboom, M.S., Yilmaz, E., Cucchiarini, C., Strik, H.: An ASR-based interactive game for speech therapy. In: Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT), pp. 63–68 (2016)

Cybersecurity in the Age of the Internet of Things: An Assessment of the Users' Privacy and Data Security

Srirang K. Jha and S. Sanjay Kumar

Abstract Cybersecurity has emerged as one of the most critical issues in recent times due to the fact that almost everyone in the world is touched by the influence of the Internet or the Internet of Things in some way or the other. The volume of personal data that people share over the World Wide Web is incredibly large, thanks to the mandatory fields they have to fill in while availing any services offered by governments, corporate houses, or charitable organizations. The cybersecurity threat looms large both from the custodians of users' personal data and from the hackers who are out there to take advantage of loopholes in systems and processes to steal critical information and ruin the fortunes and peace of mind of people through their nefarious activities. Based on a critical and in-depth review of 32 research studies on the theme, this article provides a comprehensive insight into users' privacy and data security risks and concomitant mitigation strategies. Keywords Internet · Internet of Things · Privacy · Cybercriminals · Cybersecurity

1 Introduction

The Internet of Things (IoT) has transformed the way communities act, react, and interact in a largely interconnected world today. There has been a major paradigm shift in the processes of accessing information, technologies, and services ever since the Internet was invented [1]. While the Internet connects people, IoT connects things and devices. The IoT captures various combinations of sensing, tagging, or identifying 'things' by means of RFID, bar codes, quick response codes, sensors, and actuators over the World Wide Web for detecting, monitoring, sensing, or stimulating other devices which are also within the range of Internet protocols [2]. In fact, IoT is

S. K. Jha (B) Apeejay School of Management, New Delhi, India S. S. Kumar University School of Management Studies, Guru Gobind Singh Indra Prastha University, New Delhi, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_5


a broad notion which includes sensor networks and machine-to-machine communication. The IoT facilitates the transmission of electronic information by way of material things, which entails a novel facet of the design and deployment of the Internet. The IoT drives lots of applications, which are likely to act in tandem with billions of intelligent systems, thereby creating different kinds of end-users who display unorthodox social conduct. Also, interconnected material objects in the IoT ecosystem are capable of interacting with one another without any human mediation [3]. IoT has aggressively made inroads into all spheres of communities, workplaces, hospitality, health care, education, logistics, energy distribution, and infrastructure development by seamlessly interconnecting objects and devices using appropriate Internet protocols. Smart cities, smart villages, fully automated shop floors, hospitals, toll plazas, etc. are the new realities which have touched the lives of millions of people across the globe. Indeed, IoT has made the lives of people very comfortable. However, critics have raised several questions related to privacy and data security and the possible misuse of the personal information of millions of people hooked to a number of devices, including mobile phones. There is a genuinely serious risk of misappropriation of personal information collated by companies through mobile apps, which invariably use tracking mechanisms. Consent and permission of the user of a mobile app are limited to that particular app. However, there is no way to ensure that the personal data is not misused by unscrupulous elements in the corporate world. Already, the nexus between search engine operators and online marketers displays its conmanship by throwing suggestive advertisements based on the most recent activities of the users within the shortest possible time. Also, there is overarching electronic surveillance through IoT in most public places, about which common people cannot do much. Smartphones are quick to recall the data entered by users during any transaction, which makes them susceptible to fraud by hackers from any corner of the world.

2 Review of Related Work

Within an IoT ecosystem, data security and the privacy of users have emerged as the most important issues all over the world. Data security in an IoT ecosystem depends on secrecy, readiness, truthfulness, ownership, value, authority, steadfastness, heftiness, protection, pliability, and staying power [4, 5]. It has been observed that IoT applications have put heterogeneity, validation, identity management, consent, access control, culpability, health issues, orchestration, smart grid technologies, etc. at risk [6–14]. The data security concerns are primarily linked to the ownership of data collated and processed by a number of IoT applications. It is difficult to figure out who owns the data captured through IoT devices, especially in multi-actor settings. Ambiguities about data ownership are a major concern for all stakeholders, which can lead to unauthorized access and data theft [2]. In some sectors such as health care, data ownership issues become all the more critical [15]. Susceptibilities of smart healthcare devices such as pacemakers and insulin monitors can wreak havoc in case


Fig. 1 Summary view of potential reasons for data breach. Source Li et al. [17]

of a data breach caused either by malicious software or by unscrupulous elements having access to critical information [16]. Figure 1 provides a summary view of latent reasons for data breach. For a long time, Internet security revolved around cryptography, protected communication, and privacy assertions. On the other hand, IoT security incorporates a broader assortment of tasks such as data secrecy, service accessibility, integrity, and confidentiality protection, among other things [18]. The intricacies of IoT make it extremely susceptible to assaults against availability, service integrity, safety, and confidentiality. It has been observed that sensing devices/technologies have very limited computation capacity and energy supply; thus, they have inbuilt constraints in terms of providing adequate security protection at the lower layer of IoT, i.e., the sensing layer, while at the middle layers, i.e., the network layer and service layer, the IoT relies on networking and communications which are vulnerable to eavesdropping, interception, and DoS attacks [17]. An autonomous topology without a unified controller is prone to attacks against validation in the network layer, such as node replication, node suppression, and node impersonation [17]. Besides, there are rising industrial security concerns linked to intelligent sensors, embedded programmable logic controllers, robotic systems, and hybrid system security controls [17]. It is imperative to go for system-level security analytics and a self-adaptive security policy framework in order to create reliable IoT ecosystems, as data aggregation and encryption at the application layer expedite the resolution of issues related to scalability and exposure across all layers [17]. Some of the security threats impeding the IoT ecosystem have been collated as under [17]:

• Unauthorized access: the sensitive data at the end-nodes is taken by the attacker;


• Availability: an end-node stops working as it might be either physically seized or logically compromised;
• Spoofing attack: the attacker successfully masquerades as an IoT end-device, end-node, or end-gateway by rigging data with the help of a malware node;
• Selfish threat: at times, IoT end-nodes stop working to save resources or bandwidth, causing the failure of the network;
• Malicious code: viruses, trojans, and junk messages may disable the software and cause systemic failure;
• Denial of services: deliberately constraining the availability of IoT end-node resources at the user end;
• Transmission threats: interrupting or blocking transmission, data manipulation, forgery, etc.;
• Routing attack: disrupting transmission through routers.

In the Indian context, the National Association of Software and Service Companies (Nasscom) has listed certain security challenges, as presented in Fig. 2. These security challenges may be encountered across the world. Moreover, breach of privacy and snooping without the consent or knowledge of individuals have emerged as the most critical ethical and socio-legal issues linked to IoT by far [20]. A number of scholars have raised an alarm regarding the privacy issues triggered by the multitude of IoT applications [21–24]. All the smart gadgets at offices and homes, including smart toys, are capable of transmitting personal information to the outside world. Thus, reassurance regarding the protection of privacy might increase the use of IoT applications in a big way [25]. There are a few technological advancements, such as virtual private networks and private information retrieval, to facilitate the protection of privacy [26]. However, the need of the hour is a comprehensive state policy and regulation to ensure the protection of the privacy of individuals who are using IoT applications without being aware of their vulnerabilities at the hands

Fig. 2 IoT security challenges. Source Nasscom [19]


of the custodians of personal information in the so-called corporate world. For example, in a pervasive experiment in Boston (USA), it was observed that out of 17,000 Android applications, 9000 had access to the cameras and mics of mobile phones, and 8000 of them shared screen recordings and app interactions with Facebook and AppSee [27]. It is a clear-cut case of misuse of consent, as the end-user would never give permission to share his/her personal information, including routine activities, while using the subscribed apps. Even Google is said to have read the emails of users, bypassing their consent [28].

3 Findings and Discussion

True, cybersecurity has emerged as a leading cause of concern among various stakeholders and users of IoT devices across the world. The maximum number of spam attacks is reported in India, while the country stands third in terms of malware attacks and overall threats to cybersecurity [29]. According to a survey, about 80 users of the Internet/IoT are afflicted by cyberattacks in India every minute [29]. The threats are posed not only by cybercriminals and hackers but also by the custodians of personal information, as reflected in various cases of data breaches by the employees/managers of leading companies like AppSee, Facebook, Google, etc. Both internal and external threats are caused by lacunae in IoT ecosystems. Some of the common loopholes in IoT ecosystems include weak permission arrangements, inadequate testing and updating of IoT devices/applications, undependable threat detection systems, and data integrity and interoperability issues, among others. These loopholes make IoT ecosystems vulnerable to internal as well as external threats. However, it is possible to plug the loopholes through greater involvement of all stakeholders. A good beginning toward improved cybersecurity can be made through an intensive focus on users' education. If users are fully aware of the potential threats and take precautionary measures, they can protect themselves from the evil eyes of cyberattackers, who are always looking for weak spots to steal personal information and cheat gullible users of IoT devices across the world. The best way to inculcate a habit of being alert regarding the protection of privacy and data security is to include such topics in the computer science curriculum in schools and colleges. Besides, non-profit organizations can come forward to educate poor citizens who do not have sufficient access to education and training. Unfortunately, there are only a few non-profit organizations, like the Digital Empowerment Foundation and the Cyber Society of India, working in this domain. There is an urgent need for a good number of dedicated NGOs or users' communities to come forward and contribute their mite toward raising awareness about cyberattacks and educating people about protecting themselves from impending threats to privacy and data security. Even the international communities can come together to form a coalition for the prevention of cyber-crimes and the augmentation of cybersecurity. Besides, the governments of the world need to come up with stringent regulatory norms and policies for curbing cyber-crimes and creating a formidable ecosystem for


Existing laws in India are not sufficient to ensure users' privacy and data security. The absence of a single comprehensive law to protect privacy and data security in India calls for the immediate attention of policy makers to change the scenario in the best interest of the users [30]. The existing Information Technology Act 2000 focuses primarily on issues revolving around e-commerce, banking, and financial transactions. It is high time that the government comes up with a comprehensive piece of legislation to ensure users' privacy and data security, in view of the pervasive influence of the Internet and IoT touching people from all segments of society. Apart from increased awareness about cybersecurity and a strong regulatory framework for ensuring data protection, technology can also play a significant role in preventing cyber-trespassing through intrusion detection systems [31, 32].

4 Conclusion

Although there have been a lot of public discourses on issues revolving around data security and privacy in recent times, concerted efforts at capturing the intensity of these concerns and analyzing them with strong theoretical or academic rigor are almost missing. Furthermore, there is only a feeble hue and cry about data security concerns and privacy issues in developing and under-developed countries. The IoT has already penetrated the private lives of people in a most subtle way, which is generally difficult to figure out. In the wake of Covid-19, a large number of people have been forced to work from home, not as a choice but as a compulsion. While IoT has made working from home significantly easier, it has also affected the domestic ecosystem of homes, especially in emerging markets where people have space constraints. Further, the Government of India is aggressively pursuing the 'Digital India' program under the leadership of Prime Minister Narendra Modi. Already, the plans to develop 100 smart cities in the country are at various stages of implementation. Some villages, such as Punsari in Gujarat, have also leveraged the IoT and become smart villages. It is certain that IoT will touch the lives of all the citizens in the country in some way or the other in the years to come. Hence, it is imperative that the government is prepared to address the ethical concerns and adverse social impacts through appropriate state policies and regulations. Data security and privacy laws need to be in sync with the realities of IoT so that the personal data of the users of IoT devices can be protected not only from hackers and cybercriminals but also from the custodians of such a fascinating pool of personal information.


References

1. Dutton, W.H.: Society on the Line. Oxford University Press, Oxford (1999)
2. Dutton, W.H.: Putting things to work: social and policy challenges for the Internet of Things. Info 16(3), 1–21 (2014)
3. Bi, Z., Xu, L., Wang, C.: Internet of Things for enterprise systems of modern manufacturing. IEEE Trans. Ind. Inform. 10(2), 1537–1546 (2014)
4. Parker, D.B.: Fighting Computer Crime: A New Framework for Protecting Information. Wiley, New York (1998)
5. Sterbenz, J.P.G., David, H., Egemen, K.C., Abdul, J., Justin, P.R., Marcus, S., Paul, S.: Resilience and survivability in communication networks: strategies, principles, and survey of disciplines. Comput. Netw. 54(8), 1245–1265 (2010)
6. Mahalle, P., Babar, S., Neeli, R.P., Prasad, R.: Identity management framework towards Internet of Things (IoT): roadmap and key challenges. In: Meghanathan, N., Boumerdassi, S., Chaki, N., Nagamalai, D. (eds.) Recent Trends in Network Security and Applications, pp. 430–439. Springer, Berlin (2010)
7. Vermesan, O., Peter, F., Patrick, G., Sergio, G., Harald, S., Alessandro, B., Ignacio, S.J.: Internet of Things strategic research roadmap. Internet of Things Glob. Technol. Soc. Trends 1, 9–52. https://internet-of-things-research.eu/pdf/IoT_Cluster_Strategic_Research_Agenda_2011.pdf (2011)
8. Abomhara, M., Køien, G.M.: Security and privacy in the Internet of Things: current status and open issues. In: International Conference on Privacy and Security in Mobile Systems (PRISMS), Aalborg, pp. 1–8 (2014, May 11–14)
9. Cerf, V.G.: Access control and the Internet of Things. IEEE Internet Comput. 19(5), 96–103 (2015)
10. Sicari, S., Alessandra, R., Luigi, A.G., Alberto, C.: Security, privacy and trust in Internet of Things: the road ahead. Comput. Netw. 76, 146–164 (2015)
11. Storm, D.: MEDJACK: hackers hijacking medical devices to create backdoors in hospital networks. Comput. World 8 (2015)
12. Maglaras, L.A., Al-Bayatti, A.H., Ying, H., Wagner, I., Janicke, H.: Social Internet of vehicles for smart cities. J. Sen. Actuator Netw. 5(1), 3 (2016)
13. Misra, S., Kapadi, M., Gudi, R.D., Srihari, R.: Production scheduling of an air separation plant. IFAC-PapersOnLine 40(7), 675–680 (2016)
14. Liang, G., Steven, R., Weller, J., Zhao, Fengji, L., Zhao Yang, D.: The 2015 Ukraine blackout: implications for false data injection attacks. IEEE Trans. Power Syst. 32(4), 3317–3318 (2017)
15. Kay, J.: Data sharing in genomics – is it lawful? In: Dutton, W.H., Jeffreys, P.W. (eds.) World Wide Research, pp. 245–246. Oxford University Press, Oxford (2010)
16. Mittelstadt, B.: Ethics of the Health-Related Internet of Things: A Narrative Review. Springer, London (2017)
17. Li, S., Tryfonas, T., Li, H.: The Internet of Things: a security point of view. Internet Res. 26(2), 337–359 (2016)
18. Keoh, S., Kumar, S., Tschofenig, H.: Securing the Internet of Things: a standardization perspective. IEEE Internet of Things J. 1(3), 265–275 (2014)
19. Nasscom: IoT Technology in India. https://www.community.nasscom.in/communities/emerging-tech/iot-ai/iot-technology-in-india.html (2020)
20. Spiekermann, S.: The self-regulation approach chosen for the development of the PIA framework for RFID. Presentation at the BCS/OII Seminar on 'The Societal Impact of the Internet of Things', British Computer Society (2013, February 14)
21. Roman, R., Najera, P., Lopez, J.: Securing the Internet of Things (IoT). IEEE Comput. 44(9), 51–58 (2011)
22. Gessner, D., Alexis, O., Alexander Salinas, S., Alexandru, S.: Trustworthy infrastructure services for a secure and privacy-respecting Internet of Things. In: IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Liverpool, pp. 998–1003 (2012)


23. Ziegeldorf, J.H., Oscar, G.M., Klaus, W.: Privacy in the Internet of Things: threats and challenges. Secur. Commun. Netw. 7(12), 2728–2742 (2014)
24. Chatterjee, S., Kar, A.K.: Regulation and governance of the Internet of Things in India. Digital Policy Regul. Governance 20(5), 399–412 (2018)
25. Yan, Z., Peng, Z., Athanasios, V.V.: A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 42, 120–134 (2014)
26. Weber, R.H.: Internet of Things: new security and privacy challenges. Comput. Law Secur. Rev. 26(1), 23–30 (2010)
27. Hill, K.: These academics spent the last year testing whether your phone is secretly listening to you. Gizmodo. https://gizmodo.com/these-academics-spent-the-last-year-testing-whether-you-1826961188 (2018)
28. MacMillan, D.: Tech's 'dirty secret': the app developers sifting through your Gmail. https://www.wsj.com/articles/techs-dirty-secret-the-app-developers-sifting-through-your-gmail-1530544442 (2018)
29. The Hindu: A greater role for NGOs in cyber crime awareness. The Hindu. https://www.thehindu.com/news/cities/chennai/chen-events/a-greater-role-for-ngos-in-cyber-crime-awareness/article5960579.ece (2014)
30. Dalmia, V.P.: Data protection laws in India: everything you must know. https://www.mondaq.com/india/data-protection/655034/data-protection-laws-in-india--everything-you-must-know (2017)
31. Smys, S., Basar, A., Wang, H.: Hybrid intrusion detection system for Internet of Things (IoT). J. ISMAC 2(4), 190–199 (2020)
32. Shakya, S.: Process mining error detection for securing the IoT system. J. ISMAC 2(4), 147–153 (2020)

Application of Artificial Intelligence in New Product Development: Innovative Cases of Crowdsourcing

Srirang K. Jha and Sanchita Bansal

Abstract Crowdsourcing is a community-based approach to solving any problem faced by societies, business enterprises, non-profit organizations and governments by means of collective wisdom and an indomitable desire to find solutions of one's own volition. The concept of crowdsourcing is about three hundred years old. However, progress in artificial intelligence has made the practice of crowdsourcing all the more popular in recent times. The concept, based on open innovation, has touched all aspects of human life and all areas of management functions. Likewise, crowdsourcing has revolutionized new product development, as evident from various success stories. A comprehensive review of 32 research papers on the theme from diverse perspectives has been conducted to explore the instrumentality as well as the efficacy of crowdsourcing as a tool for new product development in the emerging scenario, while leveraging the power of artificial intelligence.

Keywords Artificial intelligence · Crowdsourcing · Open innovation · New product development

1 Introduction

Generally speaking, crowdsourcing has been associated with crowdfunding for new projects or some pressing social cause. However, the concept is quite broad-based and can impact several other dimensions of human activities geared towards improving the quality of life of communities across the world. Indeed, crowdsourcing entails subcontracting things to the masses [1]. The concept of crowdsourcing has been around for about the last three hundred years [2]. However, recent advances in artificial intelligence have made the notion of crowdsourcing all the more popular as a community-based instrument for resolving issues, testing new ideas and providing support in an effective manner [3].


Communities have tremendous power to add value to the ideas and offerings of both business firms and non-profit organizations. Indeed, crowdsourcing enables entrepreneurs of all hues to co-create ideas, offerings, services, products, etc. [4–7]. Greater use of the Internet of Things (IoT) to augment the efficacy of crowdsourcing is likely to capture infinite passive data based on the experiences and responses of the stakeholders and transform the quality of output, especially in the context of new product development. A major shift from active data input driven by the user communities to passive data generated by them and captured and processed by IoT sensors and big data technologies can open up a sea of opportunities for the development of innovative products and services imbued with tremendous value for the communities at large [8]. The concept of the crowd is supreme in crowdsourcing. Generally, crowd has such connotations as troop, throng, mob, herd, horde, gang, mass, etc. Such formations are different from communities, which are characterized by identical culture, thought processes and response structures. Hence, the experiences and responses of the crowd are infused with diversity and usually yield fresh and unorthodox insights into the given problems. The crowd as such is capable of collating diverse information, intelligence and wisdom into a unified set of knowledge that can really be used for the benefit of all the stakeholders, including those who contribute towards the formation of new knowledge in any manner. Hence, entrepreneurs look to the crowd not only for funds for their new ventures but also for incisive feedback on new products/services/offerings as well as solutions to certain operational or management issues that arise while launching or running any project/enterprise. Crowdsourcing facilitates the smooth transfer of discrete and scattered knowledge to entrepreneurs in the most cost-effective manner by means of technologies powered by artificial intelligence.

2 Review of Related Work

Although crowdsourcing has been doing the rounds for long, we do not have a universally accepted definition of it as yet [9]. Crowdsourcing can be explained in terms of leveraging information technology to farm out any problem that an enterprise finds difficult to resolve internally to a tactically distinct populace of human as well as non-human players through an across-the-board invite [10]. Also, crowdsourcing provides unique opportunities to both internal and external stakeholders to participate in finding a solution to some challenging problem faced by the enterprise, non-profit organizations or the communities in general [11]. Scholars have observed that crowdsourcing was facilitated via word of mouth or contests prior to advances in artificial intelligence. The fast-emerging social web, i.e. Web 2.0, Web 3.0, social media and IoT, has transformed the way crowdsourcing is practised these days. A number of corporate houses such as Dell, Ford, IBM, PepsiCo and Unilever, among others, have contributed significantly towards the development of technology-driven crowdsourcing applications [12].


Table 1 Dimensions of the next-generation crowdsourcing

Situated crowdsourcing:
• Computational basics in routine settings, e.g. civic displays, embedded tablets [13]
• Involves dynamic, specific, human input with the task subject also at the individual level [12]

Spatial crowdsourcing:
• Augments data collection propensity due to its ability to capture both time and space dimensions
• Higher degree of authenticity of the collated data

Crowdsensing:
• IoT-based data collection sans any human involvement
• Privacy of individuals highly compromised during data collection

Wearables crowdsourcing:
• IoT-based data collection enabled by the individuals using the devices
• Privacy of individuals highly compromised during data collection

Source: Compiled by the authors

The contemporary notion of crowdsourcing is primarily based on the proposition that a particular task which was earlier carried out by in-house employees and managers can be outsourced to external actors who might have better capabilities and insights to deliver the desired results. In recent times, next-generation crowdsourcing has evolved in terms of situated crowdsourcing, spatial crowdsourcing, crowdsensing and crowdsourcing through wearable devices [12]. Table 1 provides an overview of the next-generation crowdsourcing dimensions and their efficacy. Crowdsourcing-based new product development encompasses potential users as ideators in the ideation phase and as co-creators in the product development phase, by way of contributing their mite to various subtasks opted for by them voluntarily [14]. Crowdsourcing presents cost-effective and quicker prospects for obtaining market intelligence [15]. Advances in crowdsourcing have opened vistas for its application in the development of innovative products and services [16]. A number of companies have established enduring crowdsourcing groups which continually accumulate ideas for new products and services from a distributed "crowd" of users [17]. For example, Dell IdeaStorm, InnoCentive, the Goldcorp Challenge, iStockphoto, Quirky, Mechanical Turk and Unilever Foundry IDEAS have appeared as powerful crowdsourcing applications for finding unconventional ideas. However, the only limiting factor vis-à-vis these applications is the expressed requirement of active data inputs from the respondents [12]. Indeed, open idea calls have become quite popular among companies, which obtain an inexhaustible number of propositions from a wide array of respondents with diverse mind-sets and thought processes, ensuring the richness of the shared suggestions [18]. Crowdsourcing has also debunked the myth that idea generation for new products and services is the exclusive prerogative of domain experts such as engineers, managers and designers, among others [19]. Design crowdsourcing as part of new product development endeavours has also gained traction all over the world in recent times. Enterprises tend to crowdsource designs primarily to understand perceived usability, reliability and technical intricacies from the users' perspective [20]. Design crowdsourcing has a positive correlation with unit sales, indicating its efficacy, especially if the quality of the preliminary product concept is commendable [20]. A classic case of design crowdsourcing is an open call for the design of a new model of car, i.e. the "Fiat Mio CC", made by FIAT Brazil, which attracted 14,000 participants and likely customers from 140 countries [21].


Threadless, an online clothing enterprise, has leveraged design crowdsourcing in a most interesting manner by producing apparel based on users' feedback and suggestions. The top ten crowdsourced designs are chosen each week by Threadless on the basis of active users' comments, while the creators of all the chosen designs are rewarded profusely with gift vouchers and goodies [22]. Threadless is a fascinating success story with zero unsold products, $17,000,000 in annual sales and 35% profit margins, all this just by leveraging the potential of design crowdsourcing [22]. People participate in crowdsourcing activities for financial incentives, recognition, a passion to solve a complicated problem and, above all, to make any new product or offering best suited to their needs and expectations. Mechanical Turk, a crowdsourcing platform promoted by Amazon, allows the participants to earn financial rewards for completing a particular project, better known as a human intelligence task [23]. InnoCentive is another crowdsourcing platform that provides financial rewards to the participants in return for solving problems posed by various clients [24]. Several other scholars have also indicated financial incentives as the trigger that motivates users to participate in open calls under crowdsourcing projects/opportunities [23, 25, 26]. Users' communities such as Yahoo! Answers attract a number of participants who contribute towards solving problems just for recognition [27]. Various models have been suggested for augmenting the efficacy of crowdsourcing in new product development. A very simple way of crowdsourcing for new product development is a three-stage model, i.e. making a choice regarding a crowdsourcing initiative, taking a call on designs and incentivizing the contributors from the crowd [28]. Further, one of the most comprehensive and widely accepted models is the stage-gate filtering process (Fig. 1), which is distributed across six stages, viz. original idea, mapping scenarios, concept ideas, concept design, modelling and launch, with an express provision of feedback and votes at each stage to rule out unviable options [29].

Fig. 1 Stage-gate process for the application of crowdsourcing in product development. Source Saldanha et al. [29]


3 Findings and Discussion

Crowdsourcing enables companies to obtain a great number of ideas from a diverse set of people across the board. However, the effectiveness of each of the suggested ideas and their implementation may not be tenable. While the people making suggestions in response to open idea calls are fully convinced about the usefulness of their propositions, they usually undervalue the cost of implementing the crowdsourced proposals [30]. The real challenge is to select the most pragmatic idea for implementation out of the innumerable suggestions accumulated in response to open idea calls. It has been observed that ideas generated through crowdsourcing are generally better than those developed in-house through brainstorming or contests in terms of innovation and user advantage, but score less in terms of viability [19]. Hence, crowdsourced ideas need to be used judiciously to complement the ideas for new products and services developed in-house [19]. Crowdsourced ideas are likely to offer greater insights into product line extensions and piecemeal refinements than sweeping innovations [31]. Hence, crowdsourced ideas are generally viewed as additional inputs, while in-house propositions are taken rather seriously during the initial phase of new product development [31]. However, at a stage close to launching the new products/services/offerings, crowdsourced ideas can be far more sensible [31]. In fact, crowdsourcing engenders a win–win association, creating value for both the enterprises and the users [32]. Further, it is wrong to assume that crowdsourcing is an infallible concept. The flip side of crowdsourcing is a detrimental effect on the privacy of individuals participating in the process. Another pitfall of crowdsourcing is a strong belief that shared insights are always better than the natural acumen of the entrepreneurs. On several occasions, mobs have displayed odd responses which could be unfavourable or damaging. Hence, it is imperative that entrepreneurs apply their minds judiciously while adopting the ideas/feedback generated through crowdsourcing. On several occasions, crowdsourcing initiatives fail due to a lack of clarity regarding the problems, implementation constraints, abysmal monitoring at the level of the enterprises and the overly wide scope of open calls mindlessly running across different phases of new product development.

4 Conclusion

The application of crowdsourcing in new product development is critical, as about 90% of new launches fail due to a mismatch between users' expectations and the offerings made by the enterprises. The best way to ward off imminent product failures is to involve the users in the process of new product development by opening up opportunities at the stage of conceptualization, when they can articulate their needs and the constraints of using the existing products. Further, seeking ideas from the users while designing the product can actually enable the enterprises to come closest to their expectations. Users' feedback at the product testing stage can be quite valuable, so that all the issues related to product usage can be ironed out well before a successful launch.


There are several success stories which indicate the efficacy of crowdsourcing as a means of effective new product development. In this article, 32 major research papers on the theme have been reviewed comprehensively to establish the rationale and efficacy of crowdsourcing driven by artificial intelligence as a tool for new product development. A critique of the emerging paradigms of crowdsourcing for new product development has been presented. A holistic review of some of the success stories based on new product development by means of crowdsourcing indicates that this concept has matured enough to be adopted as a best practice for the benefit of all the stakeholders.

References

1. Rouse, A.: A preliminary taxonomy of crowdsourcing. In: ACIS 2010 Proceedings, vol. 76. https://aisel.aisnet.org/acis2010/76/ (2010)
2. Spencer, R.W.: Open innovation in the eighteenth century: the longitude problem. Res. Technol. Manag. 55(4), 39–44 (2012)
3. Howe, J.: The rise of crowdsourcing. Wired 14(6), 1–5 (2006)
4. Morphy, E.: The dark side of crowdsourcing. LinuxInsider. https://chacocanyon.com/pdfs/LinuxInsider%202009-04-24 (2009)
5. Harris, C.G.: Dirty deeds done dirt cheap: a darker side to crowdsourcing. In: Proceedings of the 2011 IEEE International Conference on Privacy, Security, Risk and Trust and IEEE International Conference on Social Computing (PASSAT/SocialCom), pp. 1314–1317 (2011)
6. Simula, H.: The rise and fall of crowdsourcing? In: Proceedings of the Annual Hawaii International Conference on System Sciences, pp. 2783–2791 (2013)
7. Heidenreich, S., Wittkowski, K., Handrich, M., Falk, T.: The dark side of customer co-creation: exploring the consequences of failed co-created services. J. Acad. Mark. Sci. 43(3), 279–296 (2015)
8. Lohr, S.: The age of big data. New York Times, 11. https://www.nytimes.com/2012/02/12/sunday-review/big-datas-impac (2012)
9. Estelles-Arolas, E., Gonzalez-Ladron-de-Guevara, F.: Towards an integrated crowdsourcing definition. J. Inf. Sci. 38(2), 189–200 (2012)
10. Kietzmann, J.H.: Crowdsourcing: a revised definition and introduction to new research. Bus. Horiz. 60(2), 151–153 (2017). https://doi.org/10.1016/j.bushor.2016.10.001
11. Garrigos-Simon, F.J., Narangajavana, Y., Galdón-Salvador, J.L.: Crowdsourcing as a competitive advantage for new business models. In: Gil-Pechuán, I., et al. (eds.) Strategies in E-Business, pp. 29–37. Springer, New York (2014)
12. Brown, T.E.: Sensor-based entrepreneurship: a framework for developing new products and services. Bus. Horiz. 60, 819–830 (2017)
13. Goncalves, J.: Situated Crowdsourcing: Feasibility, Performance, and Behaviours. University of Oulu, Oulu, Finland. https://jultika.oulu.fi/files/isbn9789526208503.pdf (2015)
14. Zhu, J.J., Li, S.Y., Andrews, M.: Ideator expertise and cocreator inputs in crowdsourcing-based new product development. J. Product Innov. Manag. 34(5), 598–616 (2017)
15. Gatautis, R., Vitkauskaite, E.: Crowdsourcing application in marketing activities. Procedia Soc. Behav. Sci. 110, 1243–1250 (2014). https://doi.org/10.1016/j.sbspro.2013.12.971
16. Whitla, P.: Crowdsourcing and its application in marketing activities. Contemp. Manag. Res. 5(1), 15–28 (2009)
17. Bayus, B.L.: Crowdsourcing new product ideas over time: an analysis of the Dell IdeaStorm community. Manag. Sci. 59(1), 226–244 (2013)


18. Schemmann, B., Herrmann, A.M., Chappin, M.M., Heimeriks, G.J.: Crowdsourcing ideas: involving ordinary users in the ideation phase of new product development. Res. Policy 45(6), 1145–1154 (2016)
19. Poetz, M.K., Schreier, M.: The value of crowdsourcing: can users really compete with professionals in generating new product ideas? J. Product Innov. Manag. 29(2), 245–256 (2012)
20. Allen, B.J., Chandrasekaran, D., Basuroy, S.: Design crowdsourcing: the impact on new product performance of sourcing design solutions from the "crowd". J. Mark. 82(2), 106–123 (2018)
21. Brondoni, S.M.: Intangibles, global networks and corporate social responsibility. Symphonya Emerg. Issues Manag. 2, 6–24 (2010)
22. Kavaliova, M., Virjee, F., Maehle, N., Kleppe, I.A.: Crowdsourcing innovation and product development: gamification as a motivational driver. Cogent Bus. Manag. 3(1) (2016)
23. Horton, J.J., Chilton, L.B.: The labor economics of paid crowdsourcing. In: Proceedings of the 11th ACM Conference on Electronic Commerce, Harvard University, MA, pp. 209–218 (2010)
24. Tran, A., ul Hasan, S., Park, J.: Crowd participation patterns in the phases of a product development process that utilizes crowdsourcing. Ind. Eng. Manag. Syst. 11(3), 266–275 (2012)
25. Ebner, W., Leimeister, J.M., Krcmar, H.: Community engineering for innovations: the ideas competition as a method to nurture a virtual community for innovations. R&D Manag. 39(4), 342–356 (2009)
26. Albors, J., Ramos, J.C., Hervas, J.L.: New learning network paradigms: communities of objectives, crowdsourcing, wikis and open source. Int. J. Inf. Manag. 28(3), 194–202 (2008)
27. Wightman, D.: Crowdsourcing human-based computation. In: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, Reykjavik, Iceland, pp. 551–560 (2010)
28. Panchal, J.H.: Using crowds in engineering design: towards a holistic framework. In: Proceedings of ICED 2015/20th International Conference on Engineering Design, Milan, Italy, July 27–30, The Design Society, pp. 27–30 (2015)
29. Saldanha, F.P., Cohendet, P., Pozzebon, M.: Challenging the stage-gate model in crowdsourcing: the case of Fiat Mio in Brazil. Technol. Innov. Manag. Rev. 4(9), 28–35 (2014)
30. Huang, Y., Singh, P.V., Srinivasan, K.: Crowdsourcing new product ideas under consumer learning. Manag. Sci. 60(9), 2138–2159 (2014)
31. Zahay, D., Hajli, N., Sihi, D.: Managerial perspectives on crowdsourcing in the new product development process. Ind. Mark. Manag. 71, 41–53 (2018)
32. Djelassi, S., Decoopman, I.: Customers' participation in product development through crowdsourcing: issues and implications. Ind. Mark. Manag. 42(5), 683–692 (2013)

The Effect of the Topology Adaptation on Search Performance in Overlay Network

Muntasir Al-Asfoor and Mohammed Hamzah Abed

Abstract In this paper, the proposed research work studies the effect of overlay topology adaptation on search performance in a P2P overlay network. Two different search methods, guided search and blind search, have been studied and investigated with the aim of improving network performance. A bidirectional graph has been used to represent the proposed P2P overlay network as a simulation model, where vertices represent the network nodes and edges represent virtual connections between the nodes. In this work, two different search methods have been studied under these circumstances, namely DFS and BFS. Further, the algorithms were examined under a scale-free network topology and under topology adaptation. Simulation results have shown that search performance is better under the adapted P2P overlay network than under the random scale-free topology, in terms of both the results and the quality of search performance. The tools used in this model were Java and Matlab.

Keywords Semantic search · Guided search · Self-adaptation · P2P network · Overlay network

1 Introduction

Nowadays, the computer networking domain has experienced many challenges; one of them is searching across a network under the restricted condition of reaching the destination node within a deadline. With small networks, finding the destination is rather easy and can be done subjectively. However, when a network grows large, finding the best solution subjectively is a hard job. Traversing a large network and looking for a complete solution is a challenging task in a restricted environment (time and cost). Accordingly, researchers in academia and industry have employed search algorithms, which are usually applicable to graphs, in network applications.


By modeling the network as a graph, heuristic search algorithms can be applied to find the best possible solutions (probably the shortest path). Breadth-first search [1] is a well-known algorithm for finding the shortest path between a source node and all other nodes in the network. However, a conventional breadth-first search algorithm does not support finding the shortest path between only two nodes, and finding all paths in a single run is rather time- and space-consuming and not useful in a restricted, conditional environment like open networks. Another approach to dealing with the search problem in a large network is random walks [2], where the path between two nodes can be found by a repetitive random paradigm. Using this technique, at each node the next step is taken by selecting one connection (neighbor) of this node randomly. The drawback of random walks is basically the repetition that takes place by visiting the same node more than once. Furthermore, using this technique, the search would take an inordinate amount of time and, subsequently, space. However, in some cases a path would be found faster than with heuristic search. Another algorithm for dealing with search across an open network is Dijkstra's algorithm [3]. This algorithm uses the same methodology as breadth-first search; however, it differs from BFS by taking into account the length of the connection (edge) between the nodes. It keeps information about each path which has been found before for each node and updates it repeatedly. Similar to BFS, Dijkstra's algorithm is time- and space-consuming, and it has a relatively high complexity (depending on how it is implemented) [4, 5]. As shown above, all the algorithms which have been described do not consider learning or keep a history of previous, less efficient selections. The aim of this paper, as will be described in the next sections, is to improve the performance of these algorithms. Employing learning and keeping track of previous decisions would make it possible to take better decisions when moving from one node to another during the search process. Furthermore, rearranging the network's topology by adapting the network connections according to the information which has been collected before would improve the search performance in terms of time and accuracy of results. The rest of this paper is organized as follows: Sect. 2 gives general background on overlay networks, Sect. 3 discusses related work, and Sect. 4 presents the system model and algorithms. Moreover, Sect. 5 deals with the simulation results and analysis. Finally, Sect. 6 contains the conclusions from the applied solutions and suggestions for future improvements.

2 Overlay Network

An overlay network is an application network created virtually on top of the physical network or topology, in which each node initially selects its neighbors and builds overlay links so as to minimize its own cost as well as the overall network cost [6]. Overlay networks do not suffer from the rigidity of the physical network, since the logical topology is more flexible and more amenable to adaptive reconfiguration [7]. The physical topology, i.e., the underlay network, can be represented as a graph (G); the proposed model assumes that the numbers of nodes (N) in the virtual and physical networks are equal.


Fig. 1 Example of P2P overlay network [9]

The peers communicate with each other and establish self-management and self-organization over the physical network. A P2P network has a high degree of decentralization compared with other methods [8]. For example, distributed systems such as P2P networks are overlay networks because the P2P nodes run on top of the Internet [9]; indeed, the Internet was originally built as an overlay network [10]. In addition, the overlay network forms the foundation of virtual network infrastructure. Figure 1 shows the structure of an overlay network over the Internet.

3 Related Work

Several methods for self-adaptive overlay networks have been proposed by researchers; in this section, some approaches are discussed. M. Youssef and the research group in [6] proposed a strategy based on adaptation of the overlay network topology using a traffic estimation matrix, relying on node behavior to create overlay links between the closest and near-optimal overlays of each node by means of a heuristic search algorithm. The topology adaptation changes some of the traffic because overlay links are added to and deleted from the topology. Likewise, in [7], the researchers proposed a traffic prediction model for self-adaptive routing on an overlay network topology, in which a neural network can be used to predict the traffic links that will be used in the future. In [8], peer-to-peer overlay networking was discussed on the basis of full topology knowledge of the network, including routing information and the routing manner; the overlay network can provide neighborhood locations and services through directed search, so the model can create a direct link to the closest matching node in the overlay environment.


In another work [9], a distributed algorithm was proposed to solve the problem of optimal content-based routing (OCBR); it was based on an assumption about the traffic, and therefore the reconfigured overlay network cannot adapt well to traffic.

4 System Model of Work

To evaluate the fidelity of the suggested solutions, the network has been designed as a graph in which vertices represent nodes and edges represent connections [11]. Each node can be a search initiator or the target of a search message initiated by another node. Typically, resources in a distributed system would be described using standard description languages or frameworks like WSDL (Web Service Description Language) [12], RDF (Resource Description Framework) [12] and OWL (Web Ontology Language) [13–15]. For simulation purposes, the resources owned by each node have been described as a set of numbers which are used as the key factor in the matchmaking and adaptation process. Each node is assigned a vector of three values of the form $r_i = (r_{0i}, r_{1i}, r_{2i})$, where each resource element is assigned a value between 0 and 4 using a uniform random distribution function. Furthermore, the matchmaking between two descriptions, for instance a search request message and a receiver node's description, is calculated using the Manhattan distance measure as follows:

$$D(i, j) = \sum_{k=0}^{2} \left| r_{ki} - r_{kj} \right| \tag{1}$$

where $D(i, j)$ is the semantic distance between description $i$ and description $j$, $r_i$ is the resource description of $i$, and $r_j$ is the resource description of $j$. Accordingly, the similarity is computed as the opposite of the distance using Eq. (2) as follows:

$$\mathrm{sim}(i, j) = \frac{12 - D(i, j)}{12} \times 100\% \tag{2}$$

For example, two descriptions of exactly the same resources produce a distance $D = 0$, which in turn gives a similarity $\mathrm{sim} = \frac{12 - 0}{12} \times 100 = 100\%$, while two descriptions with the maximum distance $D = 12$ produce a similarity $\mathrm{sim} = \frac{12 - 12}{12} \times 100 = 0\%$.


5 Simulation Result and Analysis

The experiment has been designed to study the system performance in terms of search performance and quality of search results. Search performance has been measured by calculating the number of hops needed to satisfy the search criteria, while the quality of the search results is represented by the semantic similarity between the request and the node's resource description. The comparisons have been performed between two search algorithms, namely the standard breadth-first search algorithm and the proposed guided search algorithm. Moreover, a performance comparison has been conducted between the two mentioned algorithms with topology adaptation and without topology adaptation.

5.1 Simulation Setting

This research work has used a bi-directional graph to formulate the proposed model of the network; each node in the graph represents a station in the simulated network, the vertices represent the nodes, and the edges represent the virtual connections in the application layer of the overlay network. Java is used to implement the suggested solutions and algorithms. Moreover, the simulation results have been analyzed and represented using a Matlab toolkit. Table 1 shows the simulation setting for the first experiment, where the network is modeled as a graph of 200 nodes with each node connected to a maximum of 15 other nodes. Furthermore, a total of 50 requests have been created by each node over the simulation time, and the maximum number of hops for each request to reach the destination before failing is 10.

Table 1 Simulation setting

Parameter name: Value
Number of nodes: 300
Max connections: 15
Max requests: 50
Number of hops: 10

To study the feasibility of the suggested solutions, a set of experiments has been conducted to evaluate the system performance under different conditions. First, a set of experiments has been engineered to study the search performance with a static network setting (i.e. the network is initially created randomly, with each node connected to at least one other node). Two different search algorithms are implemented and tested on the static network, namely breadth-first search and guided search. The simulation results have shown that guided search performs better than the breadth-first search algorithm under some conditions. With a smaller network and partially connected nodes, no noticeable differences in performance were observed. However, with bigger networks and fully connected nodes (each node connected to the maximum number of nodes), guided search performance improves gradually as the simulation runs. Analysis has shown that nodes learn from their previous search requests and accordingly forward future requests to the best-known connections, and so on.


Moreover, the system has been evaluated with network adaptation. Using this technique, the node forwards the message to the best contact as in guided search, but additionally updates its list of connections by removing the worst contact (the connection whose resource description is farthest from the node's own description); a code sketch of this procedure is given below. Updating the node's connections locally updates the whole network globally. The simulation results have shown a gradual increase in system performance in terms of search success ratio and number of hops to success. The three experiments have been named as follows:

• Configuration 1: standard search algorithm (config 1);
• Configuration 2: guided search (config 2);
• Configuration 3: guided search with self-adaptation (config 3).

As shown in Figs. 2, 3 and 4, under a predefined allowable matching error between 0 (no error) and 12 (completely different description), guided search with self-adaptation has outperformed the other two settings. Allowable error values in the figures are cut off at 6.0 because the system showed no changes beyond that point. The proposed model suggests guided search based on adaptation of the network topology according to the nodes' behavior and their own values. Figure 5 shows the initial topology of a 50-node network and the topology after adaptation. The figure shows that each node ends up connected to nodes that are similar or share some features, i.e. the same interests. Each node has its own features and attributes, and based on these values new connections are established virtually to improve the search performance in the overlay network.
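The sketch below illustrates the guided search with self-adaptation described above, again in Python rather than the Java used in the paper. The greedy forwarding rule, the exact adaptation step (dropping the semantically farthest contact when a node is at its connection limit), and all names and sizes are paraphrased assumptions for illustration, not the authors' exact algorithm.

```python
# Minimal sketch of guided search with topology self-adaptation; all names,
# sizes, and the precise adaptation rule are illustrative assumptions.
import random

def dist(a, b):
    # Manhattan distance of Eq. (1)
    return sum(abs(x - y) for x, y in zip(a, b))

random.seed(7)
N, MAX_CONN, MAX_HOPS = 50, 15, 10
desc = {n: [random.randint(0, 4) for _ in range(3)] for n in range(N)}
nbrs = {n: set() for n in range(N)}
for n in range(N):                       # random initial topology, >=1 link each
    m = random.choice([x for x in range(N) if x != n])
    nbrs[n].add(m)
    nbrs[m].add(n)

def guided_search(src, request, adapt=True):
    """Greedily forward the request to the semantically closest neighbour."""
    node = src
    for hops in range(MAX_HOPS):
        if dist(desc[node], request) == 0:
            return node, hops                        # exact match found
        best = min(nbrs[node], key=lambda m: dist(desc[m], request))
        if adapt and len(nbrs[node]) >= MAX_CONN:
            # drop the worst contact: the neighbour whose description is
            # farthest from this node's own description
            worst = max(nbrs[node], key=lambda m: dist(desc[m], desc[node]))
            if len(nbrs[worst]) > 1:                 # keep the graph connected
                nbrs[node].discard(worst)
                nbrs[worst].discard(node)
        node = best
    return None, MAX_HOPS                            # search failed

print(guided_search(0, [2, 3, 1]))
```

Because each visited node applies the same local rule, repeated searches gradually rewire the topology, which is the "local change, global effect" behaviour discussed in the conclusion.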

Fig. 2 A comparison between the three configurations in terms of system mean average error


Fig. 3 A comparison between the three configurations in terms of average success hop count

Fig. 4 A comparison between the three configurations in terms of failure (success) ratio

6 Conclusion and Future Work

This paper has studied the process of searching through a dynamic, adaptable network. As the network size increases, searching the whole network becomes a time-consuming activity. To address this problem, a standard breadth-first search algorithm has been adapted to support guided search by using the resources owned by each node.


Fig. 5 a Initial 50-node network topology. b Adaptation of the same network

Furthermore, the suggested solutions propose the use of the guided search heuristics to self-adapt the network itself, which creates the phenomenon of "local change, global effect". Simulation results have shown that guided search and adaptation improve the search performance noticeably. Suggested future improvements would be to apply the same solutions to more search algorithms and to test them under different network types.


References

1. Sun, Y., Yi, X., Liu, H.: The communication analysis of implementation in breadth first search algorithm. In: 2016 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData). IEEE (2016)
2. Bisnik, N., Abouzeid, A.: Modeling and analysis of random walk search in P2P networks. In: Proceedings of the Second International Workshop on Hot Topics in Peer-to-Peer Computing (HOT-P2P'05) (2005, July)
3. Lu, J., Dong, C.: Research of shortest path algorithm based on the data structure. In: 2012 IEEE International Conference on Computer Science and Automation Engineering. IEEE (2012)
4. Xia, Y., Prasanna, V.K.: Topologically adaptive parallel breadth-first search on multicore processors. In: Proceedings of the 21st International Conference on Parallel and Distributed Computing Systems (PDCS'09) (2009)
5. Tsujimoto, T., Shindo, T., Kimura, T., Jin'no, K.: A relationship between network topology and search performance of PSO. In: IEEE World Congress on Computational Intelligence (2012)
6. Youssef, M., Natarajan, B., Scoglio, C.: Adapting the overlay network topology based on traffic matrix estimation (2007)
7. Chi, M., Yang, J., Liu, Y., Li, Z.: A traffic prediction model for self-adapting routing overlay network in publish/subscribe system. Mobile Inf. Syst. 2017 (2017)
8. Waldvogel, M., Rinaldi, R.: Efficient topology-aware overlay network. ACM SIGCOMM Comput. Commun. Rev. 33(1), 101–106 (2003)
9. Migliavacca, M., Cugola, G.: Adapting publish-subscribe routing to traffic demands. In: Proceedings of the Inaugural International Conference on Distributed Event-Based Systems (DEBS'07), pp. 91–96. ACM, Ontario, Canada (2007, June)
10. Galluccio, L., Morabito, G., Palazzo, S., Pellegrini, M., Renda, M.E., Santi, P.: Georoy: a location aware enhancement to viceroy peer-to-peer algorithm. Comput. Netw. 51(8), 1998–2014 (2007)
11. Ludwig, S.A., Reyhani, S.M.S.: Semantic approach to service discovery in a grid environment. J. Web Semant. 4(1), 13 (2006)
12. Demirkan, H., Kauffman, R.J., Vayghan, J.A., Fill, H.-G., Karagiannis, D., Maglio, P.P.: Service-oriented technology and management: perspectives on research and practice for the coming decade. Electron. Commer. Rec. Appl. 7, 356–376 (2008)
13. Lua, E.K., Crowcroft, J., Pias, M., Sharma, R., Lim, S.: A survey and comparison of peer-to-peer overlay network schemes. IEEE Commun. Surv. Tutorials 7, 72–93 (2005)
14. Dimakopoulos, V.V., Pitoura, E.: Performance analysis of distributed search in open agent systems. In: Proceedings of the International Parallel and Distributed Processing Symposium. IEEE (2003)
15. Hacini, A., Amad, M.: A new overlay P2P network for efficient routing in group communication with regular topologies. Int. J. Grid Utility Comput. 11(1), 30–48 (2020)

Flanker Task-Based VHDR Dataset Analysis for Error Rate Prediction

Rajesh Kannan Megalingam, Sankardas Kariparambil Sudheesh, and Vamsy Vivek Gedela

Abstract The human brain is the most efficient computer, which works much faster than human-made computers. But when it comes to computation and logic, computers show more efficiency. Thus, BCI (Brain-Computer Interface) has many advantages and can be the basis of a new technological evolution. In this paper, we try to use this technology as a tool for effective analysis. A system is introduced that shows how well a person can do the flanker task without making an error, using the EEG signals recorded while the person performs the task. We use the machine learning algorithm LDA (Linear Discriminant Analysis) and the windowed means paradigm, which is used for observing slow-changing brain dynamics, mainly the ERP (Event-Related Potential). In our experiment, we use the data of two identical twins for the analysis. MATLAB with the BCILAB plugin is used as the platform for the analysis. Once the analysis is done, we can get the error rate of a person performing the task. We use this as a tool to increase concentration levels and give the person feedback after every trial.

Keywords BCI (Brain-Computer-Interface) · LDA · ERP · MATLAB · BCILAB

1 Introduction

In the present century, everyone is involved in many activities and studies. Each day he/she wants to do more tasks with more perception and accuracy. But multiple tasks at a time make a person lose his/her attention on individual tasks. In this paper, we focus on how to help these people by letting them know how much error they make while doing a task which involves cognitive control.


In the area of cognitive psychology, the Eriksen flanker task is utilized to evaluate the capacity to suppress responses that are unsuitable in a specific setting, by utilizing a set of response inhibition tests. Thus, we can use this task, take the EEG of the person performing it, and use our system to examine people lacking cognitive control. As per the studies, the brain shows a peculiar nature and the EEG signals vary from person to person; thus, we use the EEG data of two identical twins to show the possibility of using EEG as a biometric tool. There are three main methods to measure brain activity: non-invasive (sensors are kept on the scalp to gauge the electrical potential of the brain in EEG, or the magnetic field in MEG), semi-invasive (sensors are placed on the exposed surface of the brain), and invasive (microelectrodes are placed directly into the cortex through surgery). Here we use the non-invasive method. We predict the error rate of a person by recording the EEG signals using an EEG headset and processing them through our system. Our brain signals are always a combination of several basic base frequencies, which are considered to reflect specific psychological, emotional, or attentional states. Since these frequencies differ somewhat subject to individual factors, stimulus properties, and internal states, researchers in this field categorize them into frequency bands, i.e. certain frequency ranges: the Gamma band (>25 Hz), the Beta band (13–25 Hz), the Alpha band (8–12 Hz), the Theta band (4–8 Hz), and the Delta band (1–4 Hz). Table 1 shows some of the emotions or mental states that can be predicted by looking into the frequency range of the EEG signals. These peculiarities of each frequency band have been found through different tests and experiments. EEG signals are also used to diagnose brain tumors, strokes, and sleep disorders; thus, an EEG headset can be added to systems similar to [8, 9]. This would help doctors analyze their patients from home, so that patients can be given better treatment and be diagnosed in the early stages. In such experiments or studies, the data upon which we do the analysis matters a lot. The research paper [4] shows that meditation helps individuals improve their concentration level. It also helps individuals perform better in certain tasks that involve the cognitive skills of the brain. The authors also conducted an experiment and came up with results that make the above statement reliable.

Table 1 Normal features of each frequency band

Delta band (1–4 Hz): Dominance of the delta rhythm in the data shows the depth of sleep
Theta band (4–8 Hz): Can be used for measuring attention/concentration and mental workload
Alpha band (8–12 Hz): Meditation; significant change in value between eyes closed and eyes open
Beta band (13–25 Hz): Occupied or restless thinking and active focus; high band power while executing movements
Gamma band (>25 Hz): Emotions
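To make the band descriptions in Table 1 concrete, the short sketch below estimates per-band power for one channel with a Welch periodogram. It is a Python illustration (the paper's own analysis uses MATLAB/BCILAB), and the sampling rate, band edges, and the synthetic 10 Hz test signal are assumptions made for the demo.

```python
# Hedged illustration: per-band power of one EEG channel via Welch's method.
# Sampling rate, band edges, and the synthetic signal are demo assumptions.
import numpy as np
from scipy.signal import welch

FS = 250                                    # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 25), "gamma": (25, 45)}

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # 10 Hz "alpha"

freqs, psd = welch(eeg, fs=FS, nperseg=2 * FS)
df = freqs[1] - freqs[0]
for name, (lo, hi) in BANDS.items():
    band = (freqs >= lo) & (freqs < hi)
    power = psd[band].sum() * df            # approximate integral of PSD over band
    print(f"{name:5s} {power:.4f}")         # alpha should dominate here
```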


In our case, we are dealing with EEG signals that are recorded from the surface of the brain using electrodes. Here we need to be very careful, as there are high chances of unwanted noise signals getting mixed with the EEG signals; the mental state of the person during signal extraction also matters a lot, as the EEG is a mixture of signals. EEG signals can also be used to find the mental state of a person and to compare it with others. In another study [13], only the AF4 electrode was used to design a single-electrode portable alcohol detection system with which we can identify people who have consumed alcohol. This is also a low-cost model, as only one electrode is used. With the help of an acquisition device and proper processing techniques, the signals can even be used to control a bionic arm; [5] is a similar work in which a bionic arm controlled by EEG signals was developed and tested, using an ICA classifier to control pre-programmed mechanical tasks. Another important issue we need to concentrate on is the platform on which we do the analysis. MATLAB cannot always be used when it comes to controlling robots, as it requires good processors and computers with sufficient memory. When it comes to Android devices, the software design plays a significant role. In the research paper [6], the authors explain the use of such software and have also come up with a version.

2 Related Works

These are some of the related works in this field. In [12], the EEG signals responsible for stress/depression are analyzed and solutions for how they can be controlled are explored. Here the author mainly focused on changes in EEG signals during mood changes and on how practices like yoga and meditation help in prevention. In [7], a user-friendly wheelchair which can be used by physically disabled people is designed. Using an instrumentation amplifier, an operational amplifier, high-pass, low-pass, and notch filters, the authors designed a thought acquisition block; they also designed a thought transmission block in which a 12-bit ADC is used for digitization of the EEG signal and an ATMega644 microcontroller for transmission. In [2], the EEG signals of people undergoing audio-visual stimuli are recorded and processed to identify five emotions: disgust, fear, sadness, happiness, and the neutral state. Independent Component Analysis (ICA) is used for feature extraction, and the K-Nearest Neighbor (KNN) algorithm is used for classification. The authors also successfully identified the prominent regions of the brain responsible for these emotions. In [11], fMRI signals were used to classify the states of brain activity underlying target actions. The directional tuning properties of the primary motor cortex (M1) brain segment were discovered at the voxel level for motor trajectory decoding, which is a simple functional property of neural activity in that segment. The authors also performed a simulation demonstrating that it is possible to control a robotic arm based on multi-voxel patterns in real-time data. In [14], spatial–temporal discriminant analysis (STDA), a multi-way extension of LDA, was tested and deployed for ERP classification. By collaboratively finding two projection matrices from the spatial and temporal dimensions, this approach maximizes the discriminant information between non-target and target classes.


The study shows that this method effectively reduces the feature dimensionality in the discriminant analysis, which helps to reduce the number of training samples needed for similar studies. In [3], EEG signals recorded during sleep are analyzed to find sleep disorders. Such studies also help people identify the cause of feeling groggy or disoriented after they wake from sleep.

3 System Design and Implementation

Our system design has four main stages: data acquisition, loading data, creating an approach, and result analysis. The system architecture diagram is shown in Fig. 1. Result analysis is again divided into two parts: offline and online analysis.

3.1 Data Acquisition

The data we have used is freely available on the Internet and was downloaded from the website [15]. It is a dataset of two twins performing flanker tasks. The dataset is in the vhdr format and contains 16-channel data. The data is loaded directly into MATLAB for the experiment. No preprocessing is done on the data.

Fig. 1 System Architecture—shows the procedure used for the computation step by step



Fig. 2 Main GUI of BCILAB

3.2 Loading Data
First, we start MATLAB and load the BCILAB plugin. Figure 2 shows the main GUI of BCILAB on which we are going to work. Then we select the data source, the .vhdr file. Here we can select the channels required for analysis by index subset, range of subset, time range, and channel type. If OK is pressed without filling in these fields, the whole dataset is loaded, which is what we do here. By typing whos in the command window, we can check whether the data is loaded. Now it is time for analysis.
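For readers working outside MATLAB/BCILAB, the same loading step can be reproduced with the open-source MNE-Python library, which reads BrainVision .vhdr recordings directly. This is only an illustrative sketch; the file name is a placeholder for the downloaded dataset [15].

import mne

# Read a BrainVision recording (the .vhdr header references the data files).
raw = mne.io.read_raw_brainvision("twin1.vhdr", preload=True)

print(raw.info["nchan"])   # should report 16 channels for this dataset
print(raw.info["sfreq"])   # sampling frequency in Hz
print(raw.ch_names)        # channel labels

# Optionally restrict the analysis to a channel subset, analogous to the
# "index subset" option in the BCILAB loading dialog.
raw.pick_channels(raw.ch_names[:8])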

3.3 Creating an Approach
Here, we select the computational approach with which the data has to be analyzed. In our case, it is windowed means. The windowed means paradigm is a common technique for capturing slow-changing brain dynamics, in particular reactions to events (event-related potentials, ERPs), for example the observation of self-prompted mistakes, machine-prompted mistakes, and/or surprisal, prediction of movement intent, or overt attention. It can also be used to detect brain processes without a preceding event (i.e., asynchronously) when sufficient amounts of data from the 'nothing'/'rest' condition are included in the calibration data. Now that the approach is selected, we need to configure it. The epoch time window is taken from a negative value, so as to include data from before the button press, up to 0.8 s. Epoch intervals are specific intervals to inspect while processing the data. Epoching a signal is a technique wherein specific time windows are separated from the continuous signal (in our case, EEG). These time windows are called "epochs" and are usually time-locked with respect to an event, e.g., an audio stimulus. LDA is the machine learning function used here. Linear discriminant analysis (LDA), or discriminant function analysis, is a dimensionality reduction methodology commonly used for supervised classification problems. It is used to model differences between groups, i.e., to distinguish two or more classes. We save the approach for future use. Now, we train the model with the approach. By inspecting the data, we get a prior view of the data that we are going to use for simulation. Inspecting the data also helps us to see the target markers in the data.



Fig. 3 Data representation

Figure 3 is a representation of the data. Target markers are the list of marker types around which information will be utilized for BCI calibration; each marker type encodes a distinct target class (i.e., desired output value) to be learned by the resulting BCI model. As seen in Fig. 4, the target markers are S101, S102, S201 and S22. Five-fold cross-validation is used in our analysis.
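To make the pipeline concrete, here is a minimal sketch in Python using NumPy and scikit-learn. The synthetic epochs, window boundaries and variable names are illustrative assumptions and do not reproduce BCILAB's internals.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Epochs of shape (trials, channels, samples): 16-channel data epoched from
# -0.2 s to 0.8 s around each target marker (S101, S102, S201, S22).
n_trials, n_channels, sfreq = 200, 16, 250
epochs = rng.standard_normal((n_trials, n_channels, 250))
labels = rng.integers(0, 2, n_trials)        # 0 = no error, 1 = error

# Windowed means: average the signal inside consecutive time windows, turning
# each epoch into a small channel-by-window feature matrix (7 windows here,
# matching the seven maps of Fig. 5).
windows = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4),
           (0.4, 0.5), (0.5, 0.65), (0.65, 0.8)]
t0 = 0.2                                     # the epoch starts 0.2 s early
features = np.stack(
    [epochs[:, :, int((a + t0) * sfreq):int((b + t0) * sfreq)].mean(axis=2)
     for a, b in windows], axis=2).reshape(n_trials, -1)

# LDA with five-fold cross-validation, as used in the analysis above.
scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())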

4 Result Analysis

4.1 Offline Analysis
Now, as the approach and data are ready, we start our prediction. Figure 4 shows our first prediction: the error rate of the first twin. The true positive/negative rate is the proportion of actual positives/negatives that are correctly identified as such. The false-negative rate is the proportion of actual positives that wrongly receive negative test results. The probability that the null hypothesis for a particular test is wrongly rejected is the false-positive ratio (also considered the 'false alarm ratio'). The false-positive rate (also known as the "false alarm rate") typically refers to the expected false-positive ratio.
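These rates can be made concrete with a small sketch using scikit-learn's confusion matrix; the toy label vectors are illustrative only (1 marks an error trial).

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("true positive rate :", tp / (tp + fn))   # positives correctly flagged
print("true negative rate :", tn / (tn + fp))   # negatives correctly flagged
print("false positive rate:", fp / (fp + tn))   # the "false alarm rate"
print("false negative rate:", fn / (fn + tp))   # positives that were missed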



Fig. 4 Sample prediction of our design

We can also visualize this model with the help of a topographical mapping of the brain, as shown in Fig. 5. From it we can say that windows 1, 2, 3 and 7 have much more importance than the rest due to the weightage of the data. In Fig. 6, the seventh window of the visualization is zoomed in to show the intricate details. In the figure, we can mainly see two dipoles; these are the two main electrodes in which the signal is concentrated. From the diagrams, we can also see that the frontal lobe and parietal lobe are the major regions that give the signal. The prefrontal cortex is the major region of the brain responsible for cognitive control, and thus it helps in bringing more accuracy to our predictions. Topological analysis of the EEG is an easy and less time-consuming method. Topological data can be used in MI (motor imagery) based BCIs [1]. We are able to say this because other artifacts, such as blinks and muscle movements, which affect the EEG signals, are removed with the help of surface Laplacians for cleaning and topological localization of the data. Now we can apply the same model to a different person's data and see the result. In our case, we apply the model to the twin brother's data. The same steps are followed: the data is loaded and checked. There is no need to create a new model;



Fig. 5 Seven windows of different topological mapping of brain

Fig. 6 Topological mapping of brain of seventh window

the old model can be applied directly to the new data. The result of applying the same approach to the second twin brother is shown in Fig. 7.

4.2 Online Analysis
In the experiment, we use the online analysis to visualize our classifier's output. We read the data from the previously uploaded dataset and write the output to a MATLAB visualization tool which plots the graph. Figure 8 is a probability distribution graph between the no-error state (1) and the error state (2) of the person. This is a fluctuating graph; we can see that the no-error state is mostly higher than the error state.


Fig. 7 Prediction result for the second twin brother

Fig. 8 Probability distribution graph between a person—no error (1) and in error state (2)




5 Conclusion
In this research work, we have introduced a system that shows how well a person can do the flanker task without making an error, using the EEG signals recorded while the person performs the task. We used the machine learning algorithm LDA (linear discriminant analysis) and the windowed means paradigm, which is used for observing slow-changing brain dynamics, mainly ERPs (event-related potentials). In our experiment, we used the data of two identical twins for the analysis. MATLAB with the BCILAB plugin is used as the platform for the analysis. We also presented the results of our findings. The first twin (Fig. 4) has an error rate of 4.8%, which means he is correct 95.2% of the time, and the second twin (Fig. 7) has an error rate of 11%, which means he is correct 89% of the time. Figure 8 shows the change from the error state to the non-error state. The percentage values that we got from the experiment clearly explain why state (1) mostly dominates state (2), as they show the no-error and error states, respectively. We also see that even though we take EEG data from twin brothers, there is still a difference in the error rate. This also shows the possibility of using EEG as a biometric tool for identifying a person's signature, given that present fingerprint and face recognition are not fully accurate.

6 Future Work
Here, we have only used LDA and the windowed means paradigm. We can try many other machine learning algorithms, for example SVM, which is used for classification as well as regression analysis, or QDA, a variant of LDA in which an individual covariance matrix is estimated for each class. We also plan to conduct a study on the error rates of normal people and people diagnosed with autism, and to develop and implement a system similar to [10], in which EEG signals are used to control devices.
Acknowledgements We would like to express our sincere gratitude to Almighty God for giving us the opportunity to carry out the research in a successful manner. We extend our sincere thanks to Amrita Vishwa Vidyapeetham and the Humanitarian Technology (HuT) Lab for aiding us in this venture.

References 1. Altindis, F., et al.: Use of topological data analysis in motor intention based brain-computer interfaces. In: 2018 European Signal Processing Conference (EUSIPCO), pp. 1695–1699 (2018). https://doi.org/10.23919/EUSIPCO.2018.8553382



2. Chinmayi, R., et al.: Extracting the features of emotion from EEG signals and classify using affective computing. In: Proceedings of 2017 International Conference on Wireless Communication & Signal Processing Networking (WiSPNET 2017), pp. 2032–2036 (2018). https://doi.org/10.1109/WiSPNET.2017.8300118 3. Chinmayi, R., et al.: Localization of spatial activity in sleep EEG. In: 2018 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC 2018), pp. 1–3. IEEE (2018). https://doi.org/10.1109/ICCIC.2018.8782423 4. Eskandari, P., Erfanian, A.: Improving the performance of brain-computer interface through meditation practicing. In: Proceedings of 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS'08), "Personalized Healthcare through Technology", pp. 662–665 (2008). https://doi.org/10.1109/iembs.2008.4649239 5. Gayathri, G., et al.: Control of bionic arm using ICA-EEG. In: 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT 2017), pp. 1254–1259 (2018). https://doi.org/10.1109/ICICICT1.2017.8342749 6. Gutierrez, L., Husain, M.: Design and development of a mobile EEG data analytics framework. In: 2019 IEEE Fifth International Conference on Big Data Computing Service and Applications (BigDataService), pp. 333–339 (2019). https://doi.org/10.1109/BigDataService.2019.00059 7. Megalingam, R., et al.: Thought controlled wheelchair using EEG acquisition device. In: 3rd International Conference on Advancements in Electronics and Power Engineering, pp. 207–212 (2013) 8. Megalingam, R.K., et al.: Assistive technology for elders: wireless intelligent healthcare gadget. In: Proceedings of 2011 IEEE Global Humanitarian Technology Conference (GHTC 2011), pp. 296–300 (2011). https://doi.org/10.1109/GHTC.2011.94 9. Megalingam, R.K., et al.: HOPE: an electronic gadget for home-bound patients and elders. In: 2012 Annual IEEE India Conference (INDICON 2012), pp. 1272–1277 (2012). https://doi.org/10.1109/INDCON.2012.6420814 10. Megalingam, R.K., et al.: IoT based controller for smart windows. In: 2020 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS 2020) (2020). https://doi.org/10.1109/SCEECS48394.2020.148 11. Nam, S., et al.: Motor trajectory decoding based on fMRI-based BCI—a simulation study. In: 2013 International Winter Workshop on Brain-Computer Interface (BCI 2013), pp. 89–91 (2013). https://doi.org/10.1109/IWW-BCI.2013.6506641 12. Tiwari, A., Tiwari, R.: Monitoring and detection of EEG signals before and after yoga during depression in human brain using MATLAB. In: Proceedings of International Conference on Computing Methodologies and Communication (ICCMC 2017), pp. 329–334 (2018). https://doi.org/10.1109/ICCMC.2017.8282702 13. Vinothraj, T., et al.: BCI-based alcohol patient detection, pp. 1–6 (2018). https://doi.org/10.1109/ifsa-scis.2017.8305564 14. Zhang, Y., et al.: Spatial-temporal discriminant analysis for ERP-based brain-computer interface. IEEE Trans. Neural Syst. Rehabil. Eng. 21(2), 233–243 (2013). https://doi.org/10.1109/TNSRE.2013.2243471 15. Flanker task data "EEG dataset of Identical twins"

Integrating University Computing Laboratories with AWS for Better Resource Utilization

Kailash Chandra Bandhu and Ashok Bhansali

Abstract Computing and storage are prime requirements for running and storing business processes and other applications on the cloud. With the emergence of Industry 4.0, applications are moving to cloud infrastructure because of the continuously increasing demand for more computing power and storage than individual machines can offer. Cloud computing also offers better security, management and costing. Generally, a lot of the computer laboratories of a university sit idle during the night, and hence they can be utilized effectively as cloud infrastructure, especially during the daytime of other countries. In this paper, a model is suggested to enhance the utilization of the computer laboratory resources of various universities located in different countries by integrating them with the AWS cloud computing platform. This paper discusses the implementation and deployment of the model using VMware vSphere ESXi and vCenter server, which are configured in the computer laboratory, and AWS. Keywords Cloud · vCenter server · Computing · AWS · Resource utilization

1 Introduction
In today's scenario, cloud computing is a popular platform for hiring computing, storage and other resources on a rental basis to deploy business applications. Cloud service providers have a huge setup of interconnected infrastructure at various locations across the globe, allowing customers to configure their business on the cloud from any location of their choice. Universities have a whole lot of computing resources at their disposal that sit idle for a considerable amount of time, especially during the night hours. If we can integrate these computer laboratories with cloud service providers, then these idle resources can be utilized in a better way. Presently, we have many cloud service providers in the market like Microsoft
K. C. Bandhu (B) Medi-Caps University, Indore, India
A. Bhansali OP Jindal University, Raigarh, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_9




Azure, Amazon Web Services, IBM, Salesforce, etc., but most of them do not have features to integrate computer laboratory infrastructure with their public cloud, and some of them allow integration but make this infrastructure difficult to manage. The AWS management portal provides facilities to manage on-premises resources by using VMware vCenter server on the AWS cloud. A plug-in is installed inside the existing vCenter server environment. After installation, it enables migration of VMware vSphere virtual machines to Amazon EC2 and manages the resources inside vCenter server [1]. It also allows migration of the vCenter server workload to the AWS vCenter server. VMware vSphere provides a virtualization platform which helps integrate different resources such as CPU, storage and network. vSphere manages these infrastructures in a unique operating environment by providing different tools [2]. The vSphere ESXi hypervisor is a type 1 hypervisor which can be installed directly on hardware without any operating system; it is also known as a bare metal hypervisor. This hypervisor allows the creation of multiple virtual machines on common hardware, and these virtual machines can be managed and accessed by vSphere clients. The vCenter server is server management software which allows managing and controlling vSphere environments centrally, such that all virtual machines exist in vSphere environments. It provides features to manage and control multiple vSphere ESXi hosts using single sign-on. It also provides the facility to automate and deliver a virtual infrastructure across the hybrid cloud with centralized visibility, simplicity, efficiency, scalability and extensibility [3].

2 Literature Survey
Many researchers have worked on cloud computing issues, challenges, security and applications; some of them are listed in this section. Luna et al. presented a model for implementing virtual laboratories in universities for teaching and learning using cloud computing services such as software as a service, platform as a service and infrastructure as a service on a social cloud [4]. It also integrates MOOCs and other aspects of social networking and support on the cloud. Dukhanov et al. explained the design and implementation of virtual laboratories using cloud computing with the help of application as a service [5]. They proposed a model for composite applications and some learning models through cloud-based virtual laboratories. The software tools presented were used to enable automation and configuration of the virtual laboratory, which was based on the cloud computing platform CLAVIRE. Traditional computing resources of a university face many challenges during the initial setup and subsequent maintenance. A lot of initial investment and maintenance cost is associated with them, and it has been observed that utilization is quite uneven: at some places a lot of computing resources sit idle, while at other places they are lacking. Quan et al. proposed a solution, based on cloud computing, to optimize the usage of traditional computer laboratories and resources in a university [6].



The proposed solution can help in many different ways, like saving cost, improving data security, proper management of the resources and many more. Software testing is an important phase of the SDLC and is part of the curriculum for computer science and allied courses in many universities. With the emergence of cloud computing and IaaS, SaaS and PaaS services, many companies have moved software testing to the cloud. Wen et al. proposed a design of a university software testing laboratory based on the cloud concept [7]. It helps optimize the use of existing hardware resources and services in an efficient and effective manner. Students and faculty can log in and use the cloud testing services over a VPN. It makes executing software testing experiments convenient and interesting, and adds a lot of excitement and enthusiasm among students and teachers. Murphy et al. used the technology developed by North Carolina State University and deployed a pilot virtual computer laboratory in a university [8]. The purpose was to provide high-end computing resources to end users over the Internet. The implementation was carried out to understand the scalability and optimization offered by cloud virtualization when implementing university laboratories over the cloud. In the current scenario, many universities have a large number of scattered computer laboratories, central computing facilities, and distributed and central storage space for data. This approach leads to many problems related to resource utilization, work management, annual maintenance, securing the data and more. Shi et al. implemented a laboratory using virtualization and cloud computing and got satisfactory results with the new approach [9]. It provides better collaborative opportunities, data security, optimization of resources and scalability. It is also suggested that the integration of resources and working procedures can be standardized to achieve better results and convenience of implementation. The different services and models of cloud computing were described in detail by Li [10], along with a thorough discussion of data security and other issues from the research and implementation points of view. Cloud computing offers a lot of advantages, and many applications and a lot of data are getting migrated to hybrid or public clouds. But many large organizations are hesitant to move their business data and other critical information to the cloud because of privacy, security and safety concerns. A framework for analyzing data security and privacy protection during data migration was proposed by Shakya [11]. They established a secured socket layer (SSL) and minimum-privilege tickets for data migration. Data is separated into two baskets: sensitive data and not-so-sensitive data. For securing the sensitive data, they used prediction-based encryption (PBE). This could be useful in the e-commerce and healthcare domains for storing credit card details and other critical data. Quite often, organizations use cloud computing because it can offer an array of feature-rich services well in time. A wide range of customers can be offered different types of services dynamically. Timely servicing over the cloud is a challenging and tedious job, and in order to handle different tasks efficiently, the cloud uses different scheduling approaches.
Bhalaji proposed a genetic algorithm-based approach to minimize the delay during rendering of the services [12]. The proposed



algorithm was validated using a cloud simulator to calculate the delay, efficiency and quality of service. Cloud-based data centers help improve scalability and service availability. Services like video streaming, storage and other network-intensive applications pose new challenges to the cloud environment and to the placement of virtual machines (VMs) in the data center. Tseng et al. present an integer linear programming-based network-aware VM placement optimization (NAVMPO) problem statement, the purpose of which is to reduce the communication time of VMs for similar services [13]. They proposed two algorithms, namely the service-oriented physical machine selection (SOPMS) algorithm and the link-aware VM placement (LAVMP) algorithm. To optimally utilize the channel capacity, SOPMS selects the best physical machine and LAVMP selects the best VM to target the physical machine. The sensor cloud is a concept that combines cloud computing with wireless sensor networks. Pricing models play a very important role in adoption and usage by industry and people. Zhu et al. present a comprehensive study of the sensor cloud and its pricing models [14]. They proposed a pricing model based on lease period, usage time, resource requirements, volume of the sensory data, etc. They also discussed case studies for applications of sensor cloud pricing models (SCPMs). Cloud providers typically engage customers with a pay-per-use model, for which quality of service (QoS) and service level agreements (SLAs) are very important factors. The complexity, dynamism and continuous evolution of cloud models make offering the required QoS to different users without violating agreed-upon SLAs challenging and complex. Singh et al. discussed STAR, an SLA-aware resource management mechanism [15]. It aims at reducing SLA violations, managing QoS and efficiently optimizing the delivery of cloud services.

3 Integration Model
In the existing model, the customer data center, which includes computing, storage and network on a private cloud, is created using VMware vSphere and is integrated with the AWS cloud. As shown in Fig. 1, the customer data centers are dedicated infrastructure integrated with the AWS cloud using vCenter server, and they are also integrated with AWS services. Figure 2 shows the proposed model to integrate the computer laboratory infrastructure with the AWS cloud, managed and controlled by VMware vCenter server on the VMware vSphere cloud on AWS. Using this model, the idle infrastructure of computer laboratories of various universities across the globe can be utilized in an optimal fashion. It provides extra computing power, storage and network in a virtualized form on top of the existing infrastructure of a public cloud and thus enhances the availability of resources. All the features of the private cloud, which is configured in the computer laboratory, are made available to the end user through the AWS public cloud platform at minimum cost.



Fig. 1 Existing model for integration of customer data center and AWS

Fig. 2 Model for integration of computer laboratory infrastructure and AWS

When the computer laboratories are in use during the day time, the workload, which was running in the computer laboratory, is migrated to an AWS EC2 instance so that services never go down. When the computer laboratories are not in use, the workload is migrated back to the private cloud in the computer laboratories. Security is a very important factor to take into consideration. To make the cloud secure, a VPN can be configured to the SDDC using AWS Direct Connect or the public network; a route-based or policy-based VPN can be implemented. Networking and security can be configured using the setup networking and security wizard, which provides a step-by-step procedure through which the vCenter server in the SDDC can be accessed. The default DNS server can also be updated as per the requirements.
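The day/night hand-off can also be scripted. Below is a minimal sketch, assuming boto3 and a pre-provisioned standby EC2 instance; the instance ID, region and laboratory hours are placeholder assumptions, and the actual VM migration is performed by the vCenter/SDDC tooling rather than by this script.

from datetime import datetime

import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical standby instance
LAB_OPEN, LAB_CLOSE = 8, 20           # local laboratory hours (08:00-20:00)

ec2 = boto3.client("ec2", region_name="ap-south-1")

def reconcile():
    hour = datetime.now().hour
    if LAB_OPEN <= hour < LAB_CLOSE:
        # Laboratories busy during the day: serve the workload from AWS.
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
    else:
        # Laboratories idle at night: the private cloud takes over again.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])

if __name__ == "__main__":
    reconcile()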



4 Implementations
The proposed model can be implemented using tools and services provided by VMware and AWS, as shown in Fig. 3. The procedure for integrating computer laboratory infrastructure with AWS is as follows:

Step 1: Configure the VMware vSphere ESXi hosts in the computer laboratory:

i. Establish a local area network of computers in the computer laboratories by using networking devices with an Internet connection.
ii. Install VMware vSphere ESXi on all the computers of the laboratories which are on the local area network and set the login credentials and a private IP address using the DHCP protocol.
iii. Manage and control the ESXi hosts through the VMware vSphere client software, create multiple virtual machines (as per the computing, storage and memory requirements of the customer) on every ESXi host, install the operating system of the customer's choice and deploy the customer application.

Step 2: Configure the VMware vCenter server in the computer laboratory:

i. Select any one high-configuration machine as the server in the laboratory; it should be connected to the local area network (it should not be an ESXi host).
ii. Install a server operating system followed by VMware vCenter server and set the login credentials and a public IP address.
iii. The single vCenter server can configure, manage and control multiple ESXi hosts with multiple virtual machines using single sign-on.
iv. Login to the vCenter server using its credentials and add all ESXi hosts into it using the credentials of the individual ESXi hosts, taking control of them so that no other vCenter server can manage the same hosts.
v. After adding all ESXi hosts, the vCenter server has control of all the virtual machines running on each ESXi host.

Step 3: Configure the vCenter server on AWS using VMware cloud services:

i. Create an account on the VMware cloud services portal.
ii. Login to the VMware cloud services console using the credentials and select the VMware cloud on AWS option.
iii. Create the software-defined data center (SDDC) in any available AWS region with single/multiple hosts, connect it to the AWS account, specify the VPC and subnet, and configure the network.
iv. Open the VMware vCenter server from the SDDC, login using the credentials, then create logical networks and a content library, add a virtual machine to it and start it.


Fig. 3 Integration process flow




v. Select the Administration option from the menu, then select link domain in hybrid cloud and link the computer laboratory VMware vCenter server.
vi. Login to the computer laboratory VMware vCenter server, from which both the computer laboratory data center and the SDDC on AWS can be managed.
vii. Migrate the VMware vSphere ESXi virtual machines to VMware cloud on AWS and to EC2 instances.

The proposed integration helps in sharing VMware vSphere ESXi-based workloads over AWS, which offers faster, easier and more cost-effective solutions for customers. It helps manage the vCenter server of the VMware vSphere cloud on AWS through the local laboratory vCenter server. The same virtual machine exists at two different locations, which also provides a safeguard against disaster.

5 Conclusion
This paper suggests an approach and a model to integrate computer laboratory resources with AWS using VMware vCenter server. It outlines the process to set up and configure private clouds in computer laboratories and provide the computing resources to customers to deploy their business applications, directly managed by the vCenter server in the laboratory. It also describes the migration approach for virtual machines running on ESXi hosts of the private cloud to EC2 instances. The approach proposed above can indeed be very helpful in improving the utilization of the computing resources of universities during idle time. However, frequent migration takes a lot of time and effort and is a challenging task. Automating it is part of future research and could help in better resource optimization.

References 1. AWS Management Portal for vCenter Server: https://aws.amazon.com/ec2/vcenter-portal/ 2. VMware vSphere Documentation: https://docs.vmware.com/en/VMware-vSphere/index.html 3. vCenter Server: Simplified and Efficient Server Management. https://www.vmware.com/in/products/vcenter-server.html 4. Luna, W., Castillo-Sequera, J.: Model to implement virtual computing labs via cloud computing services. Symmetry 9, 117 (2017). https://doi.org/10.3390/sym9070117 5. Dukhanov, A., Karpova, M., Bochenina, K.: Design virtual learning labs for courses in computational science with use of cloud computing technologies. Proc. Comput. Sci. 29, 2472–2482 (2014). ISSN 1877-0509. https://doi.org/10.1016/j.procs.2014.05.231 6. Quan, J.J., Yang, J.: Laboratory construction and management of university based on cloud computing (2014). https://doi.org/10.2991/icssr-14.2014.33



7. Wen, W., Sun, J., Li, Y., Gu, P., Xu, J.: Design and implementation of software test laboratory based on cloud platform. In: 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C), Sofia, Bulgaria, pp. 138–144 (2019). https://doi.org/10.1109/QRS-C.2019.00039 8. Murphy, M., Mcclelland, M.: Computer lab to go: a "Cloud" computing implementation. 7, 30–1055 (0002) 9. Shi, L., Zhao, H.R., Zhang, K.: Research of computer virtual laboratory model based on cloud computing. Appl. Mech. Mater. 687–691, 3027–3031 (2014). https://doi.org/10.4028/www.scientific.net/amm.687-691.3027 10. Li, Q.: Research on cloud computing services in data security issues. Appl. Mech. Mater. 687–691, 3032–3035 (2014). https://doi.org/10.4028/www.scientific.net/amm.687-691.3032 11. Shakya, S.: An efficient security framework for data migration in a cloud computing environment. J. Artific. Intell. 1(01), 45–53 (2019) 12. Bhalaji, N.: Delay diminished efficient task scheduling and allocation for heterogeneous cloud environment. J. Trends Comput. Sci. Smart Technol. (TCSST) 1(01), 51–62 (2019) 13. Tseng, F.H., Jheng, Y.M., Chou, L.D., Chao, H.C., Leung, V.C.M.: Link-aware virtual machine placement for cloud services based on service-oriented architecture. IEEE Trans. Cloud Comput. 8(4), 989–1002 (2020). https://doi.org/10.1109/TCC.2017.2662226 14. Zhu, C., Li, X., Leung, V.C.M., Yang, L.T., Ngai, E.C.H., Shu, L.: Towards pricing for sensor-cloud. IEEE Trans. Cloud Comput. 8(4), 1018–1029 (2020). https://doi.org/10.1109/TCC.2017.2649525 15. Singh, S., Chana, I., Buyya, R.: STAR: SLA-aware autonomic management of cloud resources. IEEE Trans. Cloud Comput. 8(4), 1040–1053 (2020). https://doi.org/10.1109/TCC.2017.2648788

IoT-Based Control of Dosa-Making Robot

Rajesh Kannan Megalingam, Hema Teja Anirudh Babu Dasari, Sriram Ghali, and Venkata Sai Yashwanth Avvari

Abstract The main purpose of this research work is to propose a suitable IoT-based control method for a dosa-making robot. There are several IoT-based control methods: virtual reality, mobile phone, website, joystick, etc. We have used mobile phone-based control of cleaning, dosa batter flow, oil flow, speed and temperature, using IoT, in the dosa-making robot. A simple Android-based app was developed so that users can easily control the robot. The robot is automated using a programmable logic controller. The mechanism used for heating is a slip ring mechanism, in order to get rid of the need for gas or fire. The experiments and results show that mobile phone-based control of cleaning, dosa batter flow, oil flow, speed and temperature using IoT is one of the simplest and easiest ways to control dosa-making robots. Keywords PLC · Slip rings · Internet of things · Node MCU · GPIO pins · L-shaped clamps

1 Introduction
Dosa is one of the most favorite breakfast items all over India, particularly in South India. People consume it as an evening snack or sometimes as dinner, along with favorite side dishes like chutney and sambar. Dosa, which is mostly circular or oval in shape, comes in a variety of preparations, from a thinner one (thinner
R. K. Megalingam (B) · H. T. A. B. Dasari · S. Ghali · V. S. Y. Avvari Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
e-mail: [email protected]
H. T. A. B. Dasari e-mail: [email protected]
S. Ghali e-mail: [email protected]
V. S. Y. Avvari e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_10




than a chappathi) to a thicker one like a pancake. The most common ones are the masala dosa, ghee roast, uthappam, etc. The batter used in making dosa has to be prepared in a specific way. Rice and urad dal (split grams) are soaked in water for a few hours the previous day. Each is ground separately, mixed together along with salt, and left to ferment overnight, for 8–10 h. The fermented mixture is used to make dosas on dosa-making pans. The first thing that comes to mind when we hear the word robot is a machine that imitates a human being; mostly, we think of a humanoid robot. Robots are human-made machines, some of which help us investigate and gain knowledge about places that are dangerous for us, like the atmospheric conditions underwater or the climatic conditions on other planets. Robots are used in agriculture, healthcare, surgery, search and rescue, services, underwater, climbing, etc. Robots can be autonomous or semi-autonomous. Any robot works on human commands given through a controller interface. Autonomous robots work according to the commands provided in the software (algorithms), whereas semi-autonomous robots take commands from humans and can be operated through joysticks, computers, etc. In many sectors, robots have replaced humans, showing their performance in complex situations. The change in time brought many changes in the field of automation. Nowadays, some companies are run using robots alone, and many are interlinking their intellect with robots, resulting in remarkable performance. There are many food-producing robots, like pancake-making robots, drink-dispensing robots, ice cream-making robots, and soda-making robots. For example, there is a cocktail-dispensing machine called "bartendro." It makes very tasty cocktail drinks in large numbers, quickly and without mess. It is small enough to be carried to holiday trips, parties, etc., and makes nearly 200 cocktails in an evening. The Hamilton Beach Flip Belgian is a pancake maker which is compact, easy to clean, long lasting, non-technical, etc. From all of this, we can simply say that the purpose of designing a robot is to save time and money. Developing a robot for dosa making is not a simple task either. In the work presented in this paper, we explain the IoT-based control methods that we used to control a dosa-making robot. In addition, we also present several wireless protocols used for controlling IoT devices. The robot used in our work can make dosas on its own without the help of any humans. Manual cooking consumes a lot of time and effort, but the proposed robot can make a lot of dosas within a short period without any human effort. This system does not require any human effort other than the operator and can make more dosas compared to human workers. This robot was developed at the Humanitarian Technology Laboratories of Amrita Vishwa Vidyapeetham University.



2 Problem Statement
As the population keeps increasing day by day, providing basic facilities like food, accommodation and clothing is a need of the day, and the demand for food increases rapidly. People, particularly the working class, look for quick options wherever they go: hotels, restaurants, eateries, etc. They do not have time to wait long at these places, and automation helps a lot in such a scenario. An automatic dosa-making robot saves a lot of the time needed to prepare dosas so that they can be served to customers on time in hotels, restaurants, etc. In addition, there is a lack of proper, user-friendly, simple control methods for automated robots. People spend a lot of time understanding the control methods and user interfaces, which may also involve some coding that a technical operator might not know. Engineers are engaged to control robots instead of technicians due to the complexities involved in using available control methods and user interfaces. In the case of the dosa bot, manual cooking consumes a lot of time and effort. We propose a simple, user-friendly mobile app to control the dosa-making robot. The app does not require any prior experience or expertise and can be learned in a very short duration; we designed it so simply that anyone can use it. Once we give a command through the mobile phone app, the robot starts working automatically, cooking dosas. Through the IoT-based app, we can also keep track of how many dosas were cooked, how much time it took to cook them, the temperature of the dosa-making plate of the robot, etc. Moreover, the dosa-making robot is fully automated and works on IoT. There are many dosa makers available today, but none that are independently controlled over IoT. This research work has the added advantage of controlling the dosa-making robot using IoT.

3 Related Works
In research paper [1], a simplified approach for speed control of a separately excited DC motor using a programmable logic controller (PLC) is presented. This approach is based on supplying a variable DC voltage to the DC motor armature circuit from a fixed DC supply voltage via a PLC which is used as a DC/DC chopper. Contactless slip rings (CS) based on inductive power transfer (IPT) offer a secure and good power transfer solution for rotary applications. An accurate reluctance model for CS, considering the partial linking effect of the magnetic flux, is introduced along with an associated parameter identification method in paper [2]. The research work [3] describes that most commercially available switched-mode power supplies (SMPS) with multiple outputs use multiple DC/DC converters, which increase the cost and complexity of the system and decrease reliability. The operation of an elevator system using a programmable logic controller (PLC) is presented in [4]. The system is organized to drive a DC motor in front and back motoring modes with sensors at each floor. The emergency STOP switch is used for maintenance



purposes or to stop any accident inside the lift, and a door switch is used for safety purposes. We see that several modes of operating devices are proposed in these research works. The Internet of Things is used to operate devices and robots, even remotely. For example, it is used in the process of filling a water container, which helps to open and close the pump or valve and to monitor the water level in the container; a detailed description of the process and its working is presented in papers [4, 5]. To provide start-up current for integrated circuits (ICs), a high-voltage start-up current source is always required in an integrated switched-mode power supply (SMPS), which is explained in paper [6]. Paper [7] describes the usage of a PLC in controlling the speed of cutting machines, where the proportions of the two-pace control have greater randomness. PLCs are flexible in automating various processes, since the behavior of the system can be changed without changing the electrical connections, in addition to being able to monitor the system in operation, as explained in paper [8]. The compound DC motor is broadly utilized, for example, in lifting and traction. Metal producers try to improve the pace of their processes and restrict bottlenecks in order to attain outstanding performance in terms of production capacity, as described in paper [9]. Paper [10] describes a wireless interface for controlling a device via a website. To attain position control of a permanent magnet DC motor, the analog I/O of a PLC has been used, with an actuated control valve to control the liquid flow in a flow control system, as explained in paper [11]. We see that PLCs are widely used in automation; in our automatic dosa-making robot too, we use PLCs. Node MCU interfacing with Wi-Fi is carried out using the ESP8266 module mounted on the Node MCU, through which the data is transferred, as explained in papers [12, 13]. Controlling IoT devices through Wi-Fi is one technique to control IoT. Data transmission through Wi-Fi, Bluetooth, or X-Bee is proposed in papers [14, 15]. The research paper [16] is about IoT monitoring and wireless control of a system. The experimental parameters are monitored, and if they reach a certain threshold value, the system reacts as per the programmed algorithm. In our case, the parameter monitored by IoT is time: when the time reaches the threshold value, the system resets the timer. Paper [17] describes the working of a flow control electronic valve using field programmable gate arrays (FPGAs).

4 Wireless Protocols Used for Controlling IoT Devices
The Internet of Things (IoT) refers to a network comprised of physical objects capable of assembling and sharing electronic data. IoT finds its way into many places, like home appliances, appliances with remote monitoring capabilities, and many big companies throughout the world. In one of the research works at the HuT Laboratories, we used an IoT-based crop protection system to scare away animals which destroy crops and are being killed by farmers, thereby saving both.



IoT devices can be controlled in many ways, like apps and websites. IoT devices can also be controlled through protocols. There are many protocols for controlling IoT; Bluetooth and Wi-Fi are the most commonly used. Mobile phones play a key role in controlling these IoT devices through protocols: we can connect a mobile phone to any IoT device using Bluetooth or Wi-Fi. There are many alternative ways to control IoT devices other than Bluetooth and Wi-Fi, like Zigbee, XMPP, MQTT, DDS, and AMQP. Zigbee modules are used in controlling IoT devices because they consume less power, with moderate throughput (up to 250 kbps) and a connectivity range of 100 m between nodes. Like Bluetooth and Wi-Fi, Zigbee works as two-way communication between sensor and control system. We can connect to a Zigbee module by Ethernet cable or Wi-Fi network. We discuss below the different methods/approaches to control IoT devices.

A. Controlling IoT devices through messaging services:

eXtensible Messaging and Presence Protocol (XMPP): XMPP is a streaming protocol. It is an open protocol. It helps us exchange XML fragments between network endpoints. XMPP allows users to access networks using other protocols as well. This protocol is mostly used by instant messaging services like WhatsApp.
Message Queuing Telemetry Transport (MQTT): It provides low power consumption for devices. It works on the transmission control protocol/Internet protocol (TCP/IP) and allows two or more computers to communicate simultaneously. It is a publish-subscribe messaging protocol, meaning that publishers collect data and send details to subscribers through a mediation layer entitled the mediator (mediator architecture); a minimal sketch of this pattern follows at the end of this subsection. It responds well to minimum bandwidth and supports operating wireless networks with good accuracy and compact processing and memory resources. A publisher might be any device, like a temperature sensor, while a subscriber will be either a mobile or a computer.
Data Distribution Service (DDS): It was developed based on the publish-subscribe methodology. The DDS protocol for real-time machine-to-machine communication enables scalable, accurate, high-performance and interoperable data interchange among connected devices, independent of software and hardware platforms. It supports a brokerless architecture. The DDS protocol is used in aviation service controls, grid management, autonomous vehicles, robotics, etc.
Advanced Message Queuing Protocol (AMQP): It enables encrypted and interoperable messaging between organizations and applications. It is an open-standard publish/subscribe protocol. It is similar to MQTT; the difference is that MQTT has a client/mediator architecture, whereas AMQP has a client/mediator or client/server architecture, and both MQTT and AMQP run over the transmission control protocol.
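As an illustration of the publish-subscribe pattern that MQTT follows, here is a minimal sketch using the paho-mqtt client in its 1.x callback style; the broker address and topic name are placeholders, and any MQTT broker (the "mediator") would do.

import paho.mqtt.client as mqtt

BROKER = "broker.example.org"            # hypothetical broker host
TOPIC = "kitchen/dosa/temperature"

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)              # subscriber: register interest

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# Publisher side: e.g. a temperature sensor pushing a reading to the broker.
client.publish(TOPIC, payload="182.5")

client.loop_forever()                    # process network traffic/callbacks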

B. Controlling IoT devices through smartphone apps:
Blynk: This is supported by connecting to Node MCU, Raspberry Pi, etc. We can connect Blynk through Wi-Fi, Bluetooth, and Ethernet. We can send any data from widgets in the app to the IoT hardware.



We can map the incoming values to a specific range by using the mapping button. A timer can be set using this application. Step control is used to set granular values with a given step: using two push buttons, we can increment or decrement the step. We can also display temperature using a gauge widget.
Cayenne—IoT Project Builder: It works with both Arduino and Raspberry Pi. Cayenne is an online IoT platform that takes most of the complications out of hardware-oriented programming. Cayenne has all the standard sensors and actuators that show up in projects, and you can get them up and running in no time, mainly by just dragging and dropping. We can use Cayenne to work with both Arduino and Raspberry Pi devices simultaneously, if needed. This makes it possible to use a combination of devices, for example in a home automation project, and have a sensor in one device power another. To make things automatic, we can define triggers, namely the "if … then" rules that take a sensor state as the "if" part and the trigger state as the "then" part. We can do things like turn something off when the temperature reaches a threshold, and we can send notifications to the outside world using SMS or email.
RemoteXY: It is supported by the Bluetooth HC-05 and HC-06 and Arduino communication modules and the Wi-Fi ESP8266. Using a smartphone or tablet with a graphical interface, RemoteXY manages microcontroller devices. We can manage a large number of devices with different graphical interfaces using a single mobile application. RemoteXY makes it easy to generate a unique graphical interface to manage microcontroller devices using a single mobile application.

C. Controlling IoT devices through websites:

The following process is used to control IoT devices from a website.

1. Materials required: PC with Internet access, Arduino UNO board, Wi-Fi module or GSM module, mobile phone with Internet access and Wi-Fi/hotspot compatibility, relay module, power supply, and potentiometers.

2. Creating the website: Login to the hosting site, click add, and enter the website name and required password. Click create to generate the website, then build the website using the options given.

3. Creating a domain: Enter the generated website and click my domains in the menu bar. Then, on the screen, click the add domain option. Click add domain on the domains page and click free subdomain in the window. Insert the sub-domain name and save it. This will create the domain.

4. Creating a database: Generate your own PHP database. Click on new database, then click create after inserting the database name, username, and password. This will create the database.

5. Managing the database: Generate the required tables in the PHP database accordingly. Click import on the page to import tables from the given files. This will create our own IoT pages.

6. Upload to the website: Upload the pages to the website.

7. Testing the Arduino program: Upload the code. When we run the code, we will find the sensor value posted in our database.
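As an illustration of this final step, a sketch of posting a sensor reading to such a PHP endpoint is given below; the URL and field names are placeholders for whatever the generated pages actually use, and on the Arduino itself the equivalent request would be issued through the Wi-Fi or GSM module.

import requests

# Hypothetical endpoint exposed by the PHP pages created above.
ENDPOINT = "http://example-iot-site.example/add_reading.php"

reading = {"sensor": "potentiometer1", "value": 512}
resp = requests.post(ENDPOINT, data=reading, timeout=5)
print(resp.status_code, resp.text)  # the value should now appear in the table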

5 System Architecture
Figure 1 shows the system architecture diagram of the automatic dosa-making robot with an IoT-based interface to control the robot with a smartphone. Firstly, we need to connect to the Node MCU through a mobile phone.

Fig. 1 System architecture of dosa-making robot



We used the Blynk app to connect to the Node MCU. The control signal of the Node MCU is connected to the relay module, and the relay is connected to the PLC. So when we press the start button in our mobile phone app, the Node MCU sends a control signal to the relay module. The IoT-based control mechanism is explained in detail in the IoT-based control section. Node MCU is an open-source platform for connecting to the Internet of Things; it contains a Wi-Fi module, the ESP8266, to connect to the smartphone. A slip ring is an electromechanical device which enables the transmission of power from stationary objects to rotating objects. In the dosa bot, we used slip rings to transmit power from the stationary power supply to the rotating part; they work on an AC power supply. PLCs are used in industrial applications where large amounts of power are handled. They provide users with easy programming and highly reliable control, and help automate many machines and devices used in industrial applications. Solenoid valves are used to open and close the valves of the containers holding batter, water, and oil.

6 Design and Implementation
Figure 2 shows the dosa-making robot that we used for IoT-based control. In the design, we use three containers, one each for dosa batter, water, and oil. A pulley is used to rotate the pan with the help of a DC motor. The roller is for spreading dosa batter on the pan, and the dosa collector helps in picking up the cooked dosa. The two L-shaped clamps hold the dosa spreader (roller), the dosa collector, and the containers (batter container, oil container, and water container).
Fig. 2 Dosa-making robot



Fig. 3 Heating element

Figure 3 shows the dosa bot from the underside of the pan. The coils attached to the pan heat it: through the slip ring mechanism, current is passed to the coils, thereby heating the pan. The outer ring shown in Fig. 3 supplies heat to the edges of the pan through the extra fittings shown in Fig. 2. The process of making a dosa using the bot involves several steps. The rotating pan is heated to the required temperature and starts rotating. The water container sprinkles water on the pan first. As the pan keeps rotating, the batter container dispenses the amount of batter required for making a dosa over the surface of the pan where water was sprinkled earlier; the water will have evaporated by the time the pan rotates to reach the batter container. The next step is spreading the dosa batter: the roller spreads the batter, followed by the oil container sprinkling oil over the dosa. The dosa has enough time on the pan to be cooked well by the end of one rotation since the sprinkling of the water. At the end of one rotation, the dosa collector removes the dosa from the pan, and the hot dosa is ready for consumption.

7 IoT-Based Control
The Node microcontroller unit (MCU) is an open-source firmware and development board which can be used to design and build IoT-based product prototypes. In addition, it is very cheap and is an open-source IoT platform used by several developers. The Node MCU is used for turning motors and valves on or off in our design. We can do this by simply connecting the Node MCU to our mobile phone using the Blynk app. By connecting to the Node MCU, we get access to control the whole process. The control signals from the Node MCU are connected to the relay module, as illustrated in the sketch below. We have created buttons in the Blynk app for switching the dosa maker on and off.
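To make the relay-switching step concrete, here is a minimal sketch in MicroPython for the ESP8266 on the Node MCU; the GPIO pin numbers and pulse duration are illustrative assumptions, and the actual robot drives the relays through the Blynk app's GPIO mapping rather than this code.

import time
from machine import Pin

relay_clean = Pin(4, Pin.OUT)   # e.g. app button gp2 -> cleaning relay
relay_make = Pin(5, Pin.OUT)    # e.g. app button gp3 -> dosa-making relay

def pulse(relay, seconds):
    relay.on()                  # energize relay coil; the PLC input goes high
    time.sleep(seconds)
    relay.off()

# Pressing "clean" in the app would end up triggering something like:
pulse(relay_clean, 2)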

Table 1 App functions in full autonomous mode

App button   Function
gp2          Cleaning dosa (making plate)
gp3          Making dosa

Table 2 App functions in semi-autonomous mode

App button   Function
gp2          Cleaning dosa (making plate)
gp3          Batter control
gp4          Oil control
gp5          Speed control
gp12         Temperature control

After connecting the Node MCU successfully to Wi-Fi, we can operate the Node MCU pins using the mobile phone. We have to make sure that the app buttons are properly mapped to the GPIO pins of the Node MCU board. Once connected to Wi-Fi, we can easily control the dosa maker using the Blynk app. Using the app, we can control the different functions of the dosa maker, like the rotation of the pan, temperature, speed, and batter, oil, and water dispensing. The app buttons and their functions for both the autonomous mode and semi-autonomous mode are shown in Tables 1 and 2. Figure 4 shows two images of the developed app. The left-side image shows the extremely simple app with only two buttons to switch the dosa maker on or off, i.e., in full autonomous mode.
Fig. 4 Images of control panel in mobile



Fig. 5 IoT control panel

In full autonomous mode, the user need not control the rotation of the pan, the temperature, the speed, or the batter, oil, and water dispensing; all are taken care of automatically. In semi-autonomous mode, the user has to use five different buttons to control the functions of the dosa maker. Figure 5 gives a pictorial description of the Blynk app we developed. When we press the clean-pan button, the pan and the water controller start at the same time, and water is sprinkled throughout the pan as it rotates. When we press the dosa-making button, dosa batter falls on the pan first and, as the pan rotates, passes under the dosa roller, which spreads the batter throughout the pan. This is followed by oil sprinkling. As one full rotation of the pan is completed, the dosa remover removes the dosa from the pan, and the user can transfer it to another container. In automatic mode, the user needs to press only the gp2 and gp3 buttons to start the process, but in semi-automatic mode, the user has to press all five buttons from gp2 to gp12. Figure 6 shows the system circuit diagram that we used in our dosa maker. As mentioned earlier, we use the open-source Arduino MCU and Node MCU hardware platforms. The Arduino platform is the main controller, and the Node MCU provides the Wi-Fi connection for IoT-based control. We use programmable logic controllers (PLCs) for controlling the motors used to rotate the plate and also for controlling the various valves: the batter, oil, and water valves. The PLC consists of many relay switches, and it is simple to supply power (AC or DC) to them at a time using a common pin; this is the main reason for using the PLC instead of other microcontrollers. The PLC programming used in our dosa maker is discussed below. The PLC program shown in Fig. 7 runs both motors as well as the water container valve simultaneously. We need to clean the pan before making dosa, so this program cleans the robot.



Fig. 6 System circuit diagram

Fig. 7 Code for cleaning of the robot

When X0 is high, both M0 and M1 are set; M0 is for the motor, and M1 is for the water container. TMR is the keyword for the timer, T0 is the timer coefficient, and K100 is the delay. Y0 and Y1 are the output indicators of M1 and M0. When the process is started, both the water container valve and the motor start at the same time and run for 100 s, so water is applied across the pan for cleaning. The program shown in Fig. 8 is for the process of making a dosa. When X2 is high, it sets M2 high. When M2 is on, timer T2 is triggered to stay on for 30 s; this means the motor rotates for 30 s. When T2 completes its 30 s, it sets timer T5 high for a 100 s delay, in parallel with "RST M2" (reset), which holds motor 2 off for a duration of 100 s. After the timer T5 completes its


After timer T5 completes its 100 s, it sets M2, which is connected to the batter and oil containers. As soon as M2 is turned on, the valves of both containers are kept open for 20 s. This process is repeated several times based on parameters such as the size and number of dosas required.
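To make this timing sequence concrete, the following is a minimal Python simulation of the relay-and-timer behavior described above; the relay names (M0, M1, M2) and delays mirror the figures, while the actual machine runs ladder logic on the PLC, so this is an illustration of the sequence only, not the deployed code.

import time

# Delays taken from the ladder programs described above (in seconds).
CLEAN_TIME = 100    # T0 (K100): pan motor and water valve run together
ROTATE_TIME = 30    # T2: pan motor rotates to spread the batter
REST_TIME = 100     # T5: motor held off (RST M2) before dispensing
DISPENSE_TIME = 20  # batter and oil valves kept open

def clean_pan():
    # X0 rung: set M0 (motor) and M1 (water valve) simultaneously.
    print("M0=1 (motor on), M1=1 (water valve open)")
    time.sleep(CLEAN_TIME)
    print("M0=0, M1=0 (T0 expired, cleaning done)")

def make_dosa(count):
    # X2 rung: rotate, rest, then open the batter and oil valves.
    for i in range(count):
        print("M2=1 (motor on)")
        time.sleep(ROTATE_TIME)    # T2 expires after 30 s
        print("M2=0 (RST M2, motor held off)")
        time.sleep(REST_TIME)      # T5 expires after 100 s
        print("batter and oil valves open")
        time.sleep(DISPENSE_TIME)
        print("valves closed, dosa", i + 1, "done")

clean_pan()
make_dosa(count=2)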

8 Conclusion In this research work, we have discussed in detail various ways of using IoT for controlling several devices. We presented one approach of using IoT to control our dosa-making robot. We developed a smartphone app on Blynk with a simple and user-friendly interface. It has only five buttons in semi-automatic mode and only two buttons in fully automatic mode and is easy for anyone to learn. The dosa machine controller program was written using PLC, and the PLC programming diagram was also presented. The working of the dosa-making robot using the IoT-based app was explained. As part of future work, we expect to evaluate our algorithm and measure the performance of the IoT-based control. We would also like to bring in multiple ways of using IoT to control the robot. Future Works: On top of that, consumers could opt for the type of dosa they want. The system could also display the amount of dosa paste, oil, and water and give an indication when a container is empty.


Acknowledgements We would like to express our sincere gratitude to Almighty God for giving us the opportunity to carry out the research in a successful manner. We extend our sincere thanks to Amrita Vishwa Vidyapeetham and the Humanitarian Technology (HuT) Laboratory for aiding us in this venture.


Classification of Idiomatic Sentences Using AWD-LSTM J. Briskilal and C. N. Subalalitha

Abstract Idioms are combinations of words with a figurative meaning distinct from the literal meanings of the individual words or expressions. Automatic detection of the meaning of idioms represents a serious challenge in language understanding, because their meaning cannot be directly retrieved from the component words; developing computational models of how humans process language is the concern of natural language processing (NLP). Idiomatic phrase identification is of utmost importance in many NLP applications such as machine translation systems, chatbots and information retrieval (IR) systems. Text classification is one of the fundamental tasks of NLP and is mostly attempted using supervised algorithms. This paper treats the identification of idioms as a text classification task. We propose a classification model that classifies idiomatic and literal sentences using the ASGD weight-dropped LSTM (AWD-LSTM) model and universal language model fine-tuning (ULMFiT) for transfer learning to fine-tune the language model. The proposed model has been evaluated using precision, recall and F1-score metrics. It has been tested with the TroFi metaphor dataset and an in-house dataset and achieved F-scores of 81.4% and 85.9%, respectively. Keywords Idiom and literal classification · Text classification · AWD-LSTM · ULMFiT

1 Introduction Idioms are among the most important aspects of the English language and are used frequently in both formal and informal conversations. Most speakers use idioms to express their ideas clearly and economically, and so idioms appear in all text genres of the language [1]. Many natural language processing (NLP) applications, such as machine translation (MT), information retrieval (IR), chatbots and summary generation, need automatic identification of idioms.

J. Briskilal (B) · C. N. Subalalitha SRM Institute of Science and Technology, Potheri, Kattankulathur, Chengalpattu District 603203, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_11


For instance, in MT, to accurately translate the sentence “An old man kicked the bucket,” its idiomatic meaning, “An old man died,” has to be automatically interpreted, whereas existing MT systems tend to translate the literal meaning, leading to erratic translation. The main problem involved in translating such idiomatic phrases is that an idiomatic phrase might sometimes actually carry its literal meaning. For instance, the phrase “The old man kicked the bucket” might sometimes reflect its literal meaning. This paper attempts to tackle this by treating it as a classification task and proposes a novel classification framework that classifies idiomatic and literal phrases using transfer learning methodologies. Example idiomatic sentences are given below:

1. Uncle Bill tells some great stories, but we take what he says with a grain of salt because he sometimes exaggerates or makes things up.
2. You might not feel the effects of smoking cigarettes while you are young, but you will definitely pay the price when you are older.
3. I will only tell you if you promise to keep it under your hat.
4. The coach said I have to pull my socks up or I will lose my spot on the team.
5. You really hit the nail on the head when you said that Gina needs to stop worrying about making mistakes when she speaks English.

Classification of idioms and literals was performed by earlier research works [2] using unsupervised clustering methods in which topics were extracted from paragraphs containing idioms and literals [3]. A simple context-encoding LSTM model has been utilized to classify the idiomatic use of infinitive-verb compounds [4]. On the other hand, transfer learning algorithms have been applied to different kinds of texts to perform text classification; a fine-tuning algorithm based on attention selects appropriate features automatically from the pre-trained language model [5, 6]. The aforementioned papers have utilized pre-trained models and various LSTM models to perform various kinds of text classification. This paper emphasizes the classification of idiomatic sentences from literal sentences using a novel transfer learning methodology based on ULMFiT and AWD-LSTM with good accuracy. The main contributions of this paper are twofold:

1. Usage of a pre-trained model to classify idiom and literal sentences.
2. Integration of the language model with AWD-LSTM for the idiomatic phrase classification task.

The rest of the paper is organized as follows: Sect. 2 describes the background, Sect. 3 the literature survey, Sect. 4 the proposed transfer learning model and experimental setup, Sect. 5 the experimental results and Sect. 6 the conclusion and future work.


2 Background In this section, the methodology used in this paper is described. Transfer learning is used for training the models and performing the classification. In deep learning, transfer (or inductive) learning is a supervised learning technique which utilizes the knowledge gained from one task and applies it to another, related task. One or more layers from the previously trained model can be reused in the new model, depending on the problem of interest. We consider transfer learning for our work because it is said to be a universal approach for NLP transfer learning for the following reasons: • It works across tasks that differ in document size, number and type of labels. • It uses a single architecture and training method. • It requires no additional in-domain documents or labels. The main advantage of transfer learning is that the fine-tuned model does not need to learn from scratch; it gives high accuracy with less computational time compared to models that do not use a transfer learning approach.

2.1 ULMFiT Universal language model fine-tuning (ULMFiT) uses new techniques to pre-train a language model (LM) on a broad general-domain corpus and fine-tune it on the target task. The procedure is universal in the sense that it satisfies these functional conditions: (1) it works across tasks that differ in document size, number and form of labels; (2) it uses a single architecture and training method; (3) it requires no customized feature engineering or pre-processing; and (4) it requires no additional in-domain documents or labels. The high-level idea of ULMFiT is to train a language model on a huge corpus such as Wikitext-103 (103 M tokens), then take this pre-trained model's encoder, join it with a custom head model (for example, for classification) and carefully fine-tune it using discriminative learning rates in multiple stages [5].

2.2 AWD-LSTM AWD-LSTM (ASGD weight-dropped LSTM) [7] is one of the most popular language models used in NLP. It uses the DropConnect method, the averaged stochastic gradient descent (ASGD) method and a few other regularization strategies.


The long short-term memory (LSTM) network is a kind of recurrent neural network (RNN) mainly used for predicting sequences of words in many NLP tasks. A language model predicts the next word in a sentence; for example, a mobile phone keyboard predicts the next word as we type a sentence. A good language model should predict the words in a sentence by understanding the semantics, the grammar and all the other elements of the language, for instance, “I ate a hot” → “dog,” “It is very hot” → “weather.”

2.3 Fast AI The Fast AI library focuses on the use and tweaking of pre-trained language models, carried out in three stages:

1. Data pre-processing is done using a minimum amount of code.
2. A language model is created with pre-trained weights so that it can be fine-tuned on the dataset.
3. A classifier model is created on top of the language model.

2.4 Algorithm for Proposed Work

Step 1: The TroFi dataset and an in-house dataset are taken as input.
Step 2: Data pre-processing is done using the Fast AI library.
Step 3: Build the ULMFiT language model.
Step 3.1: The ULMFiT model is trained using 103 M tokens of Wikitext.
Step 4: Language model fine-tuning.
Step 4.1: The pre-trained weights of the language model are used and fine-tuned with the training data of the TroFi dataset and the in-house dataset, respectively.
Step 5: Build the AWD-LSTM classifier model.
Step 5.1: The AWD-LSTM classifier is integrated on top of the language model.
Step 6: Once the model is trained, a test sentence can be given to the model for classification.
Step 6.1: If the given sentence is “idiomatic”, the model returns “positive”; else, “negative”.
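These steps map onto the Fastai v1 text API that the paper builds on. Below is a minimal sketch of Steps 2–4, assuming the dataset is available as a pandas DataFrame with Text and Label columns; the file name, column names and training schedule are our own illustrative assumptions, not values prescribed by the paper.

import pandas as pd
from fastai.text import TextLMDataBunch, language_model_learner, AWD_LSTM

# Assumed layout: a CSV with 'Text' and 'Label' (pos/neg) columns.
df = pd.read_csv('trofi.csv')
train_df, valid_df = df[:3000], df[3000:]

# Steps 2-3: pre-process the text and wrap it for language modelling.
data_lm = TextLMDataBunch.from_df('.', train_df, valid_df, text_cols='Text')

# AWD_LSTM comes with weights pre-trained on Wikitext-103.
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)

# Step 4: fine-tune the language model on the target corpus.
learn.lr_find()                # plot loss against learning rate (cf. Fig. 2)
learn.fit_one_cycle(1, 1e-2)   # train the new head first
learn.unfreeze()
learn.fit_one_cycle(3, 1e-3)   # then fine-tune all layers
learn.save_encoder('ft_enc')   # keep the encoder for the classifier stage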


3 Literature Survey This section analyzes the state-of-the-art approaches that are pertinent to the proposed work. Since the proposed work is targeted at idiom classification, which is primarily a text classification task, the literature survey has been done in two dimensions, namely existing works on idiom recognition and works on text classification techniques.

3.1 Works on Idiom Recognition: A method of automatic idiom recognition has been proposed by computing idiomatic and literal scatter matrices from the local context in the word vector space [8]. An algorithm using linear discriminant analysis (LDA) as a binary classifier has been proposed to classify idiomatic and non-idiomatic sentences, with a 3NN classifier used to enhance the accuracy [9]. The techniques of linear discriminant analysis and principal component analysis have been used for the automatic detection of idiomatic clauses [10].

3.2 Works on Text Classification The Bidirectional Encoder Representations from Transformers (BERT) model was proposed to handle three common classification tasks: sentiment analysis, question classification and topic classification [11]. Scientific documents have been classified using a BiLSTM encoder-decoder model [12]. RNN and LSTM methodologies have been used for news-based sentiment analysis [13]. The TropeFinder (TroFi) system has been used to classify literal and non-literal usage of verbs through word sense disambiguation and clustering techniques [14]. A transfer learning approach based on universal language model fine-tuning (ULMFiT) was proposed to perform six text classification tasks, including question classification, sentiment classification and topic classification [5]. Haptic feedback technology has been used to transmit tactile information to a teleoperator [15]. A hybrid decision tree and artificial neural network approach has been used to increase the efficiency of data mining in marketing organizations [16]. In sum, transfer learning has not been attempted in the above-mentioned works for the classification of idioms. The proposed work explores the behavior of transfer learning in the classification of idioms in order to check whether it performs better than the above-mentioned techniques. The next section explains the proposed work.


4 The Proposed Transfer Learning Model Experimental Setup The architecture of the proposed classification system is shown in Fig. 1. The universal language model (ULMFiT), pre-trained on 103 M Wikitext tokens, is fine-tuned with the pre-processed TroFi base dataset through transfer learning in order to create the language model. This model is further integrated with the classifier model, AWD-LSTM, to classify the given sentences as idiomatic or literal.

4.1 Experimental Setup and Analysis: We have taken the TroFi dataset [14], which consists of 3737 literal (neg) and non-literal (pos) sentences drawn from the Wall Street Journal (WSJ) Corpus Release 1. Table 1 shows a sample of the TroFi dataset.

Fig. 1 Proposed idiom classification architecture (idiom and literal sentences → ULMFiT language model → fine-tuning → AWD-LSTM classifier → idiom/literal output)


Table 1 Sample TroFi dataset

Id | Text | Label
0 | The debentures will carry a rate that is fixed but can increase based on natural gas prices | neg
1 | Last year the movies were filled with babies | pos
2 | Other magazines may survive five, 10, even 25 or 50 years and then die | pos
3 | It actually demonstrated its ability to destroy target drones in flight | neg
4 | Ever since, Banner has been besieged by hundreds of thrill-seeking callers | pos

Along with TroFi dataset, we have taken an in-house dataset which consists of 600 idiomatic sentences (pos) and 400 literal sentences (neg) which are annotated by three domain experts.

4.2 Building Language Model By using the ULMFiT model [5], learning is done more quickly, as the model is already pre-trained and updating such a model demands less computational complexity. Such models can also learn well even from a limited number of training examples. Fastai is a deep learning library that ships a Wikitext language model pre-trained on a pre-processed subset of 103 million tokens drawn from Wikipedia. It is a model that knows a lot about language and a lot about what language represents. The next step is to fine-tune this model via transfer learning to construct a new language model that is explicitly good at predicting the next word.

4.3 Language Model Fine-Tuning This is the main training process, where we use the weights of the pre-trained language model and fine-tune it with the training data of the TroFi dataset. One of the most important hyper-parameters used to train the model is the learning rate. Fastai provides the convenient utility learner.lr_find to search through a range of learning rates and find the optimum one for our dataset. Here, the learning rate is plotted against the loss (Fig. 2).


Fig. 2 Learning rate is plotted against loss

4.4 Building Classifier Model To build the classifier, we need to create a data bunch from TextClasDataBunch, passing the vocab from the language model to ensure that this data bunch has precisely the same vocabulary. The batch size bs to be utilized depends on the GPU memory available; in this paper, we have used bs = 64, which works fine. The text classifier learner is then created and loaded with our pre-trained model. Again, the learning rate is noted before and after unfreezing the model (Fig. 3). After unfreezing the model, we obtained an accuracy of 0.768 for the TroFi dataset (Figs. 4 and 5).
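Continuing the sketch given after Sect. 2.4, the classifier stage might look as follows. The bs = 64 setting and the vocab hand-off follow the description above, while the learning-rate schedule is an illustrative assumption; train_df, valid_df and data_lm are assumed to be in scope from the earlier sketch.

from fastai.text import TextClasDataBunch, text_classifier_learner, AWD_LSTM

# Reuse the language-model vocabulary so that token ids line up.
data_clas = TextClasDataBunch.from_df('.', train_df, valid_df,
                                      vocab=data_lm.train_ds.vocab,
                                      text_cols='Text', label_cols='Label',
                                      bs=64)

clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf.load_encoder('ft_enc')      # encoder fine-tuned in Sect. 4.3

clf.fit_one_cycle(1, 2e-2)      # train the classification head first
clf.freeze_to(-2)               # gradually unfreeze lower layers
clf.fit_one_cycle(1, slice(1e-2 / 2.6 ** 4, 1e-2))
clf.unfreeze()
clf.fit_one_cycle(2, slice(1e-3 / 2.6 ** 4, 1e-3))

print(clf.predict("An old man kicked the bucket"))  # e.g. ('pos', ...)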

5 Experimental Results The proposed model has been tested using the TroFi dataset and an in-house dataset which we have created and annotated. The TroFi dataset has a total of 3737 sentences, comprising 2145 idiom sentences and 1592 literal sentences. Our in-house dataset has a total of 1000 sentences, comprising 600 idiom sentences and 400 literal sentences. The F1-score has been used as the evaluation metric. Figure 6 shows the evaluation results.

Fig. 3 Learning rate is plotted against loss for classifier model

Fig. 4 Accuracy results before freezing the model

Fig. 5 Accuracy result after unfreezing the model

Fig. 6 Evaluation results

The precision and recall are calculated based on three factors, namely true positive (TP), false positive (FP) and false negative (FN). The precision and recall equations are shown below:

Precision = TP/(TP + FP)    (1)

where TP represents the count of idioms that are correctly classified as idioms, and FP represents the count of literals that are incorrectly classified as idioms.

Recall = TP/(TP + FN)    (2)

where FN represents the count of idioms that are incorrectly classified as literals. Confusion matrices for these precision and recall factors are shown in Figs. 7 and 8.

Fig. 7 Confusion matrix for TroFi dataset

Fig. 8 Confusion matrix for in-house dataset


It can be observed from Fig. 6 that the precision achieved on the TroFi dataset is 0.763, while the in-house dataset yielded a precision of 0.87. This is due to the fact that more false positives are accounted for on TroFi, i.e., many literals are misclassified as idioms, since the literal sentences present in the TroFi dataset contain many phrasal verbs. On the other hand, better precision was achieved on the in-house dataset due to the presence of fewer phrasal verbs. It can also be observed that the recall achieved on both datasets is far better than the precision values. This is because the number of false negatives produced by the AWD-LSTM used in the proposed idiom classification is very small.
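As a worked example of Eqs. (1) and (2), the snippet below computes precision, recall and F1 from confusion-matrix counts; the counts used here are hypothetical placeholders, not the values of Figs. 7 and 8.

def precision_recall_f1(tp, fp, fn):
    # Eqs. (1) and (2): precision = TP/(TP+FP), recall = TP/(TP+FN)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only.
tp, fp, fn = 500, 155, 30   # idioms correct, literals mislabelled, idioms missed
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")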

6 Conclusion and Future Work In this paper, we have presented a classification model for idioms and literals using transfer learning and AWD-LSTM. Automatically spotting idioms and segregating them from their literal counterparts is essential for building more accurate NLP applications such as machine translation, information retrieval (IR) and question answering systems. The proposed system has been tested using the TroFi and in-house datasets, and the experiments have been analyzed using F-score values. It was observed from the experiments that the proposed approach misclassifies many “literals” as “idioms”, whereas in the reverse scenario most idioms are correctly classified as “idioms”. Though the dataset is cited as one of the reasons behind this, these misclassifications can be reduced if the AWD-LSTM architecture is tuned further. Also, transfer learning methods may give more optimistic results for idiom-literal classification.

References

1. Sag, I.A., et al.: Multiword expressions: a pain in the neck for NLP. In: International Conference on Intelligent Text Processing and Computational Linguistics. Springer, Berlin, Heidelberg (2002)
2. Peng, J., Feldman, A., Vylomova, E.: Classifying idiomatic and literal expressions using topic models and intensity of emotions (2018). arXiv preprint arXiv:1802.09961
3. Dinh, D., Erik-Lân, Eger, S., Gurevych, I.: One size fits all? A simple LSTM for non-literal token and construction-level classification. In: Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature (2018)
4. Tao, Y., et al.: FineText: text classification via attention-based language model fine-tuning (2019). arXiv preprint arXiv:1910.11959
5. Howard, J., Ruder, S.: Universal language model fine-tuning for text classification (2018). arXiv preprint arXiv:1801.06146
6. Rother, K., Rettberg, A.: ULMFiT at GermEval-2018: a deep neural language model for the classification of hate speech in German tweets, pp. 113–119 (2018)
7. Merity, S., Keskar, N.S., Socher, R.: Regularizing and optimizing LSTM language models (2017). arXiv preprint arXiv:1708.02182


8. Peng, J., Feldman, A.: Automatic idiom recognition with word embeddings. In: Information Management and Big Data, pp. 17–29. Springer, Cham (2015)
9. Peng, J., Feldman, A., Street, L.: Computing linear discriminants for idiomatic sentence detection. Res. Comput. Sci. Spec. Iss. Nat. Lang. Process. Appl. 46, 17–28 (2010)
10. Feldman, A., Peng, J.: Automatic detection of idiomatic clauses. In: International Conference on Intelligent Text Processing and Computational Linguistics. Springer, Berlin, Heidelberg (2013)
11. Sun, C., et al.: How to fine-tune BERT for text classification? In: China National Conference on Chinese Computational Linguistics. Springer, Cham (2019)
12. Calzolari, N., et al. (eds.): Proceedings of the 12th Language Resources and Evaluation Conference (2020)
13. Souma, W., Vodenska, I., Aoyama, H.: Enhanced news sentiment analysis using deep learning methods. J. Comput. Soc. Sci. 2(1), 33–46 (2019)
14. Birke, J., Sarkar, A.: A clustering approach for nearly unsupervised recognition of nonliteral language. In: 11th Conference of the European Chapter of the Association for Computational Linguistics (2006)
15. Manoharan, S., Ponraj, N.: Precision improvement and delay reduction in surgical telerobotics. J. Artif. Intell. 1(01), 28–36 (2019)
16. Kumar, T.S.: Data mining based marketing decision support system using hybrid machine learning algorithm. J. Artif. Intell. 2(3), 185–193 (2020)

Developing an IoT-Based Data Analytics System for Predicting Soil Nutrient Degradation Level G. Najeeb Ahmed and S. Kamalakkannan

Abstract Globally, agriculture is a key economic sector, and it occupies a major part of India's socio-economic structure. Parameters such as soil and rainfall play a major role in agriculture. Farmers usually have the mindset of planting the same crop, using more fertilizers and following public choice. In agriculture, crop productivity can be increased by incorporating new technologies. The most commonly used smart farming technology, the Internet of Things (IoT), has the capacity to process generous quantities of data from connected devices. In the recent past, there have been major developments in the utilization of machine learning (ML) in various industries and research. For this reason, ML techniques are considered the best choice for agriculture and are evaluated here to predict crop production for the future year. In this paper, the proposed system uses IoT devices to periodically gather information such as soil nutrient level, atmospheric temperature, season, soil type, fertilizer used and water pH level. Further, the data gathered from the sensors is passed to principal component analysis (PCA), which is used to reduce features in order to obtain a better prediction level. Also, ML algorithms such as linear regression (LR), decision trees (DT) and random forest (RF) are implemented to forecast and classify the crop yield from previous data based on the soil nutrient degradation level and to recommend a suitable fertilizer for every particular crop. Keywords Internet of things (IoT) · Soil nutrient · Crop yield · Agriculture

1 Introduction In a country like India, agriculture is the most prominent sector and provides major income to the country. Around 60% of our country's land is used for agriculture, and more than 50% of the population depends on it.

G. Najeeb Ahmed · S. Kamalakkannan (B) Department of Computer Science, School of Computing Sciences, Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_12


In developing countries like India, agricultural automation can help to achieve an effective yield and reduce human intervention. In the agriculture sector, many techniques and novel technological inventions have slowly degraded because many inventors are more focused on building artificial products that lead to a diseased life. At present, people are not aware of cultivating crops at the right time and place. Therefore, food uncertainty arises due to the seasonal climatic conditions surrounding cultivation, and techniques change with respect to the basic assets such as air, water and soil. Nowadays, IoT is incorporated in various sectors. Being an agricultural country, India needs innovations in the field of agriculture. IoT has the ability to transfer information with unique identifiers based on the interconnection of electronic networks such as physical devices and artifacts on a network, without requiring user-to-computer or human-to-human contact. Many researchers in the agriculture field have recommended ML to predict plant type using IoT-based systems. For remote monitoring of soil properties, an IoT-based system is required. IoT can collect, send and process data from all web-enabled devices, which is often referred to as the Internet of Everything (IoE). Data can be acquired by processors, embedded sensors or communication hardware from the external environment. The data gathered from the sensors, along with the proposed framework, assists in making correct decisions, and further, the information stored on the server is analyzed using ML algorithms. The sensors sense data related to crop yield through varying factors such as soil moisture, humidity, soil temperature and pH quality. These parameters have a direct influence on crop development and are managed by IoT systems used to predict the crop species until a predictive decision is taken and forwarded to the end user. The utilization of sensor data based on farming activities in the years 2016–2019 is illustrated in Fig. 1.

Fig. 1 Utilization of sensor data based on farming activity (percentage of sensor data per farming activity, 2016–2019)


This intervention guides the end user through the further process. The data can then be passed to feature selection, otherwise known as variable or attribute selection, in which a routine selection of attributes is performed. Fewer features are desirable because the model then has lower complexity. ML is an area in the field of computer science that has been an emerging technology based on artificial intelligence (AI). It focuses on programming systems that can be developed to make data-based decisions and predictions. The crop is predicted based on ML algorithms, which offer a better yield. This paper makes an effort to analyze the different ML and IoT techniques related to the agricultural sector in order to gather soil nutrient level data, atmospheric data and weather data. In this research work, a combination of ML algorithms is developed to predict the degradation in soil nutrient levels based on the past and current levels of nutrients in the soil, the past and current levels of nutrients in the atmosphere, past and current weather conditions and the nutritional requirements of the planned crop at different stages of growth. The organization of this paper is as follows. Section 2 describes the related survey regarding the technique-based methodological contributions from existing work, Sect. 3 defines the proposed methodology for predicting the degradation level of soil, and finally, Sect. 4 concludes the proposed work.

2 Literature Review Bondre and Mahagaonkar [1] proposed crop yield prediction with the main aim of creating a prediction model for future crop yield, evaluated using ML techniques. Manjula et al. [2] investigate soil nutrient supplements such as iron, zinc, calcium, nitrogen, potassium, magnesium and sulfur using ML techniques, namely decision tree (DT), Naïve Bayes (NB) and a hybrid of NB and DT. The evaluation metrics of time and accuracy are compared across various classification algorithms. Rajak et al. [3] described techniques such as support vector machine (SVM) and ANN, evaluated using metrics such as efficiency and accuracy and applied to a soil testing laboratory dataset to recommend a crop. This approach obtains the soil database from the farm and the crop from field experts. The major disadvantage of this application is the low security it offers for the field and even for the yield. The system proposed by Tatapudi and Varma [4] implements ML algorithms that integrate sensors for soil temperature, soil pH, soil humidity, moisture and rainfall and analyzes the data from the IoT devices. To improve the productivity of smart farming based on these techniques, this work provides better forecasting for farmers to recommend a crop for their field. In the proposed methodology, a Kalman filter (KF) with predictive processing collects consistent, noise-free data for cluster-based wireless sensor networks (WSNs) to relay the data.


The DT method, otherwise known as predictive analytics, is based on weather forecasting, seed identification and classification, and prediction of crop field and disease for the choice of learning. This platform integrates IoT components, including a cube-based IoT gateway, and Mobius to provide a crop growth monitoring solution to consumers [5]. Chlingaryan et al. [6] discussed technological developments of the past 15 years in ML techniques for accurate crop yield prediction. Rapid advancements in sensing technologies and ML techniques suggest that estimation of nitrogen status can offer cost-effective and comprehensive results for better yield based on the environmental situation. The drawback of this paper is that the sensor platform is less optimized for the targeted application. Devi Prasanna and Rani [7] explained the process of IoT-based remote monitoring for agriculture applications. Rao and Sridhar [8] proposed crop field monitoring for agriculture applications, sensing soil moisture, light, temperature and humidity. The data collected from the sensors is sent to a microcontroller, here an Arduino board, and stored in a database through wireless transmission. The drawbacks of this paper are the high complexity of the hardware and significantly lower efficiency. Patil et al. [9] propose papers that give a rough idea of using ML with only one attribute, which can improve yields, and several patterns are recognized for prediction. This system is useful to justify which crop can be grown in a particular region. Oliveira et al. [10] present users with the ability to make strategic improvements, such as selecting a more stable genetic variety before planting, or even modifying the crop type, to manage significant climatic differences that come forward again in the crop cycle. In [11], wheat crop safety was tracked using near-surface imagery captured by a smartphone. The crop was computationally categorized as safe or unhealthy based on the green level. In the case of precision agriculture, applications using sensors rely completely on either IoT-based sensing or remote sensing: IoT-based sensing utilizes multiple sensors for assessing crop health, whereas remote sensing-based applications perform computations over spectral images to assess crop health. A study on a framework for tracking the efficiency of indoor microclimate gardening is addressed by Kaburuan et al. [12]; an IoT board of electronic sensors is deployed to track the growing process. Prado Osco et al. [13] present a framework for predicting nutrient content from leaf hyperspectral measurements based on ML algorithms. The findings suggest that surface reflectance data is more appropriate for predicting macronutrients for Valencia orange leaves, whereas first-derivative spectra are more related to micronutrients. Balducci et al. [14] describe the design and implementation of realistic tasks, ranging from crop harvest forecasting to the restoration of data from lost or inaccurate sensors, leveraging and evaluating different ML approaches to determine where resources and investments should be focused. The PA concept is the product of rapid advances in IoT and cloud computing models, which incorporate context awareness and real-time measures [15]. Wolfert et al. [16] and Biradar et al. [17] describe analyses of smart-farm sectors, while multi-discipline applications leveraging IoT sensors are explored in the works of [18, 19].
The drawback of these works is that they provide a less secure system for the field as well as for the yield. Table 1 presents a comparative study of various IoT-based agriculture technologies.

Table 1 Comparative study of predicting agriculture using IoT

Arvind et al. (2017) [23]. Technologies used: Mobile technology. IoT sub-verticals: 1. Pest controlling 2. Weather monitoring. Data collection: 1. Soil moisture 2. Temperature 3. Water level. Drivers of IoT: 1. To avoid crop failure, anticipate and resolve drought conditions 2. Continue to monitor climatic conditions.

Mathew et al. (2017) [24]. Technologies used: 1. Raspberry Pi 2. Wi-Fi. IoT sub-verticals: Nutrient management. Data collection: 1. Temperature in the climate 2. Phosphorus level 3. Nitrogen stage 4. Humidity degree. Drivers of IoT: Can enhance the fertilizer amount (1. if factors were observed 2. enhanced the volume of fertilizer). Solution for current issues: Interfacing different soil nutrient sensors.

Rajakumar et al. (2017) [25]. Technologies used: Mobile technology. IoT sub-verticals: Crop production. Data collection: 1. Soil level 2. Soil nutrient. Drivers of IoT: 1. Improve the harvest 2. Control the costs of the agricultural product. Solution for current issues: Use of a decision-making algorithm.

Pooja et al. (2017) [26]. Technologies used: 1. Raspberry Pi 2. Mobile technology 3. Wi-Fi. IoT sub-verticals: 1. Weather monitoring 2. Precision farming. Data collection: 1. Temperature 2. Moisture 3. Soil 4. Light intensity 5. Vapor. Drivers of IoT: 1. Crop yield increased 2. Reduced consumption of crops. Solution for current issues: Excess water from the area of agriculture was added to the tank.

Jawahar et al. (2017) [27]. Technologies used: ZigBee. IoT sub-verticals: 1. Crop management 2. Water management. Data collection: 1. Temperature 2. Humidity 3. Soil moisture 4. Water level. Drivers of IoT: Update the farmers with the field's living condition. Solution for current issues: 1. Low cost 2. Efficient crop development 3. Plants grow faster.

Tran et al. (2017) [28]. Technologies used: Wi-Fi. IoT sub-verticals: 1. Crop management 2. Nutrient detection. Data collection: 1. Temperature 2. Wetness 3. Rainfall 4. Soil moisture 5. Light intensity. Drivers of IoT: 1. Reduce the consumption of energy 2. Increase the number of sensors. Solution for current issues: 1. Reduced use of electricity 2. Must adapt to environmental and soil changes.

Viswanathan et al. (2017) [29]. Technologies used: 1. ZigBee 2. Raspberry Pi. IoT sub-verticals: 1. Crop management 2. Warehouse management. Data collection: 1. Temperature 2. Humidity. Drivers of IoT: Smart warehouse management. Solution for current issues: 1. Costs eliminated 2. More contact with people 3. Strong trustworthiness 4. Increased production of crops.

Janani et al. (2017) [30]. Technologies used: 1. Wi-Fi 2. Raspberry Pi. IoT sub-verticals: 1. Soil management 2. Nutrient detection. Data collection: Soil measures: soil pH, soil humidity, soil temperature. Drivers of IoT: Soil management with the consideration of nutrient, fertilization. Solution for current issues: Reduces the question of finding the correct crop for the region.


3 Proposed Method The proposed work emphasizes predicting the degradation level of soil based on the effective usage of IoT devices and the choice of learning for choosing the crop. This system analyzes soil nutrient levels based on various conditions, including the past and current levels of nutrients present in the soil, the current and past levels of nutrients present in the atmosphere, past and current weather conditions (local and global) from meteorological data and details of the nutrient requirements of the planned crop during every stage of its growth; the nutrient data will be gathered using appropriate sensors. The gathered data will be passed to feature selection to obtain a better prediction level, and classification with ML algorithms has to predict the soil nutrient level degradation, i.e., whether it is right for the farmer to start cultivating that crop. The main aim of this paper is to develop a method which can forecast the type of crop depending on soil properties and weather and can recommend an appropriate fertilizer for every specific crop. The proposed block diagram is shown in Fig. 2. There are three main components used in the system:

1. IoT device—radio frequency identification (RFID)
2. Feature reduction using principal component analysis (PCA)
3. ML algorithm for prediction

Fig. 2 Flowchart for the proposed method

In order to increase the crop's production rate, based on this study, the farmer has to decide on the process of choosing the finest crop for a certain soil.

3.1 IoT Devices Following the exponential growth of IoT systems and WSN applications, different communication standards have been implemented during the last few decades. Each protocol has its own requirements based on bandwidth, battery timing, the quantity of free channels, data rate, cost and other issues [20]. Sensors such as light, soil moisture, soil humidity and soil temperature sensors are used in IoT-based smart agriculture to monitor the plant field and improve the watering system. Farmers thus have the capacity to track field conditions from almost anywhere. This device can detect soil micronutrient levels using a soil moisture sensor and monitor the weather details based on soil nutrient level, fertilizer control, water pH and temperature for a specific plant. All these devices are used to track the plant as well as to produce the information. The most widely used wireless networking protocol in IoT-based agricultural applications is as follows: RFID RFID systems are otherwise known as RF tags. An RFID system contains two components, namely a transponder and a reader, operating at very low radio frequency. The tag has reading characteristics with unique information and is programmed electronically. Two tag systems are available in RFID technologies, namely (i) the active reader tag system and (ii) the passive reader tag system. Active reader tag systems utilize high-frequency signals with high power consumption and are highly expensive, whereas passive reader tags have low power consumption. In [21], an RFID-based smart agriculture network based on IoT has been implemented. Soil nutrient level, temperature, soil moisture and water pH data obtained by sensor readings were included in the device, and these readings were sent to the cloud through RFID communication protocols. The sensors sense data from the crop yield through varying factors such as soil moisture, humidity, soil temperature and pH quality. These parameters are stored by IoT systems so that plant varieties with a direct impact on crop growth can be forecast after a predictive decision is made and forwarded to the end user for further intervention. This intervention will guide the end user through the further process.
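To illustrate how such periodic readings could be packaged and pushed to a cloud endpoint, here is a minimal Python sketch; the broker address, topic and field names are hypothetical, and the system described in [21] transfers its readings over RFID communication protocols rather than MQTT.

import json
import time
import paho.mqtt.client as mqtt  # generic MQTT client, used for illustration

BROKER = "broker.example.com"    # hypothetical broker address
TOPIC = "farm/field1/readings"   # hypothetical topic name

def read_sensors():
    # Stub for the field sensors described above; real readings would come
    # from the soil moisture, temperature, pH and nutrient sensors.
    return {
        "soil_moisture": 31.2,      # percent
        "soil_temperature": 24.8,   # degrees Celsius
        "water_ph": 6.7,
        "nutrient_n": 140, "nutrient_p": 38, "nutrient_k": 190,  # kg/ha
        "timestamp": time.time(),
    }

client = mqtt.Client()
client.connect(BROKER)
client.publish(TOPIC, json.dumps(read_sensors()))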


3.2 Feature Reduction Using Principal Component Analysis (PCA) In this paper, the PCA technique is used to minimize dimensionality by embedding the data into a low-dimensional linear subspace. The aim of PCA is to capture the data along the directions of highest variance [20]. New features are generated by combining the existing features linearly [22]. Instances of the dataset in the original x-dimensional space are mapped to a y-dimensional subspace, where y is less than x. The set of y new dimensions generated is called the principal components (PC). Each PC is directed toward the highest remaining variance, excluding the variance already identified in all of its previous components. Therefore, the first component covers the greatest variance, and each component that follows covers a smaller value of variance. In this proposed work, the data gathered from the sensors is passed to PCA, which is used to reduce features in order to obtain a better prediction level. Several weather-related attributes are available, and PCA is applied for dimension reduction, choosing attributes such as humidity, temperature, moisture and rainfall. These significant features are extracted, and then the ML algorithms are implemented to predict and classify the degradation level of the soil and choose the crop based on the soil nutrient level.
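A minimal sketch of this reduction step with scikit-learn is shown below; the sensor matrix and the choice of two principal components are illustrative assumptions, not values fixed by this paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical sensor matrix: rows = readings, columns =
# [humidity, temperature, moisture, rainfall, water_ph, soil_n]
X = np.array([
    [62.0, 24.8, 31.2, 2.1, 6.7, 140.0],
    [55.0, 27.3, 22.5, 0.0, 6.9, 120.0],
    [71.0, 22.1, 38.4, 5.6, 6.5, 150.0],
    [48.0, 29.0, 18.9, 0.0, 7.1, 110.0],
])

# Standardize first: PCA directions are driven by variance, so features
# on larger scales would otherwise dominate the components.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)             # keep the two strongest components
X_reduced = pca.fit_transform(X_std)

print(pca.explained_variance_ratio_)  # variance covered by each PC
print(X_reduced)                      # features fed to the ML classifiers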

3.3 ML Algorithm for Prediction ML is a common tool used to resolve agricultural issues. It is used to derive useful classifications and movement patterns in the study of broad datasets. The ultimate purpose of the ML framework is to extract knowledge from a collection of data and transform it into a comprehensible structure for further use. Fertilizer recommendation can be done using fertilizer data, crop data and location data; in this part, suitable crops and the required fertilizer for each crop are recommended. The system can also display weather information, temperature, humidity, atmospheric pressure and an overall description. In Fig. 3, sensors are mounted on the farm to detect soil humidity, soil moisture, rainfall, soil temperature and pH-related data, and ML algorithms are used to classify the sensed information. The results of the forecast demonstrate which soil is suitable for various crops as well as the soil quality.

Fig. 3 Block diagram of ML algorithm for prediction (input/IoT data → data processing → train/test split → ML model → prediction of crop)

4 Linear Regression (LR) This study considers several ML algorithms for predicting the crop; one of the most basic regression algorithms is LR.


An LR model with a single independent variable is said to be simple LR, whereas a model with two or more independent variables is multiple LR. The variables used for predicting the value of the dependent variable are called the independent (predictor) variables. Hence, an LR model consisting of multiple predictor variables is known as a multiple LR model. The expression in Eq. (1) illustrates a model with two predictor variables x1 and x2:

Y = β0 + β1x1 + β2x2 + ε    (1)

where β0, β1 and β2 are the LR coefficients and x1, x2 are the independent variables. The crop can be predicted accurately using crop prediction variables based on several dependent factors, namely soil nutrients, historical crop production and weather. Because these variables are location dependent, the user's location needs to be supplied as input to the IoT system. Hence, the IoT system collects the soil properties from the soil repository for the respective area in accordance with the current user location. Similarly, the weather parameters are extracted from the weather datasets, and the crop gets cultivated only when a suitable condition is satisfied. These involve the major parameters associated with weather and soil. Moreover, these constraints are compared, and suitable crops are verified using the LR model by predicting the crops. The prediction depends on the previous history of crop production, which assists in finding the concrete weather and soil parameters; comparing them with present conditions aids in predicting the crop with high accuracy. The benefit of using the LR algorithm is crop prediction: users are given various crop suggestions along with a discussion of crop duration. This is achieved by applying ML algorithms like LR on agriculture data, which recommends a fertilizer suitable for every particular crop.
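As a sketch of Eq. (1) in code, the following fits a two-predictor linear regression with scikit-learn; the predictor choices and yield figures are invented for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: x1 = soil nitrogen (kg/ha), x2 = rainfall (mm).
X = np.array([[120, 850], [140, 900], [100, 600], [160, 1100], [130, 750]])
y = np.array([2.8, 3.4, 2.1, 4.0, 3.0])   # crop yield in t/ha (made up)

model = LinearRegression().fit(X, y)       # estimates beta0, beta1, beta2
print("beta0 =", model.intercept_)
print("beta1, beta2 =", model.coef_)

# Predict the yield for a new field: Y = beta0 + beta1*x1 + beta2*x2
print("predicted yield:", model.predict([[135, 820]]))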


5 Decision Tree (DT) The DT is a classification algorithm and one of the basic ML models; it is a simple learner that operates like an educated decision-making process. The model is generated from data or investigations based on various parameters, and its design focuses on identifying general rules from experienced examples. The DT handles two dissimilar tasks according to whether the target variable is continuous or discrete: in the case of the crop, a classification tree assists, whereas continuous targets call for a regression tree.

6 Random Forest (RF) RF is a supervised learning algorithm. RF constructs and combines multiple DTs to achieve a more accurate and stable prediction. Instead of searching for the most significant feature among all features when splitting each node, RF searches for the best feature within a random subset of features. This produces a paradigm that is more robust through large diversity. Within this algorithm, only specific features are considered for node separation. The trees can be made more dynamic by using arbitrary thresholds for each feature instead of searching for the best possible thresholds. The RF training approach employs the general bootstrap strategy of combining, or bagging, applied to tree learners.
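A short sketch comparing the two tree-based classifiers on a reduced feature set might look as follows; the crop labels and sample values are purely illustrative assumptions.

from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical samples: [humidity, temperature, moisture, rainfall]
X = [[62, 24.8, 31.2, 2.1], [55, 27.3, 22.5, 0.0],
     [71, 22.1, 38.4, 5.6], [48, 29.0, 18.9, 0.0]]
y = ["rice", "millet", "rice", "millet"]   # suitable crop per soil sample

dt = DecisionTreeClassifier().fit(X, y)
# Bagging plus random feature subsets at each split, as described above.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=0).fit(X, y)

sample = [[60, 25.0, 30.0, 1.5]]
print("DT:", dt.predict(sample), " RF:", rf.predict(sample))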

7 Conclusion This paper presents an IoT-based approach to predict the level of soil nutrient degradation based on the past and current levels of micronutrients, crop information, fertilizers and past and current weather predictions, in relation to the planned crop's nutritional requirements. In the past, farmers suffered heavy losses from poor-quality produce due to degraded nutrient levels in the soil. Nutrient levels may degrade due to various factors, including choosing the wrong crop without computing the later nutrient levels, which will affect the yield. This proposed research work aims at developing an IoT-based system that takes into account the various factors that impact soil nutrient levels and, in turn, the yield from the planned crop. This alerts the farmer to choose an alternative crop to avoid a loss. This paper proposes a system to predict crop yield from previous data by applying ML algorithms like LR, decision tree and random forest, which recommend a suitable fertilizer for every particular crop. The prediction of crop yield based on location and the proper implementation of the algorithms has proved that a higher crop yield can be achieved. From the above work, it is concluded that for soil classification, random forest, linear regression and decision tree achieve better accuracy compared to other ML algorithms. The work can be extended further


to add the following functionality. A mobile application can be built to help farmers by letting them upload images of their farms; crop diseases can then be detected using image processing, and the user can obtain pesticide recommendations based on the disease images. A smart irrigation system can also be implemented on farms to obtain a higher yield.

References 1. Bondre, D.A., Mahagaonkar, S.: Prediction of crop yield and fertilizer recommendation using machine learning algorithms. Int. J. Eng. Appl. Sci. Technol. 4(5), 371–376 (2019). ISSN No. 2455-2143 2. Manjula, E., Djodiltachoumy, S.: Data mining technique to analyze soil nutrients based on hybrid classification. IJARCS 8, 505–510 (2017) 3. Rajak, R.K., Pawar, A., Pendke, M., Shinde, P., Rathod, S., Devare, A.: Crop recommendation system to maximize crop yield using machine learning. IRJET 12 (2017) 4. Tatapudi, A., Suresh Varma, P.: Prediction of crops based on environmental factors using IoT and machine learning algorithms. Int. J. Innov. Technol. Explor. Eng. 9(1) (2019). ISSN: 2278-3075 5. Patil, S., Sakkaravarthi, R.: Internet of things based smart agriculture system using predictive analytics. Asian J. Pharmaceut Clin Res 10(13), 148–52 (2017). https://doi.org/10.22159/ajpcr. 2017.v10s1.19601 6. Chlingaryan, A., Sukkarieh, S., Whelan, B.: Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: a review. Comput. Electron. Agric. 151, 61–69 (2018). ISSN 0168-1699 7. Prasanna, V.N.D., Kezia Rani, B.: A novel IOT based solution for agriculture field monitoring and crop prediction using machine learning. Int. J. Innov. Res. Sci. Eng. Technol. 8(1) (2019) 8. Rao, R.N., Sridhar, B.: IoT based smart crop-field monitoring and automation irrigation system. In: 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, pp. 478–483 (2018) 9. Patil, P., Panpati, V.: Crop prediction system using machine learning algorithms. IRJET 7(2) (2020) 10. Oliveira, I., Cunha, R.L.F., Silva, B., Netto, M.A.S.: A scalable machine learning system for pre-season agriculture yield forecast. IEEE (2018). https://doi.org/10.1109/eScience.2018. 00131 11. Hufkens, K., Melaas, E.K., Mann, M.L., Foster, T., Ceballos, F., Robles, M., Kramer, B.: Monitoring crop phenology using a smartphone based near-surface remote sensing approach. Agric. For. Meteorol. 265, 327–337 (2019) 12. Kaburuan, E.R., Jayadia, Harisno, R.J.: A design of IoT-based monitoring system for intelligence indoor micro-climate horticulture farming in Indonesia. In: International Conference on Computer Science and Computational Intelligence, ScienceDirect, pp. 459–464 (2019) 13. Osco, L.P., et al., A machine learning framework to predict nutrient content in valencia-orange leaf hyperspectral measurements, Remote Sens. (2020) 14. Balducci, F., Impedovo, D., Pirlo, G.: Machine learning applications on agricultural datasets for smart farm enhancement. Machines (2018) 15. Sundmaeker, H., Verdouw, C., Wolfert, S., PrezFreire, L., Internet of Food and Farm 2020: Digitizing the Industry—Internet of Things Connecting Physical, Digital and Virtual Worlds, vol. 2. River Publishers, Gistrup, Denmark (2016). 16. Wolfert, S., Ge, L., Verdouw, C., Bogaardt, M.-J.: Big data in smart farming a review. Agric. Syst. 153, 69–80 (2017)

Developing an IoT-Based Data Analytics System …

137

17. Biradar, H.B., Shabadi, L.: Review on IoT based multidisciplinary models for smart farming. In: Proceedings of the 2nd IEEE International Conference on Recent Trends in Electronics, Information Communication Technology (RTEICT), Bangalore, India, 19–20 May 2017, pp. 1923–1926
18. Ramya, R., Sandhya, C., Shwetha, R.: Smart farming systems using sensors. In: Proceedings of the 2017 IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR), Chennai, India, 7–8 April 2017, pp. 218–222
19. Yoon, C., Huh, M., Kang, S.G., Park, J., Lee, C.: Implement smart farm with IoT technology. In: Proceedings of the 20th International Conference on Advanced Communication Technology (ICACT), Chuncheon-si Gangwon-do, Korea, 11–14 February 2018, pp. 749–752
20. Al-Sarawi, S., Anbar, M., Alieyan, K., Alzubaidi, M.: Internet of Things (IoT) communication protocols. In: Proceedings of the 2017 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017, pp. 685–690
21. Wasson, T., Choudhury, T., Sharma, S., Kumar, P.: Integration of RFID and sensor in agriculture using IoT. In: Proceedings of the 2017 International Conference on Smart Technologies for Smart Nation (SmartTechCon), Bangalore, India, 17–19 August 2017, pp. 217–222
22. Doraikannan, S., Selvaraj, P., Burugari, V.K.: Principal component analysis for dimensionality reduction for animal classification based on LR. Int. J. Innov. Technol. Explor. Eng. 8(10) (2019). ISSN 2278-3075
23. Arvind, G., Athira, V., Haripriya, H., Rani, R., Aravind, S.: Automated irrigation with advanced seed germination and pest control. In: IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR) (2017)
24. Rau, A., Sankar, J., Mohan, A., Das Krishna, D., Mathew, J.: IoT based smart irrigation system and nutrient detection with disease analysis. In: IEEE Region 10 Symposium (TENSYMP) (2017)
25. Rajeswari, S., Suthendran, K., Rajakumar, K.: A smart agricultural model by integrating IoT, mobile and cloud-based big data analytics. In: International Conference on Intelligent Computing and Control (I2C2) (2017)
26. Pooja, S., Uday, D., Nagesh, U., Talekar, S.: Application of MQTT protocol for real time weather monitoring and precision farming. In: International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT) (2017)
27. Roselin, A., Jawahar, A.: Smart agro system using wireless sensor networks. In: International Conference on Intelligent Computing and Control Systems (ICICCS) (2017)
28. Maia, R., Netto, I., Tran, A.: Precision agriculture using remote monitoring systems in Brazil. In: IEEE Global Humanitarian Technology Conference (GHTC) (2017)
29. Mekala, M., Viswanathan, P.: A novel technology for smart agriculture based on IoT with cloud computing. In: International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (ISMAC) (2017)
30. Ananthi, N., Divya, J., Divya, M., Janani, V.: IoT based smart soil monitoring system for agricultural production. In: IEEE Technological Innovations in ICT for Agriculture and Rural Development (TIAR) (2017)

A Survey on Cloud Resources Allocation Using Multi-agent System Fouad Jowda and Muntasir Al-Asfoor

Abstract Cloud computing has become a new generation of technology with great potential for markets and companies. It provides resources as a service that can be accessed and consumed in a transparent and flexible manner. Cloud customers need a range of services according to their dynamically evolving needs, so it is the task of the cloud computing framework to facilitate the provision of the services its users require. One of the main challenges is improving the resource allocation process, a difficult task given the growth of clouds in size, power consumption, and complexity. The goal of this paper is to survey the allocation of cloud resources using multi-agent systems, an approach that has received relatively little intensive study. Cloud Computing's features offer advanced operational characteristics for structured multi-agent systems; in essence, integrating a multi-agent system into the core of the Cloud allows the convergence of various functionalities to be improved.

Keywords Cloud system · Multi-agent · Resource allocation · MAS · Cloud computing resource allocation

1 Introduction

The Cloud was introduced as a concept by Chellappa [1], who suggested that the future computational model would be driven far more by economic desires than by technological constraints. In recent years, the research community and the computing sector have made great progress in implementing this computing model. This has led to a substantial increase in both public and private platforms [2–5] aimed at offering new ideas that can meet the Cloud Computing paradigm's existing needs. Certainly, general societal support for this model [6] has contributed greatly to its rapid growth, in addition to the economic interests of major technology companies that focus on developing its technological aspects [7, 8].

Cloud Computing can be described from many points of view, such as job processing, software, data storage, and virtual environment use. Cloud infrastructure is a form of resource-on-demand access to a common set of computing resources (e.g., networks, servers, software, storage, and services) that can be controlled and monitored by the service provider with minimal management effort or intervention, as in Fig. 1.

The paper is structured as follows: the next section presents the motivation; Sect. 3 defines the Cloud; Sect. 4 introduces the multi-agent system (MAS); Sect. 5 gives the literature survey; Sect. 6 provides a summary; and the final section concludes.

F. Jowda (B) · M. Al-Asfoor
College of Computer Science and IT, University of Al-Qadisiyah, Al Diwaniyah, Iraq
e-mail: [email protected]
M. Al-Asfoor
e-mail: [email protected]

Fig. 1 Cloud resources [9]

2 Motivation

Different cloud users require several offerings according to their rapidly changing needs, so Cloud Computing must make all the necessary resources available to cloud clients. However, it is challenging for cloud vendors to deliver all the services needed on schedule because resources are scarce. From a cloud vendor's point of view, cloud services should be distributed fairly, so meeting the quality-of-service needs and the satisfaction of cloud customers is a problem. Traditional resource allocation methods are not suitable for Cloud Computing because it relies on virtualization techniques of a distributed nature; the heterogeneity of hardware capacity and the variation in functions and workload pose new problems for allocating resources flexibly and manageably while achieving the service-level goals of cloud application users. The ultimate purpose of allocating cloud resources is to increase the benefit for cloud vendors and reduce expenses for cloud users.

3 Cloud Definition

The cloud is a large pool of virtual resources that are readily accessible and usable [7, 8], operating on the principle of pay-as-you-go [10]. The increasing significance of this model has given rise to a wide range of definitions [11–14]. The most widely agreed, and in our view the most valid from both a technical and a practical viewpoint, is provided by NIST [11]. In this definition, Mell and Grance propose "that Cloud Computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models". Vaquero et al. [13] defined the cloud as follows: "Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay per-use model in which guarantees are offered by the Infrastructure Provider by means of customized SLAs". Plummer et al. [15] defined the cloud as "a style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers". Marston et al. [16] defined cloud computing as: "It is an information technology service model where computing services (both hardware and software) are delivered on-demand to customers over a network in a self-service fashion, independent of device and location. The resources required to provide the requisite quality-of service levels are shared, dynamically scalable, rapidly provisioned, virtualized and released with minimal service provider interaction. Users pay for the service as an operating expense without incurring any significant initial capital expenditure, with the cloud services employing a metering system that divides the computing resource in appropriate blocks."

3.1 Types of Services in Cloud

• SaaS stands for "Software as a Service": the user works with cloud-based web systems operating on a cloud platform instead of locally installed programs. It is the responsibility of the cloud operator to maintain and operate the computing resources used by the cloud customer. SaaS is an effective way to take advantage of modern technologies; examples of this service model are Customer Relationship Management (CRM) and Salesforce.com [11, 17–21].
• PaaS stands for "Platform as a Service": it provides, on the cloud platform, an ecosystem for programmers to build and execute applications, offering a platform on which apps and utilities can be operated. Customers do not need to take care of the underlying cloud infrastructure, such as the network, servers, storage, and OS, but they have control over the installed programs. Examples of this model are Microsoft Azure, Google App Engine, and RightScale [11, 17–19, 21].
• IaaS stands for "Infrastructure as a Service": service providers manage a wide range of computing resources, such as storage and processing capacity. Cloud users can control the operating system, servers, and installed software, and possibly have limited control over chosen networking components (e.g., host firewalls). It is often referred to as "Hardware as a Service (HaaS)". Here, the hardware cost can be greatly reduced. Amazon Web Services and OpenStack offer IaaS services, in addition to FlexiScale, Eucalyptus, and GoGrid [11, 17–19, 21] (Fig. 2).

Fig. 2 Reference cloud services [9]

3.2 Deployment Models in Cloud

• Private cloud: the cloud is private to an organization, meaning services are not provided to the general public. All services are managed by members of the organization or by outside parties, i.e., vendors. The cloud can exist on or off the premises [11, 18, 19, 22].
• Public cloud: the cloud services managed by a company are available to the public on a "pay as you go" basis. This cloud can be used by businesses to save hardware and/or software costs. A public cloud can offer a combination of solutions concerning data protection, data storage, consistency, level of control, etc. [11, 18, 19, 22].
• Community cloud: available to specific groups of people or organizations, all of whom can share resources within the community. The community cloud can be located on or off the premises [11, 18, 19].
• Hybrid cloud: a mix of two or more clouds (private, community, and public) [11, 18, 19]. This model provides the user with more flexibility and allows for better use and protection of the user's infrastructure (Fig. 3).

Fig. 3 Reference cloud deployment models

4 Multi-agent System (MAS)

Multi-agent systems have emerged as a way of representing dynamic, scalable networks [23]. A MAS is a system composed of more than one agent interacting with the others to achieve a target or a judgement that cannot be reached by a single agent. In a MAS, each agent must be able to collaborate, organize, and interact with the other designated agents, typically by transmitting messages over some computer network infrastructure. In general, agents represent or function on behalf of consumers or stakeholders with entirely different interests and motives. These clients need the ability to cooperate, organize, and compromise with each other to communicate effectively, in the same manner as we cooperate, coordinate, and negotiate with others in our everyday lives [24].

Several different definitions have been suggested, depending on the research discipline they come from. The goal here is not to list the different definitions, but to choose one that is general and acceptable. In [25], a multi-agent system is defined as "a loosely coupled network of problem-solving entities (agents) that work together to find answers to problems that are beyond the individual capabilities or knowledge of each entity (agent)".

The prime reasons for using a multi-agent system are the capacity for cooperation in problem-solving, the capability of sharing information, concurrent computation of common problems, modular implementation and creation, tolerance to faults, and the ability to reflect various points of view. MAS can be used to build a design of the Cloud Computing environment that is far more efficient, flexible, and adaptable than what is currently available.
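To make this cooperation idea concrete, the following minimal C++ sketch (our own illustration, not taken from any of the surveyed systems; all class and variable names are hypothetical) shows a broker-style loop in which consumer agents request CPU capacity from a provider agent, which grants or refuses each request based on its local state:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct ProviderAgent {
    int freeCpu;                       // capacity still available (vCPUs)
    bool grant(int cpu) {              // the provider's local decision rule
        if (cpu <= freeCpu) { freeCpu -= cpu; return true; }
        return false;
    }
};

struct ConsumerAgent {
    std::string name;
    int cpuNeeded;
};

int main() {
    ProviderAgent provider{8};         // a shared pool of 8 vCPUs
    std::vector<ConsumerAgent> consumers{{"A", 3}, {"B", 4}, {"C", 3}};

    // The "broker": forwards each consumer request and reports the outcome.
    for (const auto& c : consumers) {
        bool ok = provider.grant(c.cpuNeeded);
        std::cout << c.name << " asks for " << c.cpuNeeded << " vCPUs: "
                  << (ok ? "granted" : "refused")
                  << " (remaining " << provider.freeCpu << ")\n";
    }
    return 0;
}
```

In a real MAS platform such as JADE, this direct function call would be replaced by asynchronous message passing between independently running agents, which is what enables negotiation and coordination at scale.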

5 Literature Survey

Ralha et al. [26] introduce a multi-agent framework for dynamically monitoring, forecasting, and distributing the load allocation of computing resources, reducing the execution time and expense of cloud services; it is tested with the agent-based simulator MASE-BDI. The framework has a three-layer structure. The user interface layer is the online front end that allows users to send applications to run in the cloud environment. The cloud management layer holds the master VM of the cloud environment, the Virtual Machine Manager agent (VMMgr), the provider cost, and the provisioning knowledge base; it interacts with the Manager Agent, which in turn interacts with the Monitoring Agent and the elasticity KB located in each slave virtual machine of the cloud execution layer. The VMMgr provides contact between the cloud execution layer and the user interface, using inference rules for resource provisioning to configure resources automatically for the program running on the cloud platform. The VMMgr creates the VMs with the user's chosen features, creates the Manager Agent, and constantly updates the database after the execution of every program. The Monitoring Agent and Manager Agent reside in every slave VM of the cloud execution layer. The Manager Agent's core role is to monitor the use of the VM's services; it checks the need for horizontal elasticity based on information from the KB and data given by the Monitoring Agent, and it tells the VMMgr agent when such a need occurs.

Alwadan [27] proposes a structure in which the agent manager (broker) receives requests containing specifications such as CPU speed, RAM size, storage size, etc., finds the best option for performing the consumer's tasks, and compares the request with the registered cloud providers. This sorting is carried out according to the agreed time and cost. After that, the broker communicates with the agents responsible for liaising between the broker and the service providers to ensure that the providers are ready to take on new jobs, sends the work to the provider, and sets a Time-to-Live (TTL) counter. If the TTL expires before the job is completed, the broker asks the provider's agent to stop the work; this prevents endless jobs from persisting. During the task, the provider's agent sends frequent reports to the broker on the progress of the task, which allows the broker to monitor the job's achievement. If the running of a job fails to continue at any point, the feedback from the providers' representatives assists the broker in locating an alternative provider to execute the job. In the last step, the broker gives the work results back to the clients. The proposed architecture has been implemented and tested on the JADE platform.

Mazrekaj et al. [28] suggest an approach for improving the performance and actual quality of the data center using multi-agent systems. It is based on a utility-function model driven by host CPU utilization to guide live migration activities. Deciding allocations by optimizing a utility function provides a flexible resource-sharing policy that does not exist in rule-based and threshold-based policies. The architecture implies a central allocation algorithm; it is composed of a local Host Agent (HA) and a central controller, the Global Agent (GA), and can easily be modified by changing the type of utility function. The HA is charged with continually monitoring host CPU utilization and determining whether the host is in an overload or underload state; this information is passed to the GA. The HA is also responsible for initiating local adaptation actions by deciding the allocation of CPU capacity (CAP) to virtual machines and resolving disputes when the sum of maximum values exceeds the CPU capacity across all virtual machines. The GA makes decisions on the global allocation of resources to improve VM placement so as to decrease SLA violations and power usage. It gets information from the HA on host status, available processor capacity, used processor capacity, and expected CPU usage, and performs effective live VM migration actions. In some cases, the HA decides to execute live migration without involving any central control or the GA: when it detects an overload or underload condition, it redirects the live migration demand randomly to another chosen HA to probe its host as a potential VM destination.

Wang et al. [29] presented a multiagent-based (MA) resource allocation method to address the cloud service providers' (CSPs) problem of allocating appropriate VM resources to physical machines (PMs) to reduce energy consumption, by assigning each PM a cooperative agent to assist the PM in resource management. The proposed MA approach has two complementary mechanisms: (1) auction-based VM allocation: in the first step, an auction-based VM allocation process lets agents assess which PM hosts new VMs; theoretical analyses show that the auction-based allocation mechanism gives a good guarantee of power-cost savings under system dynamics; (2) negotiation-based VM consolidation: to deal with system dynamics and avoid the exorbitant expense of migrating virtual machines, a local negotiation mechanism of VM consolidation is designed for agents to exchange their dedicated virtual machines to save energy cost (an illustrative sketch of the auction idea is given below, after the remaining summaries).

De la Prieta and Corchado [30] propose an architecture paradigm called "+Cloud" (Multi-agent System Cloud) based on intelligent-agent virtual organizations (VOs). The major goal is to monitor and control a Cloud Computing framework, enabling it to respond to needs automatically and dynamically at any given time. The model includes a monitoring agent (Local Monitor) and a local administrator (Local Manager) for each physical server; between the two, their function is to disperse services in the VMs of the actual server where they reside. Another specialist agent, the Global Manager, also based at each of the infrastructure's physical nodes, is responsible for deciding to grant or cancel the contract for a particular service. Each service given to customers is connected to two agents, one for monitoring (Service Monitor) and the second for control (Service Supervisor), both responsible for ensuring compliance with the SLA. There are other agents at the entry point of the system for somewhat different roles: two control agents, the first responsible for monitoring the hardware infrastructure (Hardware Supervisor), its status, and the starting or stopping of physical servers on request, and the Global Supervisor, a supervisor agent who ensures that the other modules and agents operate correctly and in conformity with their requirements. Finally, an SLA Broker agent is responsible for establishing service agreements with platform users. This study used a cloud platform deployed in the BISITE Research Group's HPC environment, which enables virtualization using the KVM virtualization system and Intel-VT technology. The study focused on denial-of-service (DoS) attacks: if the median total response time exceeds 2.5 s, the system immediately begins adapting the virtual infrastructure at the physical level. The service response time reverts to a value below the appropriate standard of service (<2.5 s) after the automatic correction process finishes and the weights are changed.

Fareh et al. [31] aim to allow users to select the best services to meet their requirements and to help cloud vendors distribute their resources efficiently. The general system architecture consists of four interacting layers. (1) Cloud and user interaction layer: the mediator between the cloud user and the Cloud Computing framework; it allows users to interrogate the cloud system through a GUI (to describe the user's needs, display the results of the resource allocation, etc.). From the specification of these requirements, the Cloud User Agent (CUA) formulates a user request description, which is submitted to the broker agents (BAs); CUAs help consumers find the best resources for their requirements, acting on behalf of every consumer in identifying the necessary QoS and formalizing the agreed contract. (2) Brokering layer: the intermediate layer between the cloud providers and the first layer; it comprises broker agents (BAs) that discover the provider offer most acceptable for the customer request by leading negotiation activities until conditions accepted by both parties form an SLA between the cloud consumers and cloud providers. (3) Resource selection and placement layer: this layer includes a collection of Coordinator Agents (CoAs) and Cloud Provider Agents (CPAs); at this stage, for each consumer request received, the CPA contacts the CoA to select appropriate resources, and the CoA coordinates the lower-layer agents (the data center managers). (4) Resource exploration and management layer: the lowest layer; it includes a collection of data-center management agents (DCMAs), each responsible for managing a data center's resources and determining, at a given moment, the resource capacity of the physical nodes. Simulations with CloudSim were carried out using the JADE platform.
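To give a concrete feel for the auction-based allocation idea of Wang et al. [29] summarized above, the following toy C++ example (our own simplification, not their algorithm; the linear power model, the idle-host penalty, and all names are assumptions made for illustration) lets each PM agent bid the marginal energy cost of hosting a VM and awards the VM to the lowest bidder:

```cpp
#include <iostream>
#include <limits>
#include <vector>

struct PmAgent {
    double usedCpu, totalCpu;
    // Marginal energy-cost "bid" for hosting vmCpu more load; a PM that
    // cannot fit the VM bids infinity. Idle machines are penalised so that
    // load concentrates on fewer active hosts.
    double bid(double vmCpu) const {
        if (usedCpu + vmCpu > totalCpu)
            return std::numeric_limits<double>::infinity();
        double idlePenalty = (usedCpu == 0.0) ? 100.0 : 0.0;  // waking a host costs energy
        return idlePenalty + 150.0 * vmCpu / totalCpu;        // assumed linear power model
    }
};

int main() {
    std::vector<PmAgent> pms{{2.0, 8.0}, {0.0, 8.0}, {6.0, 8.0}};
    double vmCpu = 3.0;                                       // the VM being auctioned

    int winner = -1;
    double best = std::numeric_limits<double>::infinity();
    for (int i = 0; i < (int)pms.size(); ++i) {               // collect the bids
        double b = pms[i].bid(vmCpu);
        if (b < best) { best = b; winner = i; }
    }
    if (winner >= 0) {
        pms[winner].usedCpu += vmCpu;                         // award the VM
        std::cout << "VM placed on PM " << winner << " at bid " << best << "\n";
    }
    return 0;
}
```

Here the already-active PM 0 wins because waking the idle PM 1 would cost more and PM 2 cannot fit the VM; the same greedy logic, run repeatedly as VMs arrive and leave, is what an auction mechanism distributes across agents.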


Al-Ayyoub et al. [32] focus on optimizing the utilization of resources using a multiagent system in which multiple agents are accountable for various activities, including monitoring customers and the available resources, based on customer requests. It seeks to avoid SLA violations and to mitigate the impact of over-provisioning and under-provisioning. The proposed system has three main stages, and the tasks linked with every stage are managed automatically by rational agents: (1) monitoring phase: monitoring of resources, requirements, and customer tasks; (2) analysis phase: all information on the current state of the provider's resources and the customers' tasks, requirements, and behaviors gathered by the monitoring mechanism is analyzed so that a group of agents can generate appropriate decisions on allocating or releasing resources based on the SLA agreement; (3) execution phase: this phase is in charge of implementing the decisions and actions taken to allocate or release resources. The multiagent components include a global utility agent and a group of local utility agents. A local agent in charge of reformulating requests is allocated to each client; based on historical regression analysis, the volume of resources to be used is calculated without causing an SLA breach. The suggested framework is implemented and tested on the CloudSim emulator, and the results show that it increases the utilization of resources and reduces power usage while preventing SLA violations.

Shyam and Manvi [33] suggest an agent-based Best-Fit resource allocation system using the customer's cloudlet agent and the provider's resource agent to ensure that the number of VMs used is reduced, the usage of resources in the VMs is optimized, the total cost is minimized, and the guaranteed QoS is met. The Best-Fit approach improves the placement ratio, and host-overload-induced virtual machine migration is less likely to occur, which increases the overall performance of the system. There are two types of agents. The User Cloudlet Agent (UCA) is located on the client system; it analyses the user's demand, provides full details on the various services of the different CSPs, accepts demands for the allocation of the required amount of resources for executing the application, and assists in future SLA agreements with the provider's resource agent (PRA) before sending the application. The PRA, on the server side, allocates resources to cloudlet jobs in the best way: it receives the UCA's list of cloudlets, prioritizes them depending on the SLAs, and connects them to relevant CSP VMs using the Best-Fit method (a sketch of this placement rule appears at the end of this section). The work is simulated, and the results are compared with other agent-based resource allocation approaches using Round-Robin and First-Come-First-Serve. The Best-Fit approach is shown to work better in terms of VM allocation, job execution time, expense, and resource usage.

Farahnakian et al. [34] present a framework for virtual machine management that depends on a multiagent system. The proposed agents for assigning virtual machines are arranged in a hierarchical system of three levels; the aim is to reduce energy use and the number of data center migrations while maintaining a high degree of commitment to SLAs. The agents are: a Local Agent (LA) per PM, a software unit responsible for controlling the usage of the PM's resources; a Cluster Agent (CA) for managing the resources of a cluster and controlling a collection of LAs; and, as a master node, a Global Agent (GA) for managing the CAs, so that the GA has a description of all clusters in the data center. The main concept of the Hierarchical VM Management (HiVM) architecture is to break the broad VM management dilemma into a range of smaller ones, such as VM distribution, VM placement, and VM consolidation. Experimental results in the CloudSim simulation show that, compared with the latest dynamic consolidation methods, the suggested hierarchical VM consolidation paradigm achieves a high-quality approach that can minimize energy usage, migration count, and SLA violations, while remaining flexible and scalable.
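The Best-Fit placement rule referenced by Shyam and Manvi [33] can be illustrated with a short sketch (a simplification under an assumed single-dimensional capacity, not their implementation): each request is placed on the host whose remaining capacity is the smallest that still fits it, which packs hosts tightly and reduces the number of active hosts.

```cpp
#include <iostream>
#include <vector>

// Return the index of the tightest host that still fits the demand,
// or -1 if no host can accommodate it.
int bestFitHost(const std::vector<int>& freeCap, int demand) {
    int best = -1;
    for (int i = 0; i < (int)freeCap.size(); ++i)
        if (freeCap[i] >= demand && (best == -1 || freeCap[i] < freeCap[best]))
            best = i;
    return best;
}

int main() {
    std::vector<int> freeCap{4, 7, 2};    // free capacity units per host
    for (int demand : {2, 3, 4}) {        // incoming cloudlet/VM requests
        int h = bestFitHost(freeCap, demand);
        if (h >= 0) {
            freeCap[h] -= demand;
            std::cout << "demand " << demand << " -> host " << h << "\n";
        } else {
            std::cout << "demand " << demand << " -> must wait\n";
        }
    }
    return 0;
}
```

In this run, the demand of 2 lands on the nearly full host 2, the demand of 3 on host 0, and the demand of 4 on host 1, leaving the least fragmented leftover capacity; First-Fit or Round-Robin would scatter the same load more widely.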

6 Summary

This paper summarizes the process of integrating cloud computing with multi-agent systems. This integration serves several purposes, shown in Table 1 together with the proposed systems, technologies, and algorithms used for each purpose.

Table 1 Summary of studies

| Author(s) | Aim | The agents | Algorithm | Platform |
|---|---|---|---|---|
| Ralha et al., 2019 | Reducing the time and expense of cloud services | VM manager, manager agent, monitoring agent | – | MASE-BDI simulator, Amazon EC2 platform |
| Alwadan, 2018 | Discovery of the right resources, bargaining between vendors and clients, tracking of processing jobs | Clients (human actor), agent manager (broker), providers' agent | – | – |
| Mazrekaj et al., 2017 | Improving the performance and actual quality of the data center | Local agent (host agent) and central controller (global agent) | Best first decreasing (BFD) | – |
| Wang et al., 2016 | Reduce energy consumption | Agents (swap contract: only two agents; cluster contract: more than two) | Auction-based VM allocation; compute profitable swap; swap contract and cluster contract | JADE |
| De la Prieta and Corchado, 2016 | Monitor and control a cloud computing framework, enabling it to respond to needs automatically and dynamically at any given time | Local monitor, local manager, global manager (per physical server), service monitor, service supervisor, SLA broker, hardware supervisor, global supervisor, identity manager | – | BISITE |
| Fareh et al., 2016 | Allow users to select the best services and help cloud vendors distribute their resources efficiently | Agents (cloud users, broker, coordinators, datacenter management, cloud providers) | – | CloudSim, using the JADE platform |
| Al-Ayyoub et al., 2016 | Increase the utilization of resources and reduce power usage while preventing SLA violations | Agents (global utility, group of local utility) | Request reformulation (RR) algorithm | CloudExp (extension of CloudSim) |
| Shyam and Manvi, 2015 | Better allocation of VMs, job execution time, expense, and usage of resources | Agents (cloudlet, resource) | UCA, PRA, VMs_Utilization | Cloudlet |
| Farahnakian et al., 2014 | Minimize energy usage, migrations, and SLA violations | Agents (cluster, global, and local per PM) | Consolidation, VM allocation policy | CloudSim |

7 Conclusions

The modern era in software is characterized by the availability of software through cloud computing platforms, whose popularity and elegance stem from the resources the cloud provides. Cloud vendors need to monitor and distribute all services to cloud users in a timely manner despite the scarcity of resources and the users' dynamically changing needs.


We have provided a summary of cloud computing resource allocation with multi-agent systems. In cloud computing, many scholars have proposed algorithms and methods for dynamic resource allocation. In short, the following requirements can be met with an appropriate resource allocation technology: quality of service (QoS), conscious use of resources, cost savings, and energy reduction. Little of the literature has focused on allocating resources based on IaaS with VM scheduling. The ultimate goal of allocating resources is to increase the benefit to cloud vendors and reduce the cost to cloud users. We believe that this survey paper will assist and inspire research groups to develop the most appropriate methodologies and systems for effective and integrated resource allocation strategies using multiple cloud agents.

References
1. Chellappa, R.: Intermediaries in cloud-computing: a new computing paradigm. In: INFORMS Annual Meeting, Dallas, pp. 26–29 (1997)
2. Alam, M.I., Pandey, M., Rautaray, S.S.: A comprehensive survey on cloud computing. Int. J. Inf. Technol. Comput. Sci. 2, 68–79 (2015)
3. Luo, J.-Z., Jin, J.-H., Song, A., Dong, F.: Cloud computing: architecture and key technologies. J. China Inst. Commun. 32, 3–21 (2011)
4. Wen, X., Gu, G., Li, Q., Gao, Y., Zhang, X.: Comparison of open-source cloud management platforms: OpenStack and OpenNebula. In: Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2012), pp. 2457–2461 (2012). https://doi.org/10.1109/FSKD.2012.6234218
5. Hu, P., Dhelim, S., Ning, H., Qiu, T.: Survey on fog computing: architecture, key technologies, applications and open issues. J. Netw. Comput. Appl. 98, 27–42 (2017)
6. Leavitt, N.: Is cloud computing really ready for prime time? Computer (Long Beach, Calif.) 42, 15–20 (2009)
7. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I.: A view of cloud computing. Commun. ACM 53, 50–58 (2010)
8. Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J., Brandic, I.: Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Futur. Gener. Comput. Syst. 25, 599–616 (2009)
9. Cloud computing. https://www.datamation.com/cloudcomputing/what-is-cloud-computing.html
10. Josep, A.D., Katz, R., Konwinski, A., Gunho, L.E.E., Patterson, D., Rabkin, A.: A view of cloud computing. Commun. ACM 53, 50–58 (2010)
11. Mell, P., Grance, T.: The NIST definition of cloud computing (2011)
12. Foster, I., Zhao, Y., Raicu, I., Lu, S.: Cloud computing and grid computing 360-degree compared. In: 2008 Grid Computing Environments Workshop, pp. 1–10. IEEE (2008)
13. Vaquero, L.M., Rodero-Merino, L., Caceres, J., Lindner, M.: A break in the clouds: towards a cloud definition (2008)
14. Wang, L., Von Laszewski, G., Younge, A., He, X., Kunze, M., Tao, J., Fu, C.: Cloud computing: a perspective study. New Gener. Comput. 28, 137–146 (2010)
15. Plummer, D., Bittman, T.J., Austin, T., Cearley, D.C., Smith, D.M.: Cloud computing: defining and describing an emerging phenomenon (2009)
16. Marston, S., Li, Z., Bandyopadhyay, S., Zhang, J., Ghalsasi, A.: Cloud computing—the business perspective. Decis. Support Syst. 51, 176–189 (2011). https://doi.org/10.1016/j.dss.2010.12.006
17. Reddy, V.K., Rao, B.T., Reddy, L.S.S.: Research issues in cloud computing. Glob. J. Comput. Sci. Technol. (2011)
18. Zhang, S., Yan, H., Chen, X.: Research on key technologies of cloud computing. Phys. Procedia 33, 1791–1797 (2012)
19. Zissis, D., Lekkas, D.: Addressing cloud computing security issues. Futur. Gener. Comput. Syst. 28, 583–592 (2012)
20. Zhang, L.-J., Zhang, J., Fiaidhi, J., Chang, J.M.: Hot topics in cloud computing. IT Prof. 12, 17–19 (2010)
21. Sadashiv, N., Kumar, S.M.D.: Cluster, grid and cloud computing: a detailed comparison. In: 2011 6th International Conference on Computer Science & Education (ICCSE), pp. 477–482. IEEE (2011)
22. Grossman, R.L.: The case for cloud computing. IT Prof. 11, 23–27 (2009)
23. Al-Asfoor, M., Fasli, M.: A study of multi agent based resource search algorithms. In: 2012 4th Conference on Electronic Engineering and Computer Science (CEEC 2012), Conference Proceedings, pp. 65–70 (2012). https://doi.org/10.1109/CEEC.2012.6375380
24. Wooldridge, M.: An Introduction to Multiagent Systems. Wiley, Hoboken (2009)
25. Stone, P., Veloso, M.: Multiagent systems: a survey from a machine learning perspective. Auton. Robots 8, 345–383 (1997)
26. Ralha, C.G., Mendes, A.H.D., Laranjeira, L.A., Araújo, A.P.F., Melo, A.C.M.A.: Multiagent system for dynamic resource provisioning in cloud computing platforms (2019). https://doi.org/10.1016/j.future.2018.09.050
27. Alwadan, T.: Cloud computing and multi-agent system: monitoring and services. J. Theor. Appl. Inf. Technol. 96 (2018)
28. Mazrekaj, A., Minarolli, D., Freisleben, B.: Distributed resource allocation in cloud computing using multi-agent systems. Telfor J. 9, 110–115 (2017). https://doi.org/10.5937/telfor1702110m
29. Wang, W., Jiang, Y., Wu, W.: Multiagent-based resource allocation for energy minimization in cloud computing systems. IEEE Trans. Syst. Man Cybern. Syst. 47, 205–220 (2016)
30. De la Prieta, F., Corchado, J.M.: Cloud computing and multiagent systems, a promising relationship. In: Intelligent Agents in Data-intensive Computing, pp. 143–161. Springer (2016)
31. Fareh, M.E.K., Kazar, O., Femmam, M., Bourekkache, S.: An agent-based approach for resource allocation in the cloud computing environment. In: Proceedings of the 2015 9th International Conference on Telecommunication Systems, Services, and Applications (TSSA 2015) (2016). https://doi.org/10.1109/TSSA.2015.7440447
32. Al-Ayyoub, M., Daraghmeh, M., Jararweh, Y., Althebyan, Q.: Towards improving resource management in cloud systems using a multi-agent framework. Int. J. Cloud Comput. 5, 112–133 (2016)
33. Shyam, G.K., Manvi, S.S.: Resource allocation in cloud computing using agents. In: 2015 IEEE International Advance Computing Conference (IACC), pp. 458–463. IEEE (2015)
34. Farahnakian, F., Liljeberg, P., Pahikkala, T., Plosila, J., Tenhunen, H.: Hierarchical VM management architecture for cloud data centers. In: 2014 IEEE 6th International Conference on Cloud Computing Technology and Science, pp. 306–311. IEEE (2014)

IoT-Based Smart Helmet for Riders N. Bhuvaneswary, K. Hima Bindu, M. Vasundhara, J. Chaithanya, and M. Venkatabhanu

Abstract This research work describes a methodology for adapting safety measures for people riding motor vehicles; it helps prevent the majority of two-wheeler accidents and also helps prevent bike theft and improper helmet usage. The main motive and objective of this device are to enhance the safety and security of both the rider and the motorcycle. It is already known that more than 1.5 lakh motorcyclists are injured in road-transport accidents. In the proposed system, the bike and helmet are linked through an RF signal: starting the bike is possible only if the rider wears the helmet; otherwise, an alarm system reminds the rider to wear the helmet. This arrangement protects the bike rider from road accidents. In this system, a special electronic helmet is designed to act as a transmitter module; hence, it is impossible for the motorcyclist to start the bike without the helmet. The helmet is designed using a microswitch with a TX module, and the bike key is controlled by the signal from an RFID-based secured card with an RX module. On receiving a signal from the helmet, the device works according to the received command; otherwise, the supply to the bike's ignition is cut off. An RFID module connected to the microcontroller detects authorized access: if unauthorized access is detected, the controller uses the RFID module to recognize it, and the ignition supply and bike are switched off.

Keywords Bikers safety · Theft detection · Arduino microcontroller · Electronic helmet · RFID module · Microswitch

N. Bhuvaneswary (B) · K. Hima Bindu · M. Vasundhara · J. Chaithanya · M. Venkatabhanu
Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
e-mail: [email protected]

1 Introduction

These days, riding a motor vehicle has become a common and most preferred method of transportation for all needs. With such large numbers of vehicles on the road, the accidents caused by them have also been increasing rapidly. Many of these road accidents involve two-wheelers ridden without proper helmet usage, with reckless driving, or without proper installation of all the safety measures. As the number of motor vehicles increases, vehicle safety should also be updated regularly and effectively, for as long as every individual's life has need and value. This concern for every motor-vehicle user, and for the family members and individuals who depend on the rider for their livelihood, is molded here into a compact method that focuses primarily on the safety of the rider. Accordingly, this model focuses on the safety of the rider and helps prevent accidents that can be avoided by the proper utilization of safety measures.

The proposed model is an efficient technique that helps reduce accidents; it also helps prevent bike theft by unauthorized persons. With such a technique and the efficient utilization of advanced technology, both two-wheeler accidents and vehicle theft can be prevented. The proposed method uses an Arduino microcontroller, a microswitch, an LCD, an RF transmitter and receiver module, a helmet module with an RF transmitter that sends data to the microcontroller for further action, and a bike module that implements the complete method. The Arduino microcontroller controls the receiver modules and sends them the instructions and operations to be performed. The RFID module acts as a communication channel for recognizing the proper person and whether the authorized rider of the two-wheeler is riding with proper helmet usage. The LCD screen displays the information processed by the microcontroller, as programmed by the user. The system also contains an alarm unit that sounds when unauthorized access is detected or when a person rides without wearing the helmet.

2 Literature Survey

In [1], the authors discuss a project implemented with alcohol detection, accident identification, and location tracking. Wearing the helmet is mandatory; if an accident occurs, the ignition switch is automatically locked and a message, along with the location, is sent to the registered number.

In [2], the authors discuss the implementation of IR sensors to detect whether the person is wearing a helmet and gas sensors to detect alcohol consumption; accident information is sent to the nearest hospital using GSM. The disadvantages are that using a large number of sensors increases the cost and that collecting data on the nearest hospitals can be complicated.


In [3], the authors discuss the development of a smart helmet that avoids accidents by sending accident information on time. This helmet is also used in the mining industry, where it provides safety and gives warnings about hazardous gases, saving many lives. The disadvantage is that vehicle theft prevention and alcohol detection are not included.

In [4], the authors discuss a helmet with vibration sensors placed at different positions. When the rider falls and the helmet hits the ground, the information is sent to a GSM module, and the location is then sent to an ambulance or a registered phone number. The disadvantages are that the system does not check whether the person is wearing the helmet and that a large number of sensors are used.

In [5], the authors mainly focus on detecting the presence of the helmet before the rider starts the vehicle. If the rider does not wear the helmet, the ignition of the bike is switched off. Many additional features could be added to this project.

In [6], the authors' major goal is to detect, prevent, and notify about accidents. The helmet is implemented with a Raspberry Pi 3 and a Bluetooth module, and cloud-based technology is used to send a captured image to the registered number. New software was created to identify the exact point on Google Maps. The cost of this project is high, as many sensors are used and the Raspberry Pi 3 is costlier than an Arduino.

In [7], the project implements accident detection and helmet detection with SMS alerts. It could be enhanced with more features such as alcohol detection and vehicle theft prevention.

In [8], many sensors are used for accident detection (vibration and capacitive sensors), Zigbee is used to detect the presence of the helmet, and a KNN algorithm is used to track the nearest hospitals. The disadvantage is the use of a large number of sensors.

In [9], the authors focus only on accident detection using sensors; many additional features could be included.

In [10], the project is implemented with an alcohol sensor, a gyroscope sensor that guides directions using LEDs, and a proximity sensor to detect whether the person wears the helmet; a camera captures images that are shown on a head-up display fixed inside the helmet. The major disadvantage is that it is not budget friendly, as the cost of the camera and sensors puts the project beyond the reach of middle-class and lower-income riders.

In [11], the project mainly focuses on the prevention of accidents by controlling directions and turning on indicators by voice, so that deviation from driving is avoided; if an accident occurs, it sends information to nearby hospitals using GPS and IoT technology. Many features could be added to this project.

By analyzing all the existing methods, this research work implements a budget-friendly project that everyone can buy. It has two sections: the first detects the helmet by using a pushbutton (all the sensors are replaced by the pushbutton), and the second prevents vehicle theft by using an RFID tag and an RFID reader. The bike switches ON only if both conditions are satisfied [12]. The conditions are arranged such that only when an authorized person tries to start the bike does the system then check for the presence of the helmet; this is done in order to reduce complexity. The system has a helmet side with a transmitter and encoder, and a receiver side with a receiver and decoder.

3 Purpose

• To design an electronic helmet that protects vehicle riders by using the RFID technique.
• To design a safety system using RFID technology.

Both are designed as an embedded system using the Arduino UNO.

4 Scope and Objective

• The proposed system uses a new technology that provides an easy way of communication.
• The vehicle ignition starts only after the prescribed person gives authorization through an RFID device.
• An electronic helmet is designed to protect riders from accidents.
• The RFID module gives protection to the bike.

5 Module Description

5.1 Helmet Side

In this section, a micro-sensitive switch (pushbutton) and a code generator with an RF transmitter connected to the switch are fitted in the helmet. When the person wears the helmet, the pushbutton is switched ON, and the data generated by the pushbutton is sent to the code generator with the transmitter, where it is encoded serially and sent to the receiver present on the bike side (Fig. 1).

Fig. 1 Block diagram of helmet side

5.2 Bike Side

This section contains an RF receiver with a decoder, an RFID reader, an Arduino UNO microcontroller, a motor driver circuit, the bike engine motor, a buzzer, and an LCD. The RF receiver receives the information from the transmitter serially; since the Arduino reads parallel information, the decoder converts the serial information to parallel form and sends it to the Arduino. The RFID reader is present to prevent vehicle theft: the bike key chain contains an RFID tag holding a unique 11-digit identifier. When this tag is placed at the RFID reader, the reader reads the identifier and sends it to the Arduino, which compares it with the stored 11-digit identifier. If both are the same, the Arduino checks the second condition, namely the presence of the helmet. If both conditions are satisfied, the bike engine motor is switched ON. Between the Arduino and the bike engine motor, a motor driver circuit acts as an amplifier: the Arduino supplies only around 5 mA, which is not sufficient to start the bike engine motor, so the driver circuit amplifies the current to the level needed to start the engine. If either condition is violated, the buzzer sounds to indicate that an unauthorized person is trying to start the vehicle; when the conditions are satisfied, all the information is displayed on the LCD (Fig. 2).
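The following condensed Arduino (C++) sketch illustrates this bike-side control flow. The pin assignments, the authorized tag string, and the use of SoftwareSerial for the RFID reader are assumptions made for this example; the paper does not specify them.

```cpp
#include <LiquidCrystal.h>
#include <SoftwareSerial.h>
#include <string.h>

const char AUTH_TAG[12] = "12005238976";     // hypothetical 11-digit tag ID
const int HELMET_PIN = 7;                    // decoder data line: HIGH = helmet worn
const int MOTOR_PIN  = 8;                    // to the driver circuit / engine relay
const int BUZZER_PIN = 9;

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);       // 16x2 LCD on assumed pins
SoftwareSerial rfid(10, 13);                 // RX, TX for the RFID reader (assumed)

void setup() {
    pinMode(HELMET_PIN, INPUT);
    pinMode(MOTOR_PIN, OUTPUT);
    pinMode(BUZZER_PIN, OUTPUT);
    lcd.begin(16, 2);
    rfid.begin(9600);
    lcd.print("Scan RFID card");
}

void loop() {
    if (rfid.available() >= 11) {            // a full 11-character tag arrived
        char tag[12];
        for (int i = 0; i < 11; i++) tag[i] = rfid.read();
        tag[11] = '\0';

        bool authorised = (strcmp(tag, AUTH_TAG) == 0);
        bool helmetOn   = (digitalRead(HELMET_PIN) == HIGH);

        lcd.clear();
        if (authorised && helmetOn) {        // the AND condition from the paper
            lcd.print("Engine ON");
            digitalWrite(MOTOR_PIN, HIGH);
        } else {
            lcd.print(authorised ? "Wear helmet!" : "Unauthorised!");
            digitalWrite(MOTOR_PIN, LOW);
            digitalWrite(BUZZER_PIN, HIGH);  // alert on either violation
            delay(2000);
            digitalWrite(BUZZER_PIN, LOW);
        }
    }
}
```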

6 Components

6.1 Hardware Requirements

Helmet Side
• Microswitch
• Helmet
• HT12E code generator (encoder) with RF transmitter
• Power supply

Bike Side
• HT12D code decoder with RF receiver
• RFID module
• Arduino UNO microcontroller
• Motor driver circuit
• Bike engine motor
• Buzzer
• Regulated power supply
• Connecting wires

Fig. 2 Block diagram of bike side

Microcontroller Unit

The microcontroller board used in this model is an Arduino UNO based on the ATmega328. In brief, its specifications are 14 digital I/O pins, 6 analog input pins, a crystal oscillator generating a constant 16 MHz clock signal, a standard power jack, and a reset button. The board has everything needed to support the microcontroller effectively, including 32 KB of in-system programmable flash memory with read and write capability. The microcontroller also has a set of 32 general-purpose registers and can generate internal and external interrupts. The board is further equipped with a programmable watchdog timer that has its own internal oscillator, and one of its best features is a set of five software-selectable power-saving modes.

Usage: In this project, the microcontroller stores the messages that are displayed on the LCD as well as the embedded program used for processing and execution (Fig. 3).

LCD Display

An LCD is used for displaying the information; here a 2 × 16 LCD is used. LCDs have come into wide use as their prices have declined. An LCD can display characters, graphics, and numbers, in contrast to LED displays, which are limited to numbers and characters. Only a few millimeters thick, LCDs are lightweight and consume little power, which is useful for long operating periods; they do not emit light themselves, but ambient light is sufficient for reading them. The lifetime of an LCD is limited, and it has a wide operating temperature range.

Fig. 3 Arduino microcontroller


Fig. 4 LCD display

Usage: In this project, the LCD displays the status messages, for example when the engine state is shown (Fig. 4).

Power Supply

A power supply circuit is essential in any project. This circuit is implemented to obtain a regulated DC output voltage: a constant 5 V supply is provided by a 7805 IC, and rectification is done using a bridge rectifier built with diodes.

Usage: The power supply section supplies voltage to the whole circuit.

Buzzer

A buzzer or beeper is an audio signaling device; it may be mechanical, electromechanical, or piezoelectric.

Usage: In this project, when the person is not wearing a helmet or an unauthorized person uses the bike, the buzzer gives a beep sound (Fig. 5).

Fig. 5 Buzzer


HT12E—Encoder

The HT12E encoder with its RF transmitter module transmits data at 433.92 MHz. The data to be transmitted, together with the header bits, is transferred through the RF transmission medium upon a signal trigger. The TE trigger option of the HT12E is selected, delivering the application flexibility of the 212 series of encoders. Features: 2.4–12 V operation, low power, high-noise-immunity CMOS technology, low standby current of 0.1 µA (typ.) at VDD = 5 V, and an 18-pin DIP/20-pin SOP package.

Usage: In this project, the HT12E encoder encodes the state of the switch fitted in the helmet (Fig. 6).

HT12D—Decoder

The HT12D with its RF receiver module receives the data at 433.92 MHz. The decoder receives the serial address and data transmitted over the RF carrier by a programmed 212-series encoder and compares the addresses (Fig. 7).

Fig. 6 HT12E encoder

Fig. 7 HT12D decoder


If no error or unmatched codes are found, the input data codes are decoded and transferred to the output pins, and the VT pin goes high to indicate a valid transmission.

Usage: In this project, the HT12D decoder decodes the information sent from the encoder through the RF transmitter.

RFID

RFID stands for radio-frequency identification, a technology that permits the identification of a tag by means of electromagnetic waves. The essential function served by RFID is comparable to bar-code identification; however, line-of-sight alignment is not required for RFID operation. Unattended operation becomes possible, minimizing human error and excessive cost.

Usage: In our project, RFID identifies the person when the tag is tapped on the RFID reader, so that use by an unknown person can be avoided (Fig. 8).

Reader

The reader is a coil protected with plastic; it captures the data supplied by a tag present inside its detectable area. There are often one or more tags within the capture area, and a reader is generally capable of reading the data of multiple tags simultaneously.

Fig. 8 RFID


Driver Circuit

The relay part includes relays and the ULN2003 driver. The microcontroller provides a logic-high output when required, and this output then drives the relay. The ULN2003 consists of seven high-voltage, high-current NPN Darlington transistor pairs. All devices feature common-emitter, open-collector outputs, and to maximize their effectiveness they incorporate suppression diodes for inductive loads and emitter-base resistors for leakage. The ULN2003 has a series base resistor for each Darlington pair, permitting operation directly with TTL or CMOS logic while switching loads at supply voltages up to 50 V. Outputs may be combined for applications that need to sink currents beyond the capability of a single output, such as driving displays of numbers, characters, and graphics.

6.2 Software Requirements

• Arduino C compiler
• Language: Embedded C

7 Implementation of Proposed Methodology

The proposed system, which works mainly for the safety of the rider, is split into two modules: the transmitter end containing the helmet module, and the receiver end consisting of the bike (motor) module. The transmitter consists of an RF transmitter that sends information to the receiver side of the system, i.e., the motor module, and is additionally fitted with a small pushbutton in the wearable part of the helmet [13]. This pushbutton is placed so that it is connected in synchronization with the code generator and the RF transmitter, which helps in the effective transmission of the appropriate data from the transmitter's end. The design includes an automatic code generator that generates a code stating whether the rider is wearing the helmet. The code generator used in this method is a 4-bit code generator, although only a single bit is needed to carry the helmet state from the transmitter module to the receiver module. For example, if '0' is transmitted [14], the RF signal received by the bike module is interpreted, according to the program stored in the Arduino microcontroller, as 'helmet is not being used'. Similarly, if the code generator sends '1', the Arduino takes it that the helmet is being used and sends a signal to the other part of the receiver to ignite the engine. Even though a single-bit generator would suffice, a 4-bit code generator is used, which helps preserve the privacy of the signal (Fig. 9).


Fig. 9 Arduino UNO pins

This helps in such a way that even if an unauthorized person transmits a signal, that person would have to try all the combinations of the 4 bits. On the receiver end, an RF receiver receives the data from the transmitter; this data is passed to the code decoder, which decodes the code sent from the transmitter end, and the decoded data is then sent to the Arduino microcontroller, which issues the proper commands on whether to turn on the motor module. If the engine is to be ignited, the Arduino commands the motor driver circuit to amplify the current to a level at which the motor gets enough power to ignite and run [15]. In the other module of the system, each vehicle's key carries a unique RFID tag, and the RFID module comes with an antenna working at 120 MHz, so the reader recognizes whether the person accessing the vehicle is the authorized user. If the rider is authorized, the Arduino board commands the motor driver circuit to turn ON the engine; if the rider is not an authorized user, the engine will not be ignited. Similarly, if the rider is not wearing a helmet, the engine will not be ignited. The system follows simple AND-gate logic: only when both requirements are met is the engine ignited (Fig. 10).
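The 4-bit code check can be illustrated with a small Arduino (C++) fragment. The data pins and the expected pattern are hypothetical choices for this sketch; in the actual design the HT12D also performs its address comparison in hardware before the data bits reach the microcontroller.

```cpp
// Hypothetical 4-bit helmet-code check; pins and pattern are illustrative.
const int DATA_PINS[4] = {A0, A1, A2, A3};  // decoder data outputs D0..D3
const int HELMET_CODE  = 0b1010;            // assumed "helmet worn" code

bool helmetCodeReceived() {
    int code = 0;
    for (int i = 0; i < 4; i++)             // assemble the received nibble
        code |= digitalRead(DATA_PINS[i]) << i;
    return code == HELMET_CODE;             // any other pattern is rejected
}

void setup() {
    for (int i = 0; i < 4; i++) pinMode(DATA_PINS[i], INPUT);
    pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
    // For demonstration, mirror the check result on the built-in LED.
    digitalWrite(LED_BUILTIN, helmetCodeReceived() ? HIGH : LOW);
}
```

A 4-bit code admits 16 possible patterns, so a naive intruder transmitting random codes matches only with probability 1/16 per attempt, which is the privacy benefit the design alludes to.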

Fig. 10 Flowchart

8 Result

This intelligent bike-and-helmet system is put forward to provide surety and security for bike riders, reducing accidents and bike theft. The proposed project is evaluated with an RF transmitter and receiver circuit along with RFID technology to reduce vehicle theft. An embedded methodology has been developed and implemented: on the helmet side, when the rider wears the helmet, information is transmitted to the receiver on the bike side, and through the RFID tag and reader only the authenticated person can start the vehicle; if either condition is violated, the bike engine will not start.

As can be seen, the LCD displays that the SMART helmet has been activated and is ready for use when the key is inserted (Fig. 11). Then, it asks for the RFID card to be scanned to verify whether the rider is authorized (Figs. 12 and 13).

Fig. 11 Smart helmet


Fig. 12 Swiping RFID tag on RFID reader

Fig. 13 RFID card is accepted

Fig. 14 Instruction to wear helmet

Then, the module checks whether the rider is wearing the helmet (Fig. 14). At last, the LCD displays that the engine is being ignited.

References

1. Korade, M.V., Guptha, M., Shaikh, A., Jare, S., Thakur, Y.: Smart helmet—a review paper. In: International Journal of Science and Engineering Development Research (IJSDR) (2018)
2. Jesudoss, A., Vybhavi, R., Anusha, B.: Design of smart helmet for accident avoidance. In: Proceedings of International Conference on Communication and Signal Processing (ICCSP) (2019)
3. Divyasudha, N., Arulmozhivarman, P., Rajkumar, E.R.: Analysis of smart helmets and designing an IoT based smart helmet: a cost effective solution for riders. In: Proceedings of International Conference on Innovations in Information and Communication Technology (ICIICT) (2019)
4. Paulchamy, B., Sundhararajan, C., Xavier, R., Ramkumar, A., Vigneshwar: Design of smart helmet and bike management system. In: Asian Journal of Applied Science and Technology (AJAST) (April–June 2018)
5. Saravana Kumar, K., Anjana, B.S., Thomas, L., Rahul, K.V.: Smart helmet. In: International Journal of Science, Engineering and Technology Research (IJSETR), vol. 5, issue 3 (March 2016)
6. Sethuramrao, Vishnupriya, S.M., Mirnalini, Y., Padmapriya, R.S.: The high security smart helmet using internet of things. In: International Journal of Pure and Applied Mathematics, vol. 119 (2018)
7. Vasudevan, U., BhavyaRukmini, A.G., Dhanalakshmi, S., Jana Priya, S., Balashivani Pal, S.: Smart helmet controlled vehicle. In: International Research Journal of Engineering and Technology (IRJET) (March 2019)
8. Vani, V., Mohana, M., Haritha Meenashi, S., Jeevitha, G., Keethika, A.: Third-eye two wheeler for accident detection with micro electro mechanical system (MEMS) enabled smart helmet. In: International Journal of Recent Technology and Engineering (IJRTE) (November 2019)
9. Venkata Rao, K., Moray, S.D., Shraddha, S.R., Vandhana, V.K.: IoT based smart helmet for accident detection. In: International Journal of Technical Research and Applications (March–April 2018)
10. Patil, A.B., Krishna, B.S., Shivkumar, S.H.V., Aryalekshmi, B.N.: Smart helmet using wearable technology. In: Proceedings of 13th International Conference on Recent Trends in Engineering Science and Management (April 2018)
11. Prashanna Rangan, R., Sangameshwaran, M., Poovedan, V., Pavaprnesh, G., Naveen, C.: Voice controlled smart helmet (2018)
12. Khandelwal, A., Jain, H., Jain, B., Vijay, H., Akshaykumar, Singh, S.K.: Smart helmet (IoT based). In: International Journal of Scientific & Engineering Research, vol. 10 (May 2019)
13. Bharadwaj, D., Kumar, R.R., Yadav, S.K., Lekha, C.: Development of smart helmet based on IoT technology. In: International Journal of Scientific Research and Development (November 2018)
14. Shravya, K., Mandapati, Y., Keerthi, D., Harika, K., Senapati, R.: Smart helmet for safe driving, vol. 87, no. 01023 (2019)
15. Tayag, M.I., De Vigal Capuno, M.E.A.: Smart motorcycle helmet: real-time crash detection with emergency notification and anti-theft system using internet of things cloud based technology. In: International Journal of Computer Science and Information Technology (June 2019)

Collision Avoidance in Vehicles Using Ultrasonic Sensor N. Bhuvaneswary, V. Jayapriya, V. Mounika, and S. Pravallika

Abstract This paper presents the design of a collision avoidance system for heavy vehicles using ultrasonic sensors. The project concerns a vehicle collision avoidance system for vehicles such as cars and trucks, and in particular this research work utilizes the ultrasonic sensor. Further, the proposed work applies electronic systems embedded in automobiles, mainly to save lives and reduce injuries when accidents occur, and primarily to reduce or avoid vehicle accident disasters. The project concentrates on developing a rear-end vehicle collision avoidance system model that contains an LCD display and measures the distance between two vehicles moving in the same lane and direction; when either is in danger, a microcontroller alerts the driver. Sensing of the object ahead and distance measurement are done using an ultrasonic sensor.

Keywords GSM module · Arduino · Sensor · Buzzer · Servo motor · LCD display

1 Introduction

Over the last 20 years, industry strategies for safety have become more advanced. Initially, discrete passive devices such as seatbelts, crush zones, knee bolsters, and airbags [1] were implemented to save lives and reduce injuries during an accident. Later, preventive measures such as improved headlights, visibility, windshield wipers, and tire traction were introduced to minimize the probability of getting into an accident. Now, the stage is one of actively avoiding accidents while providing maximum safety to vehicle occupants and even pedestrians; the systems under intense development include collision avoidance systems. This project concentrates on advanced ideas such as sensing before a crash [2]: sensing of an object in front is done by an ultrasonic sensor, which sends the signal to the receiver, and here, a microcontroller unit



acts as a receiver. The received signal triggers the braking unit to apply the brake automatically [3]. To design an ultrasonic-based anti-collision system for all vehicles, the main intent is to stabilize the vehicle and control effective braking. The system is applicable to all types of four-wheel vehicles [15], mainly helps during the pre-crash mode, and is an automatic vehicle collision-free system.

2 Existing Modules

This section describes earlier existing technologies, which involve sensors such as accelerometer sensors.

2.1 Central Acceleration Sensor

This technology, introduced by Bosch, is an automotive sensor technology based on a central acceleration sensor. Using this technology, newer approaches such as surface micro-mechanical technology were developed. The sensor is combined with a control unit containing the airbag: it provides signals to the vehicle so that the airbags can be opened, helps to determine the object's acceleration, and, when necessary, helps to fill the airbags [4].

2.2 Recognition of Vehicles Using a Wireless Sensor Network

This system gives an overall view of intelligent vehicle information for monitoring traffic. Images are captured by a visual sensor node and sent to the traffic controller for further processing. To identify hidden images, a symmetry detection algorithm is used [5].

2.3 Automobile Collision Avoidance System

In this system, obstacles or objects are detected using sonar sensors, which generate acoustic waves, accept the waves reflected back from the object or obstacle, and invoke avoidance algorithms. The main disadvantage of the system is its dependence on sensor purity.


2.4 An Obstacle Detection and Tracking System Using a 2D Laser Sensor

This approach uses a Kalman filter. When a critical disruption occurs, the filter is not powerful or profitable, i.e., while measuring the position of the object. When the laser beam is interrupted, it is not possible to receive signals from the sensor.

2.5 Intelligent Mechatronic Braking System

This work mainly focuses on safety. The system consists of an ultrasonic sensor, which measures the distance; when an object is detected, a signal is sent to the control unit to control the speed of the vehicle based on pulse detection.

3 Purpose

The design and implementation of an ultrasonic-based anti-collision system for vehicles are based on the following objectives:

• Vehicle control and stability with effective braking.
• Suitability for all types of four-wheel vehicles.
• An extended range for pre-crash mode, to provide enough time for the system to stabilize and control.
• A prototype for a fully automatic vehicle collision-free system.

4 Specific Objectives

To achieve the overall project objective, the following are the specific objectives:

• Design the obstacle sensor system.
• Develop an algorithm which incorporates sensor strengths and limits sensor shortcomings.
• Ensure the system responds in real time.
• Identify objects that are potential threats.


5 Project Justification

Surveys show a large number of deaths caused by road accidents. It is well known that the economic, physical, and psychological crises caused by vehicle disasters need to be taken seriously, and various research studies should be conducted to overcome this problem. This research attempts another solution to this known problem: a low-cost private anti-collision warning system model mounted on a current car that alerts the driver in risky or emergency zones [2, 3]. Therefore, setting aside the evolution of inbuilt active safety systems by car manufacturers, customers have another way to address the problem: we have developed a domestic active safety system model that can be fitted to road vehicles irrespective of their model and year of manufacture. This inspired the authors to share ideas for developing a model that finds another innovative solution to decrease the life-threatening menace of vehicle accidents [1].

6 Project Scope

The scope of the project is to design a prototype of a vehicle collision avoidance system [6], mainly to help save lives and minimize the probability of an accident occurring. Here, an ultrasonic sensor senses the obstacle and sends an alert to the driver, and the prototype is programmed to work in real time [2]. The project will be developed using a prototype toy car fitted with the designed systems.

7 Hardware Components

• Arduino Uno
• Ultrasonic sensor
• Servo motor
• LCD display
• Buzzer
• Connecting wires

7.1 Arduino Uno

This is a microcontroller board built around the ATmega328P. It contains 14 digital input/output pins and 6 analog input pins, works on a 16 MHz crystal oscillator, has a power jack, and contains a reset button. It contains all the components needed to support the microcontroller used for developing the system, with 4/8/16/32 K bytes of in-system programmable flash with read-while-write capabilities. The board is given a 5 V supply from a voltage regulator, and the program, with the code required for the design of the project, is uploaded through the USB jack (Fig. 1).

Fig. 1 Arduino Uno board

7.2 Ultrasonic Sensors

Ultrasonic sensors are electronic devices that emit acoustic waves beyond the audible range (20 Hz to 20,000 Hz) and measure the distance between the sensor and an object from the time between sending the signal and receiving the echo [7]. The vibrating device that transmits the ultrasonic pulses is a transducer, which is used in ultrasonic sensors [8]. The frequency is determined by the transducer: as the frequency increases, the sound waves travel a shorter distance, and vice versa [9] (Fig. 2).

Fig. 2 Ultrasonic sensor


Fig. 3 Servo motor

7.3 Servo Motor

A servo motor is a rotary actuator that allows precise control of angular position, velocity, and acceleration. It consists of a suitable motor coupled to a sensor that provides feedback. It also needs a moderately sophisticated controller, and servo motors are usually designed for a dedicated module. Servo motors are used in applications such as robotics, CNC machines, and electronics manufacturing (Fig. 3).

7.4 LCD Display

The LCD is mostly used to display text as directed by the program. A 2 × 16 LCD can display numbers, characters, and graphics, in contrast to LCDs that are restricted to numbers and characters. Because LCDs consume little power, they are compatible with low-power electronic circuits and can be powered for long durations. The LCD pins are connected to the Arduino pins to display the information passed to the microcontroller by the code (Fig. 4).

Fig. 4 LCD display


Fig. 5 Buzzer

7.5 Buzzer

A buzzer is an audio signaling device, which may be mechanical, electromechanical, or piezoelectric. Common uses of buzzers and beepers include alarm devices, timers, and confirmation of user input such as a mouse click or keystroke [10] (Fig. 5).

8 Methodology Design

In this project, we designed a collision avoidance warning system consisting of a hardware part and a software part. Composing the project circuit constitutes the hardware part, which is explained in detail below; programming constitutes the software part. The project consists of five circuits: power supply, obstacle sensor unit, warning system, central control module, and motor driver system. The block diagram shows the different units (Fig. 6).

8.1 Central Control Module

In the central control module, the main unit is the microcontroller; here, an ATmega328P is used. The external interrupt pin of the microcontroller receives the echo pulse from the transmitter, i.e., the sensing unit, and bit 1 of the port register is used to send the 10 µs trigger pulse. The external interrupt pin is activated on both rising and falling edge changes. The ATmega328P was chosen because of its low cost and easy availability in the market.
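The following sketch outlines the trigger-and-measure sequence described above; the `gpio` pin interface is a hypothetical stand-in for the microcontroller's registers, which the actual firmware accesses directly:

```python
import time

TRIGGER_PULSE_S = 10e-6   # 10 microsecond trigger pulse, per the text

def measure_echo_us(gpio):
    """Fire the trigger, then time how long the echo line stays high."""
    gpio.write("TRIG", 1)
    time.sleep(TRIGGER_PULSE_S)
    gpio.write("TRIG", 0)
    while gpio.read("ECHO") == 0:      # wait for the rising edge
        pass
    t0 = time.monotonic()
    while gpio.read("ECHO") == 1:      # high time is proportional to distance
        pass
    return (time.monotonic() - t0) * 1e6   # echo high time in microseconds
```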


Fig. 6 Central control module

8.2 Obstacle Sensing Unit

To build a successful obstacle sensing unit, the methodology depicted in the flow diagram was used [9] (Fig. 7).

8.3 Driver Circuit

The driver circuit combines:

• LCD display.
• Buzzer.

When an obstruction comes near the vehicle, the warning alarm system signals the driver with the distance between the vehicles (10–400 cm) [11].


Fig. 7 Block diagram of control system

8.4 Warning System

An LED and a buzzer are used in the warning system. The LED alerts the driver when the distance is larger and is driven by connecting it to an output pin of the ATmega328P through a 220 Ω resistor in series. The LED draws about 20 mA at normal brightness, and the voltage drop across the LED is about 2 V [12, 16]. When the port outputs logic one, the microcontroller pin voltage is 5 V, so a current-limiting resistor must be connected in series with the LED: 3 V must be dropped across the resistor in order to leave a 2 V drop across the LED. Presuming an LED current of 20 mA, the resistance is

R = (5 V − 2 V) / 20 mA = 0.15 kΩ


Fig. 8 Timing diagram

The lowest permissible resistor value is therefore 150 Ω. When the distance is low, the buzzer is used as the alarm; it is connected directly to an ATmega328P output pin. Whenever a vehicle comes near another vehicle, the headlights automatically flash to alert the driver of the other vehicle [13].
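The resistor sizing above can be checked with a one-line computation; a trivial sketch:

```python
def limiting_resistor(v_port=5.0, v_led=2.0, i_led=0.020):
    """Series resistor for the warning LED: drop (5 V - 2 V) at 20 mA."""
    return (v_port - v_led) / i_led

print(limiting_resistor())   # 150.0 ohms; a standard 220-ohm part is safely above it
```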

9 Working Principle

The sensor has four pins: VCC, ground, echo, and trigger. Ground and VCC are connected to the corresponding pins of the ATmega328P. The trigger pin is connected to pin 2 of PORT D, while the echo pin is connected to pin 3 of PORT D. The sensor works as follows.

• Send a short but sufficiently long pulse on the trigger pin, i.e., a 10 µs pulse (the module then automatically sends eight 40 kHz square-wave pulses).
• Wait for the echo line to go high and measure the length of time the pulse stays high.
• The length of the pulse is directly proportional to the distance [14].

The distance between the sensor and the object is calculated by measuring the high-level time of the echo pulse, obtained from the echo pin of the sensor module, using the following formula (Figs. 8 and 9):

distance in centimeters = echo pulse width in µs / 58
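As a sketch of this conversion and of the warning decision (the 10 cm and 100 cm thresholds below are illustrative assumptions; the text specifies only the 10–400 cm sensing range):

```python
def distance_cm(echo_high_us):
    # Conversion used in the text: distance [cm] = echo pulse width [us] / 58
    return echo_high_us / 58.0

def warning_level(d_cm):
    if d_cm < 10:
        return "buzzer"    # very close: audible alarm
    if d_cm < 100:
        return "led"       # approaching: visual warning
    return "none"

assert round(distance_cm(580)) == 10
assert warning_level(distance_cm(580)) == "led"
```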

10 Result

A rear-end anti-collision warning system is outlined and assembled, and a simple, clear replica is established to illustrate the method. The sensor used here is able to interpret short distances accurately; accuracy at longer range is limited by noise from the environment (Figs. 10, 11 and 12).

Fig. 9 Flowchart

Fig. 10 Circuit design

Fig. 11 LCD display

Fig. 12 Design implementation

11 Conclusion

The automation for safely controlling a vehicle is complete, but it needs help from artificial intelligence (AI). This work will be extended to incorporate neural network (NN) control for the objectives discussed. In order to cover long distances, a longer-range sensor is required, and more information is required to build up the features for use in vehicles. This prototype is a good resource and useful for illustrating anti-collision warning system research.


References

1. Giancarlo, A., Alberto, B., et al.: Vehicle and guard rail detection using radar and vision data fusion. IEEE Trans. Intell. Transp. Syst. (2007)
2. Luo, Z.: Research on automobile intelligent anti-collision system. In: 2011 Second International Conference on Mechanic Automation and Control Engineering (MACE) (2011)
3. Grover, C., Knight, I., Okoro, F., Simmons, I., Couper, G., Massie, P., Smith, B.: Automated emergency braking systems: technical requirements, costs and benefits. Published Project Report (2008)
4. Abd El-Kader, S.M., Fakher el Dee, M., Mohamed, M.: Intelligent vehicle recognition based on wireless sensor network. Electronics Research Institute, El-Bohothst, Dokki, Giza, Egypt
5. Lang, A., Jonnagadla, D., Hammond, A., Atahua, A.: Model Based Systems Engineering Design of an Automobile Collision Avoidance System (2011)
6. KAA open source IoT platform. Retrieved 10 Sept 2016
7. Sairam, G.V., Suresh, B., Sai Hemanth, C.H., Krishna Sai, K.: Intelligent mechatronic braking system. Int. J. Emerg. Technol. Adv. Eng. 3(4) (2013)
8. Shrivastava, K., Verma, A., Singh, S.P.: Distance measurement of an object or obstacle by ultrasound sensors using P89C51RD2. Int. J. Comput. Theor. Eng. 2(1) (2010)
9. Park, W.J., Kim, S.B.-S., Seo, D.-E., Kim, D.-S., Lee, K.-H.: Parking space detection using ultrasonic sensor in parking assistance system. In: Intelligent Vehicles Symposium, IEEE (2008)
10. Mahdi, M., Frisk, E., Aslund, J.: Real-time velocity planning for heavy duty truck with obstacle avoidance. In: IEEE Intelligent Vehicle Symposium (2017)
11. Gehrig, S.K., Stein, F.J.: Collision avoidance for vehicle-following systems. Intell. Transp. Syst. 8(2), 233–244 (2007)
12. Wei-Chung, H., Li-Chen, F., et al.: Vision based obstacle warning system for on-road driving. In: Proceedings, 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (2003)
13. Gonzalez, I., Catedra, F., Algar, M., Gonzalez, A., Somolinos, A., Romero, G., Moreno, J.: Analysis of collision avoidance systems for automobile applications. In: 2016 IEEE International Symposium on Antennas and Propagation (APSURSI). IEEE (2016)
14. Nishi, T., Yamazaki, N., Kokie, S., Kuno, T., Umzaki, T.: Collision avoidance system using laser beams (2009)
15. Dickmanns, E.D.: The development of machine vision for road vehicles in the last decade. In: Intelligent Vehicle Symposium. IEEE (2002)
16. Jihua, H., Han-Shue, T.: Design and implementation of a cooperative collision warning system. In: Intelligent Transportation Systems Conference, ITSC '06. IEEE (2006)

Privacy Challenges and Enhanced Protection in Blockchain Using Erasable Ledger Mechanism M. Mohideen AbdulKader and S. Ganesh Kumar

Abstract Blockchain is a decentralized network with a distributed ledger and is one of the emerging inventions of recent years. It is immutable, persistent, and peer to peer in nature, which makes it a more secure technology. Beyond cryptocurrencies, blockchain has many potential applications such as smart cities, the Internet of Things, and online polling systems. At the same time, the transparent ledger technology of the blockchain makes it more vulnerable to privacy and security issues; privacy leakage is a major limitation that slows down the growth of this technology. In this paper, privacy challenges and existing mechanisms to overcome privacy issues, such as zero-knowledge proof, coin mixing services, ring signatures, and commitment schemes, are surveyed. The limitations of these existing privacy mechanisms are analyzed and, based on the analyzed limitations and open issues, future research directions are proposed: combining blockchain with trusted computing and developing a content erasure mechanism can improve the current privacy situation. This idea of privacy improvement is proposed as a new direction of development in the blockchain domain.

Keywords Blockchain privacy · Cryptography · Blockchain security · Anonymity · Distributed ledger technology

1 Introduction

Blockchain is a decentralized network that acts as a digital ledger and a mechanism enabling the secure transfer of digital assets between nodes. Bitcoin is the first cryptocurrency built over a blockchain application; it was introduced in 2008 by Satoshi Nakamoto. Blockchain bitcoin is a peer-to-peer, decentralized electronic cash system which is open source and available to anyone who joins the blockchain


network [1]. Furthermore, blockchain technology is classified into three types—public, private, and consortium blockchain—based on the accessibility or priority given to user nodes. Beyond Trusted Third Party (TTP) nodes, the security of blockchain also depends on the encryption of data, consensus mechanisms, and time stamping. Smart contracts became possible through Ethereum, an open-source platform for building decentralized applications on the blockchain, and the applications of blockchain are not restricted to smart contracts: nowadays, blockchain technology is used to build a wide range of applications such as the Internet of Things (IoT), smart cities, supply chains, and smart contracts [2]. Blockchain is also classified as permissioned or permissionless. In a permissioned blockchain, only authorized nodes perform as consensus nodes, and authority to access the network is restricted to limited users, whereas in a permissionless blockchain, any node can easily join or leave the network. Since blockchain is a transparent ledger technology, all the information in the blockchain is open and available to every node in the existing network; due to this transparency, blockchain faces serious privacy and security risks. Blockchain is an emerging decentralized technology with which multiple applications can be implemented. Bitcoin, also known as a cryptocurrency, is an application implemented using blockchain; the invention of cryptocurrency brought a distributed way of transacting between nodes with no requirement for trusted third parties. The architecture below represents the working of a blockchain network. A blockchain is a network containing n nodes. If any node requests a transaction in the network, a block representing the requested transaction is created and shared with all the other nodes, and every node validates the transaction. All the nodes in the blockchain network perform a consensus mechanism to validate the transaction, for which rewards or incentives are received [3]. If the requested transaction is valid, it is included in the blockchain. All the blocks are connected by cryptographic hashes; hence, tampering with data is very difficult in a blockchain network. In this survey, we analyze the types of privacy risks faced by blockchain. The existing privacy enhancement mechanisms addressing these issues are surveyed, the limitations of the existing mechanisms are derived, and future directions for these privacy enhancement techniques are elaborated in detail.
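The tamper-evidence property described above can be illustrated with a minimal hash-chained ledger; this toy sketch omits consensus, signatures, and timestamps:

```python
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Tampering with any block breaks every later prev_hash link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["A pays E 5"])
append_block(chain, ["B pays C 2"])
assert verify(chain)
chain[0]["transactions"][0] = "A pays E 500"   # tamper with an old record
assert not verify(chain)
```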

2 Privacy Challenges in Blockchain

Blockchain faces two major privacy challenges: privacy in transactions and privacy in identity.


2.1 Transaction Privacy Challenge

Transactions that occur between blockchain nodes are stored as records in the transparent ledger, which is visible to all nodes in the network. These records contain all the transaction-related confidential information, such as the transaction amount and the sender and receiver details. There should be an authentication mechanism to verify the authenticity of the transactions that happen in the network. Hence, to enhance the security and privacy of data in blockchain, sensitive information needs to be kept highly confidential and tamper proof. Transaction-related information can be secured in a blockchain network with the help of privacy-preserving mechanisms; by incorporating such protocols, sensitive data can be secured in the blockchain [4]. For example, consider n transactions happening in a blockchain network that contains n nodes. Since the blockchain is a distributed ledger, all the transactions happening inside the network are stored in the common ledger. This distributed ledger, being transparent in nature, makes the transaction data visible to all the nodes existing in the network, and this transparency causes the challenge of transaction privacy leakage. By continuously monitoring the transaction analysis graph, and with the aid of techniques such as quantum computing, an attacker can extract the transaction details between any two nodes. Hence, sensitive information related to transactions is prone to leakage.

2.2 Identity Privacy Challenge

Users and nodes in the blockchain are given anonymous addresses. Even then, the relationship between the user identity and the assigned blockchain address can be linked by an attacker [5]. Due to the transparent nature of the blockchain, an attacker can extract the real address or identity of the user. For example, if multiple transactions exist between any two particular nodes of the blockchain, the attacker can use the transaction relationship graph to find out sensitive information about the user's real identity and transaction data [6]. Privacy protection is ensured in a blockchain network when the following conditions are met.

• Transaction linkability needs to be eradicated, which improves privacy: the relationship or link between two transactions should not be made visible in the public ledger, or it should be kept undiscoverable.
• The transaction data or transaction information should be known only to the participants of that particular transaction and should be kept hidden from all other participants.


3 Existing Privacy-Preserving Mechanisms for Blockchain

3.1 Coin Mixing Services

Chaum proposed the coin mixing mechanism, one of the privacy enhancement techniques in blockchain. Coin mixing services are mainly used in blockchain bitcoin transactions. Bitcoin transactions resemble normal transactions between two bank accounts, with slight modifications. Consider a normal transaction in which an account holder (the sender) transfers a particular sum to another account holder (the receiver): in normal transactions, there exists a central server through which all transactions are operated, and a failure in the server leads to transaction failure. In bitcoin transactions, coin mixing services add an intermediary node between the sender and receiver nodes, which makes it difficult for attackers to analyze the communication between these nodes. Anonymity is improved by the intermediate node, which acts as a middleman transferring transaction information between the sender node and the receiver node. In this mechanism, the communication or connection between the sender and receiver is kept hidden, so that an attacker cannot analyze the correlation between the addresses of the sender and receiver [7]. Coin mixing mechanisms can be implemented using a third-party node acting as an intermediary or using certain protocols. Figure 1 represents coin mixing service between two parties [7].

(A) Coin Mixing with a Central Node

In this method, there exists a central node which acts as a middleman and provides the mixing service, together with n sender and receiver nodes. Consider from the figure below that user A transfers some funds to user E. Initially, user A transfers the funds to the central node (the mixer), and this central node collects all the fund transfers from the different nodes. After a certain amount of time, it makes the transfers to the respective receiver nodes. This method hides the communication between the sender and receiver nodes.

Fig. 1 Blockchain architecture


Fig. 2 Coin mixing services. Source Ron et al. [5]

transparency of communication between the sender and receiver nodes. Figure 2 represents coin mixing service with the use of a central node [8]. (B)

Coin Mixing Without a Central Node

Another method of coin mixing where no central node is involved is known as decentralized coin mixing mechanism. In this mechanism, there is no third-party node. As it do not have third party, there will not be any extra fee coin mixing service. Mixing services happen with multiple one-to -one node, and it ensures transaction privacy and makes difficult for attackers to find out the transaction links between the nodes. Figure 3 represents coin mixing service without the use of a central node [8]. Limitations of coin mixing services are • N number of users need to participate at a given time, and if one user fails to mix coin, then it may lead to denial of service attack. • As it involves third-party nodes, there is a possibility that the TTP node may reject coin mixing service for specific users or addresses.
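A toy simulation of the central-node variant may help fix the idea; real mixers add fees, delays, and denomination handling, all omitted here:

```python
import random

def central_mix(requests):
    """`requests` is a list of (sender, receiver) pairs for equal-value coins.
    The mixer receives all coins first, then pays the receivers in random
    order, so the public record shows only sender->mixer and mixer->receiver
    transfers, with no pairing between them."""
    inbound = [(sender, "mixer") for sender, _ in requests]
    outbound = [("mixer", receiver) for _, receiver in requests]
    random.shuffle(outbound)   # break the ordering link between in and out
    return inbound + outbound

for tx in central_mix([("A", "E"), ("B", "F"), ("C", "G")]):
    print(tx)
```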

3.2 Zero-Knowledge Proof Mechanism

Goldwasser proposed the zero-knowledge proof privacy enhancement mechanism for blockchain technology. In this mechanism, the sender node has to prove a statement or message to the verifier node in order to make a transaction, and while proving the statement, the sender node does not reveal any sensitive information [10]. If the sender node proves the


Fig. 3 Centralized coin mixing. Source Dong et al. [9]

statement or proof, it can make a transaction that will be added to the blockchain. Once the proof is verified, transactions can be operated smoothly, and other nodes in the network cannot easily extract the transaction information between the sender and receiver. The protocol involves two or more parties which undergo a sequence of steps to complete a given task. Zero-knowledge proof mechanisms can be divided into interactive ZKPs and non-interactive ZKPs. In both, the sender has to prove a statement to be true to the verifier; but in zk-SNARKs there is no two-way communication between the sender and the receiver [11]. Zero-knowledge proofs have three major properties: completeness, reliability, and zero knowledge.

1. Completeness: If the statement is true, the sender can convince the verifier, so that it can make transactions without interruption.
2. Reliability: Without proving the statement, the sender cannot make the receiver accept the messages or transactions.
3. Zero knowledge: If the sender proves the statement to the verifier, the verifier obtains only the message sent from the sender and cannot extract any additional content.
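As an illustration of an interactive zero-knowledge proof, the classic Schnorr identification protocol can be sketched in a few lines; the tiny group parameters below are chosen for readability only, and production systems use large groups or zk-SNARK toolchains:

```python
import secrets

# Toy group: g = 4 generates the order-11 subgroup of integers mod p = 23.
p, q, g = 23, 11, 4
x = 7                      # prover's secret
y = pow(g, x, p)           # public statement: "I know log_g(y)"

k = secrets.randbelow(q)   # prover's random nonce
t = pow(g, k, p)           # commitment sent to the verifier
c = secrets.randbelow(q)   # verifier's random challenge
s = (k + c * x) % q        # response; reveals nothing about x on its own

assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check
```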

Limitations of zero-knowledge proof

Even though zero-knowledge proofs deliver strong anonymity, they have certain limitations:

• Computing the proof each time has a high cost.
• Storage space usage is also high for computing and storing the proofs.


3.3 Ring Signature

This is a kind of digital signature proposed by Rivest et al. [12]. In this signature technique, public keys are used to secure the data of the sender node. A group of nodes forms a ring, and transactions happen within the ring nodes; the nodes in the ring perform the transactions that are added as blocks to the blockchain. In this mechanism, there exists no third-party or central node. When a transaction is requested, the sender node signs the transaction using its own private key and all the other nodes' public keys. In the verification process, the verifier node can check the signature of the transaction and identify whether the signer exists among the ring nodes; it can only establish that the signature belongs to some ring node, but it cannot identify the exact node that made the signature. In verification, the output of the current computation is given as the input of the next computation. From the diagram below, consider that x is the output of the current computation given as input y of the following computation; at the end of all the computations, if x and y are equivalent (the ring closes), the signature is correct. The ring signature scheme delivers complete anonymity of the user information present in the blockchain network and provides efficient privacy protection, but when the number of participating nodes increases, the mechanism becomes error prone and the computations become hard (Fig. 4).

Limitations of ring signature

• Scalability is poor.
• The cost of computation is high.

Fig. 4 Ring signature. Source Rivest et al. [12]


Fig. 5 Homomorphic encryption. Source Dong et al. [9]

3.4 Homomorphic Encryption

The homomorphic encryption mechanism permits calculations to be performed on encrypted data: without having access to the secret or decoding key, it is possible to perform computations on the ciphertext. This mechanism was proposed by Rivest et al. in 1978 [13]. In Fig. 5, plaintext x is encrypted, giving ciphertext f(x); computations can be performed on the ciphertext f(x), generating f(y), and decryption is then performed on f(y) to produce the corresponding plaintext y. Computing operations can thus be done without knowing the original data, yet the same result is obtained as if they had been performed on the plaintext. The homomorphic mechanism provides enhanced privacy in the area of blockchain: the receiver gets the final message, whereas every other piece of information is hidden by the sender. The security of the information can be drastically improved with the use of homomorphic encryption [14].

Limitations of homomorphic encryption are:

• It consumes a lot of computing time and memory, so it is not very suitable for large-scale applications.
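A compact sketch of additively homomorphic (Paillier) encryption illustrates the property; the toy primes are far too small for real use (Python 3.8+ for the modular inverse via pow):

```python
import math, random

def keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return n, (lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2     # (1+n)^m * r^n mod n^2

def decrypt(n, priv, c):
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n        # L(c^lam mod n^2) * mu

n, priv = keygen(1789, 1861)   # toy primes; real keys need ~2048-bit moduli
c1, c2 = encrypt(n, 20), encrypt(n, 22)
assert decrypt(n, priv, (c1 * c2) % (n * n)) == 42     # Enc(20)*Enc(22) -> 20+22
```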

3.5 Hidden Address

The correlation between the input and output addresses can be broken using the hidden address mechanism proposed by Todd [15]. It is widely used in digital currencies with multiple addresses, so that tracing the exact address becomes difficult for an attacker. The sender initiates the transaction using the public key of the receiver and then calculates an intermediate address—a temporary address computed using an elliptic curve cryptographic algorithm. The coins are first transferred to this intermediate address, and the receiver finds the transaction based on its own public key. It is difficult for an attacker to determine which user an intermediate address belongs to. This mechanism improves the security and privacy of blockchain transactions. For example, when a transaction is requested or initiated, the sender does not directly transfer the asset to the receiver: a fresh intermediate address is derived randomly each time a transaction is made, the sender transfers the asset to this intermediate address, and the receiver then claims the transaction from it. In this way, a new random intermediary is used for every transaction, which improves the anonymity and security of the blockchain network.

Limitations of the hidden address mechanism are:

• It does not provide strong anonymity, and the sender is not anonymous.
• A transaction graph can be utilized to analyze the connection from the sender to the intermediate address and on to the receiver.
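To make the one-time address derivation concrete, here is a sketch using classic Diffie–Hellman over a prime field; Todd's scheme uses elliptic-curve DH, but the algebra is analogous:

```python
import hashlib, secrets

p = 2**127 - 1     # a Mersenne prime; illustration only, not a hardened group
g = 3

def one_time_address(shared_secret):
    return hashlib.sha256(str(shared_secret).encode()).hexdigest()[:16]

b = secrets.randbelow(p - 2) + 1   # receiver's long-term private key
B = pow(g, b, p)                   # receiver's published public key

# Sender: a fresh ephemeral key pair per payment.
r = secrets.randbelow(p - 2) + 1
R = pow(g, r, p)                   # published alongside the transaction
pay_to = one_time_address(pow(B, r, p))   # funds go to H(B^r), unlinkable to B

# Receiver scans the chain: B^r = g^(b*r) = R^b, so it recognizes its payment.
assert one_time_address(pow(R, b, p)) == pay_to
```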

3.6 Trusted Execution Environment

A TEE provides a confidential space for sensitive data and offers a good solution to complex cryptographic problems [16]. It isolates the software running on the hardware and provides a space to store sensitive data; the use of isolated space makes it difficult for an attacker to gain access to the sensitive data. One such TEE is Intel SGX (Software Guard Extensions), which provides instructions for users to create an enclave [17]. The data inside the enclave is highly secured and cannot be tampered with by any outside factor. It reduces the usage of complex cryptographic algorithms and delivers strong anonymity between the sender and the receiver.

Limitation of TEE:

• The cost of TEE hardware is high.

3.7 Secure Multi-party Computation

The SMPC approach distributes the computation across multiple nodes to ensure privacy and security. SMPC has major characteristics such as decentralization, computing correctness, and privacy enhancement [18]. Smart contracts use secure multi-party computation, as they involve multiple parties making a contract between them. SMPC does not require any trusted third-party node to secure its data: the data is broken up and distributed across multiple parties, which makes it more secure.

Limitations of SMPC are:

• Computation overhead is a major disadvantage of SMPC.
• Computation costs between the involved parties are high.
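A minimal sketch of additive secret sharing, the building block behind many SMPC protocols, shows how a joint sum can be computed while individual inputs stay hidden:

```python
import random

M = 2**61 - 1    # Mersenne-prime modulus for the share arithmetic

def split(secret, n_parties):
    """Additive secret sharing: shares look random but sum to the secret mod M."""
    shares = [random.randrange(M) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % M)
    return shares

inputs = [30, 50, 20]                          # each party's private value
all_shares = [split(v, 3) for v in inputs]     # party j receives column j
col_totals = [sum(col) % M for col in zip(*all_shares)]   # computed locally
assert sum(col_totals) % M == sum(inputs)      # joint sum revealed, inputs hidden
```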


4 Applications of Existing Privacy Mechanisms The following are some of the applications that are currently available based on existing privacy mechanisms.

4.1 Mixcoin

It uses a signature-based accountability mechanism that exposes coin theft. If the mixer node performs a malicious operation, the reputation of that particular mixer node is completely destroyed. It also uses a blind signature scheme, in which messages are blinded before they are signed [19].

4.2 Blindcoin

It combines a blind signature scheme with a public log and makes the mixing process fully accountable [20]. It also provides evidence against mixers which try to misbehave [21], and it ensures that the third-party mixer node cannot disclose the transaction relationship information to other nodes.

4.3 Dash

It uses a coin mixing strategy with a master node which hides the flow of funds. This master node can be prevented from cheating with the help of the chain mixing mechanism, in which users can choose multiple master nodes randomly [22].

4.4 CoinShuffle

It uses a group communication protocol which hides the users' identities completely. It also introduces a blaming mechanism which identifies and eliminates malicious mixer nodes, so coin theft is detected immediately. However, CoinShuffle needs all users to be online at the same time, failing which a Denial of Service (DoS) attack may result [23].


4.5 TumbleBit

It uses an off-chain anonymous payment scheme for privacy protection of off-chain payments. It allows users to establish a payment channel using an untrusted third-party platform called the Tumbler. By using an untrusted third party, anonymity is achieved, and users can perform transactions even without a direct channel [24].

4.6 Zerocoin

It acts as an extension to the existing bitcoin protocol. Zerocoin is a kind of cryptocurrency or digital currency, developed as one of the applications of bitcoin. It allows anonymous payments between two parties, and its users can send, split, or merge their coins with other users. It is described as a distributed e-cash system available in the blockchain bitcoin application. Zerocoin does not have any trusted third-party nodes; it uses cryptographic techniques to break the links between bitcoin transactions, providing stronger user anonymity and security [25].

4.7 Zerocash

It delivers a privacy-preserving version of bitcoin transactions. All bitcoin transactions are recorded and stored in the public ledger, which can be viewed by anyone on the blockchain network; in Zerocash, transactions do not contain any public information such as the origin or destination of the payment [26]. It uses cryptographic techniques which make the bitcoin transactions untraceable. Zerocash is an upgraded version of Zerocoin in which most of the transaction information is hidden and completely secured; it is also described as a decentralized anonymous payment system.

4.8 CryptoNote

It is a ring-based privacy preservation protocol on blockchain, in which a user can sign only one transaction with a single private key. It uses a one-time payment method which ensures the sender's and receiver's privacy [27].


5 Summary of Existing Blockchain Privacy Mechanisms

Even though the existing mechanisms achieve privacy to an extent, they fall short for large-scale implementations. Performance and efficiency are low, computation overhead is very high, and the existing mechanisms also have storage overhead issues. Overall, these mechanisms are not perfectly suitable for building large-scale blockchain applications. Hence, in the upcoming sections, the limitations of the existing privacy mechanisms are discussed in detail. In addition, an improved privacy enhancement mechanism combining blockchain with trusted computing is proposed, which may overcome the limitations of the current methods (Table 1).

Table 1 Literature table on existing privacy mechanisms and their achievements

S. no. | Privacy mechanism | Proposed year | Applications | Achievements
1 | Coin mixing services | 1981 | Mixcoin, Blindcoin, Dash, CoinShuffle, TumbleBit | Security is improved with low mixing time
2 | Zero-knowledge proof | 1980 | Zerocoin, Zerocash | Hides transaction details and resists transaction graph analysis
3 | Homomorphic encryption | 1978 | Confidential Transaction, Paillier encryption | Strongly resists transaction graph analysis
4 | Ring signature | 2001 | CryptoNote, Monero | Internal unlinkability is achieved
5 | Hidden address | 2014 | CryptoNote, Monero | Recipient anonymity is achieved
6 | Commitment scheme | 1991 | Monero, RingCT, Confidential Transaction | Anonymous transactions can be processed
7 | Secure multi-party computation | 2014 | Millionaire problem | Confidential input messages can be given
8 | Trusted execution environment | 2010 | IntelSGX | Ensures data integrity and security


6 Discussion on Limitations of Existing Privacy Mechanisms

Many achievements have been made on privacy in blockchain networks with the implementation of the above-mentioned mechanisms. Even so, privacy on blockchain networks is not completely secured, and many improvements still need to be addressed. Privacy protection and trust mechanisms need to be incorporated into the blockchain network; combining blockchain with trusted computing and the use of trusted execution environments may enhance privacy considerably. Existing mechanisms are prone to Denial of Service (DoS) attacks, and a high handling fee is charged for the mixing services performed by the mixer nodes. Mechanisms which rely on a trusted third party still carry a huge risk of data theft, and computation and storage costs are very high in the existing privacy mechanisms. These limitations make blockchain technology not efficient enough to implement for large-scale applications. Hence, a new mechanism is proposed below.

7 Proposed Method for Efficient Privacy Enhancement

Even though there have been many achievements in blockchain privacy, it still needs improvement to be adapted for large-scale applications. Enhanced privacy protection can be implemented on blockchain by combining it with trusted computing mechanisms [28]. An erasable ledger can be developed so that sensitive transaction information can be removed from the public ledger and maintained off-chain. This off-chain information needs to be secured, for which Intel SGX (a Trusted Execution Environment) can be used. Transaction information is hidden from the open ledger, whereas hash pointers of the transactions are kept on the ledger; each hash pointer points to the original file that contains the transaction details. These files still need to be highly secured at the respective nodes, and they are secured by concealing them inside an enclave with the help of Intel SGX. This method can drastically improve privacy in the blockchain.
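The proposed erasable-ledger idea can be sketched as follows; the enclave sealing itself is elided, and the off-chain dictionary stands in for the SGX-protected storage:

```python
import hashlib

off_chain = {}     # node-local store; in the proposal this sits inside an SGX enclave
ledger = []        # public chain keeps only hash pointers

def record(tx_bytes):
    ptr = hashlib.sha256(tx_bytes).hexdigest()
    off_chain[ptr] = tx_bytes      # sensitive payload stays off-chain
    ledger.append(ptr)             # only the pointer is published

def erase(ptr):
    off_chain.pop(ptr, None)       # drop the payload; the on-chain pointer
                                   # (and hence chain integrity) survives

def fetch(ptr):
    data = off_chain.get(ptr)
    if data is not None:           # integrity check against the on-chain pointer
        assert hashlib.sha256(data).hexdigest() == ptr
    return data

record(b"A pays E 5")
erase(ledger[0])                   # transaction content is now erased
assert fetch(ledger[0]) is None and len(ledger) == 1
```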

8 Future Research Directions

Existing privacy mechanisms need considerable improvement to accommodate upcoming, highly efficient blockchain applications. Producing a highly efficient privacy-preserving solution remains an unsolved issue. The decentralized and transparent nature of blockchain has a huge impact on privacy preservation, which needs to be considered seriously. Accountability in a centralized application is easy, but achieving accountability while preserving privacy is a challenging task. Privacy preservation in decentralized nodes with accountability would be one of the future


research directions that need to be worked on. Combining blockchain with trusted execution environments while overcoming the storage issue will be another research direction which may bring effective results on privacy preservation.

9 Conclusion

Blockchain is an emerging technology with many advantages, currently implemented in many applications such as supply chain, smart contracts, and identity preservation. However, the decentralized and transparent nature of blockchain makes it hard to ensure security, which makes privacy preservation a significant research topic. In this paper, we analyzed the user identity and transaction privacy challenges of blockchain technology and surveyed the existing mechanisms for privacy preservation, with which many privacy issues have been addressed. With the existing privacy mechanisms, security is improved with low mixing time, and several of them strongly resist the transaction graph analysis used by attackers to extract sensitive information. Even then, these mechanisms fail drastically for large-scale applications due to high computation and storage costs. Hence, we proposed a mechanism combining blockchain with trusted computing; the incorporation of Intel SGX may improve privacy as proposed. Future research directions in the area of blockchain privacy were also discussed in detail.

References

1. Rajput, U., Abbas, F., Hussain, R., Eun, H., Oh, H.: A simple yet efficient approach to combat transaction malleability in bitcoin. In: Proceedings of International Workshop on Information Security Applications, pp. 27–37. Springer, Cham, Switzerland (2015)
2. Sivaganesan, D.: Smart contract based industrial data preservation on block chain. J. Ubiquit. Comput. Commun. Technol. (UCCT) 2(01), 39–47 (2020)
3. Wang, H.: IoT based clinical sensor data management and transfer using blockchain technology. J. ISMAC 2(03), 154–159 (2020)
4. Liu, Z., Wang, D., Wang, B.: Privacy preserving technology in blockchain. Comput. Eng. Des. 40(6), 1567–1573 (2019). https://doi.org/10.16208/j.issn1000-7024.2019.06.012
5. Ron, D., Shamir, A.: Quantitative analysis of the full bitcoin transaction graph. In: Proceedings of International Conference on Financial Cryptography and Data Security, vol. 7859, Nov 2013, pp. 6–24. https://doi.org/10.1007/978-3-642-39884-1_2
6. Fleder, M., Kester, M.S., Pillai, S.: Bitcoin Transaction Graph Analysis, Feb 2015. arXiv:1502.01657 [Online]. Available: https://arxiv.org/abs/1502.01657
7. Chaum, D.L.: Untraceable electronic mail, return addresses, and digital pseudonyms. Commun. ACM 24(2), 84–90 (1981). https://doi.org/10.1145/358549.358563
8. Li, X., Niu, Y., Wei, L., Zhang, C., Yu, N.: Overview on privacy protection in bitcoin. J. Cryptol. Res. 6(2), 133–149 (2019). https://doi.org/10.13868/j.cnki.jcr.000290
9. Dong, G., Chen, Y., Fan, J., Hao, Y., Li, F.: Research on privacy protection strategies in blockchain application. Comput. Sci. 46(5), 29–35 (2019). https://doi.org/10.11896/j.issn.1002-137X.2019.05.004


10. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof systems. SIAM J. Comput. 18(1), 186–208 (1989). https://doi.org/10.1137/0218012
11. Hu, S., Cai, C., Wang, Q., Wang, C., Luo, X., Ren, K.: Searching an encrypted cloud meets blockchain: a decentralized, reliable and fair realization. In: Proceedings of IEEE Conference on Computer Communications (INFOCOM), Honolulu, HI, USA, Apr 2018, pp. 792–800. https://doi.org/10.1109/INFOCOM.2018.8485890
12. Rivest, R.L., Shamir, A., Tauman, Y.: How to leak a secret. In: Proceedings of 7th International Conference on the Theory and Application of Cryptology and Information Security, pp. 552–565 (2001)
13. Rivest, R.L., Adleman, L., Dertouzos, M.L.: On data banks and privacy homomorphisms. Found. Secure Comput. 4(11), 169–180 (1978)
14. Tso, R., Liu, Z.-Y., Hsiao, J.-H.: Distributed E-voting and E-bidding systems based on smart contract. Electronics 8(4), 422 (2019). https://doi.org/10.3390/electronics8040422
15. Todd, P.: Stealth Addresses. Accessed: 6 Jan 2014 [Online]. Available: https://lists.linuxfoundation.org/pipermail/bitcoin-dev2014-January/004020.html
16. Zhenyu, N., Fengwei, Z., Weisong, S.: A study of using TEE on edge computing. J. Comput. Res. Develop. 56(7), 1441–1453 (2019)
17. Wang, J., Fan, C.-Y., Cheng, Y.-Q., Zhao, B., Wei, T., Yan, F., Zhang, H.-G., Ma, J.: Analysis and research on SGX technology. J. Softw. 29(9), 2778–2798 (2018). https://doi.org/10.13328/j.cnki.jos.005594
18. Wang, T.: A review of the study of secure multi-party computation. Cyberspace Secur. 5(5), 41–44 (2014)
19. Bonneau, J., Narayanan, A., Miller, A., Clark, J., Kroll, A., Felten, E.W.: Mixcoin: anonymity for bitcoin with accountable mixes. In: International Conference on Financial Cryptography and Data Security, pp. 486–504. Springer (2014)
20. Valenta, L., Rowan, B.: Blindcoin: blinded, accountable mixes for bitcoin. In: Proceedings of International Conference on Financial Cryptography and Data Security, pp. 112–126. Springer, Berlin, Germany (2015)
21. Valenta, L., Rowan, B.: Blindcoin: blinded, accountable mixes for bitcoin. In: International Conference on Financial Cryptography and Data Security, pp. 112–126. Springer (2015)
22. Dash is Digital Cash. https://www.dash.org//
23. Ruffing, T., Moreno-Sanchez, P., Kate, A.: CoinShuffle: practical decentralized coin mixing for bitcoin. In: European Symposium on Research in Computer Security, pp. 345–364 (2014)
24. Heilman, E., Alshenibr, L., Baldimisti, F., Scafuro, A., Goldberg, S.: TumbleBit: an untrusted bitcoin-compatible anonymous payment hub. In: Proceedings of NDSS, pp. 1–15 (2017)
25. Miers, I., Garman, C., Green, M., Rubin, A.D.: Zerocoin: anonymous distributed E-cash from bitcoin. In: IEEE Symposium on Security and Privacy, pp. 397–411 (2013)
26. Sasson, E.B., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., Virza, M.: Zerocash: decentralized anonymous payments from bitcoin. In: IEEE Symposium on Security and Privacy, pp. 459–474 (2014)
27. Van Saberhagen, N.: CryptoNote v2.0. https://static.coinpaprika.com/storage/cdn/whitepapers/1611.pdf (2013)
28. Zhang, H.G.: Research and development of trusted computing in China. In: Proceedings of 3rd Asia Pacific Trusted Infrastructure Technologies Conference (APTC), pp. 1–3. IEEE Computer Society, New York, NY, USA (2008)

Data Privacy and Security Issues in HR Analytics: Challenges and the Road Ahead Shweta Jha

Abstract Large companies across the world have adopted HR analytics in a big way in recent times. As the personal details of a large number of employees are used in HR analytics, concerns regarding protecting their privacy and preventing the pilferage of their sensitive personal information have grown significantly. Based on a systematic review of the extant literature, this paper examines the privacy and data pilferage concerns of people in the context of HR analytics and the adequacy of the concomitant responses of governments as well as corporate houses to contain the matter. Further, this paper provides a roadmap for resolving the privacy concerns of the people whose personal data are being used for the purpose of HR analytics.

Keywords HR analytics · Data · Data breach · Privacy · GDPR

1 Introduction

Privacy and security are supposed to be inalienable entitlements of human beings, sanctioned by the United Nations Organization (UNO) in 1948. According to the Universal Declaration of Human Rights adopted by the UNO, no person shall be subjected to arbitrary interference with his or her privacy. In the context of sharing personal data, privacy entails the ability of individuals to control the nature, quality, and scope of personal information that can be known to others, including the state, employers, banks, tax authorities, and companies offering products or services to them. Generally, employers have access to more personal information about their employees than any other agency. Thus, individuals working in any organization run by state, private, or non-profit entities are at risk of pilferage of their highly sensitive personal data, which may cause severe damage to them without their knowledge. HR departments of various organizations are the custodians of all the personal data of individuals working in their respective organizations. These days, personal data are being used profusely by HR analytics teams both



within and outside the organizations, thus augmenting the risks of data breach. As a consequence, stakeholders have often been concerned about the personal, health, and compensation and benefits related data used in HR analytics in recent times. Several cases of data breach have already been reported in the media globally, drawing the attention of policy-makers and the international community. The concerns are genuine and valid, as reflected in the steps taken by international bodies and the governments of various countries to protect the privacy of people across the world. Given the sensitive nature of HR analytics data, privacy concerns become all the more important for the stakeholders. The European Union has tried to ensure data privacy by launching the General Data Protection Regulation (GDPR) in 2018. The GDPR has had a ripple effect across the world: a number of European companies operating outside the continent have taken GDPR norms to other countries via their subsidiaries, while companies of various countries dealing with any European company have also adopted the GDPR norms in order to continue their business operations seamlessly. In fact, GDPR norms have motivated the governments of a number of countries to enact privacy laws on similar lines. For example, Singapore has enacted a Personal Data Protection Act to safeguard the interests of all its citizens irrespective of the geographical location of the company collecting personal details of people of the country. Personal information of Canadian citizens is protected by the Personal Information Protection and Electronic Documents Act, different states in the USA have also enacted laws for protecting the personal information of their people, and the Indian government is in the process of enacting a Personal Data Protection Bill [1]. In India, the Information Technology Act, 2000 and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 are already in place. It has been observed that a data breach may result in physical, material, or non-material damage, discrimination, identity theft or fraud, financial loss, damage to reputation, loss of confidentiality of personal data, and socio-economic disadvantage [2]. Further, knowledge about ethnicity, religious beliefs, sexual orientation, health conditions, etc., can be used against employees or potential employees while processing data for HR analytics. Indeed, the use of personal data augments the vulnerability of individuals who have no control over the information about their own being. Although the process of HR analytics swears by neutrality and objectivity in collecting, processing, and interpreting personal data, it is almost impossible to ensure fairness and protect the larger interests of the employees. Digitization of all employee-related data has made it easier for crooks to jeopardise employees' interests by misusing or misinterpreting their personal details [3].

2 Related Work

HR analytics data is a core asset of any organization considering its importance in contemporary times and hence necessitates conscientious data management [4].


HR analytics teams are expected to honour the data privacy laws of the countries where they operate. Any violation of privacy laws while using data for HR analytics may have serious consequences both for individual employees and their respective organizations. Table 1 provides a short view of data types and the concomitant levels of risk. GDPR provides for severe punishment for data breach, to the tune of 20 million Euros or 4% of the annual global turnover of the company, whichever is greater [4]. However, the flipside of GDPR is that it is applicable only to European companies operating anywhere in the world. A number of companies outside the purview of the European Union have started adopting the GDPR norms if they have to deal with European companies.

Data pilferage risks emanate from both internal weaknesses and external threats. A study conducted in 2018 indicated that 55% of enterprise network security decision-makers had encountered at least one data breach in the previous year, out of which 44% of the data breaches were due to inadvertent exposure of critical HR data to hackers, caused intentionally or unintentionally by the employees handling personal information [6]. Hence, it is important that internal risks are periodically assessed so that access to HR analytics data can be balanced with adequate security measures. Also, it is important that HR teams understand the value of personal data and make concerted efforts to prevent all sorts of pilferage in the interests of their colleagues as well as the companies where they work. On several occasions, HR teams have been targeted by hackers to steal critical personal information of employees [7]. External threats to privacy include phony chatbots, spear phishing and mobile malware, which can be contained by educating all the employees who would otherwise be tricked into revealing their personal details to unscrupulous hackers who deceitfully come across as authentic seekers of such information. While the organizational focus is on preventing internal misuse of personal data of employees, companies often ignore the external threats posed by hackers, which can result in disastrous collateral damage [7]. Vulnerability of HR analytics data also increases because of the involvement of third-party entities in hosting, managing and processing the personal details of the employees of various organizations. Furthermore, HR systems are often built by external agencies that focus on operational ease rather than data security.

Caution in managing data for the purpose of HR analytics is critical. Even an inadvertent and unintentional data breach can cause serious damage to the employees. For example, the ineptitude of a Boeing employee led to a serious data breach in the company in 2016, putting critical information, e.g. names, ID numbers, accounting codes, dates of birth and social security numbers, of about 36,000 employees at risk [4, 8]. In an earlier case from 2014, a former employee had leaked sensitive information of about 100,000 employees of Morrisons, the fourth largest chain of supermarkets in the UK, even though the company had followed the industry norms regarding prevention of data breach [9]. The casual attitude of employees with regard to protection of critical personal information has affected not only their colleagues but also the clients of their respective organizations. For example, the Equifax data breach exposed personal information such as social security numbers, names, dates of birth and credit card details of thousands of customers across the UK, the USA and Canada [10]. Likewise, personal details of about 44,000 customers of the Federal Deposit Insurance Corp (FDIC) were exposed inadvertently by a departing employee [11].

Table 1 Data types and levels of risk

Level of risk | Data type | Definition of data type
Highest | Personal data | Information which identifies an individual directly or indirectly
Higher | Pseudonymized data | Information in which an individual identifier is substituted with an attribute which hides the inputs that could lead to a deliberate or an inadvertent identification of the personal details being processed
Lower | Aggregated data | Information based on various sources, collated from data inputs provided by a good number of individuals
Lowest | Anonymized data | Information that does not have any personal identifier whatsoever, with the least chance of identification of any personal information vis-a-vis an individual employee or potential employee

Source: Adapted from Microsoft [5]

3 Findings and Discussion

Ever since the practice of HR analytics gained traction in the corporate world during the last two decades, there have been occasional incidents of data breach that augment the vulnerabilities of employees. Although the number of data breach cases is low, the magnitude of the implications of such incidents is tremendous. Hence, organizations need to set up appropriate mechanisms to manage data meant for HR analytics so that personal details remain absolutely safe. In order to protect the critical personal details of employees as well as potential employees, organizations need to set up data governance processes revolving around employee consent, control and monitoring of data usage, anonymization of data, data encryption, and training and sensitization of data handlers both within and outside the organization. Adequate safeguards for the prevention of any kind of pilferage of data being used for HR analytics are essential in the best interest of both the organization and the existing or potential employees. Hence, HR teams need to be trained in effective and secure data management and must be sensitized about the criticality of personal information before gaining access to HR data [12]. Training and sensitization of HR teams must revolve around identification of critical data so that more risk-prone information can be segregated from routine details. Moreover, only role-based access to critical HR data should be given to members of the HR teams, with sensible control and effective monitoring processes [12]. It has also been observed that information security within an organization can be augmented by way of employee awareness, smarter policies and position descriptions revolving around acceptable technology usage, password format and changes, employee ethics, and data protection responsibility and practices [13]. Further, privacy can be reinforced by way of monitoring access, authorizations, data anonymization, data fragmentation and data reconfiguration [14, 15].

GDPR has put the onus of data security on the organizations (Fig. 1). Hence, it is all the more imperative for organizations to ensure privacy and data protection through appropriate mechanisms involving restrictions on data collection and access, and strict compliance with regard to continuous impact assessments [16]. Accountability of organizations can be ensured by having full-time personnel dedicated to privacy and data security, a robust audit system and due diligence in all regulatory compliances [16]. Moreover, appropriate techniques for safeguarding the privacy of employee data can be applied, especially in the context of HR analytics, to cover not only existing employees but potential employees as well. It has been observed that applicant tracking systems are used by almost all Fortune 500 companies, which increases the vulnerability of the many potential job applicants whose critical information is handled by machines owned by external parties at the request of specific organizations [17]. Thus, it is also important that data misuse, discrimination and privacy concerns in recruitment and selection powered by artificial intelligence are addressed by organizations across the board [18]. A data-centric security architecture can certainly be quite handy in protecting the interests of all internal as well as external stakeholders [19].

Fig. 1 Data obligation for companies. Source People Conscience [16]

In the Indian context, there is a robust law for the prevention of data breaches, i.e. the Information Technology Act, 2000. The Government of India has come up with the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 to further streamline the implementation of this piece of legislation for the benefit of all stakeholders [20]. India has one of the most comprehensive laws for the prevention of any sort of data breach. However, much is left to be desired when it comes to implementation of the law in true spirit. Besides, in the emerging scenario, the scope of the Indian law needs to be expanded to cover the containment of data breaches caused by unscrupulous external agencies or hackers, as the existing provisions focus primarily on the prevention of data breaches through internal agencies of the organizations.
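To make the pseudonymization option from Table 1 concrete, the minimal Python sketch below replaces a direct identifier with a keyed pseudonym before a record enters an analytics pipeline; the secret key, field names and record values are illustrative assumptions, not taken from this chapter.

```python
# Sketch only: keyed pseudonymization of an HR record (illustrative values).
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumed held by the data custodian only

def pseudonymize(employee_id: str) -> str:
    # HMAC keeps the mapping reproducible for joins across datasets, but
    # unrecoverable without the key, unlike a plain hash of a small ID space.
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"employee_id": "E-10452", "dob": "1990-04-02", "salary_band": "B3"}
safe_record = {**record, "employee_id": pseudonymize(record["employee_id"])}
print(safe_record)  # identifier replaced; dob remains and may still need masking
```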

4 Conclusion

Existing or potential employees tend to share their personal details with their current or future employers in good faith. They generally have no idea how their personal information may be used or misused by the HR analytics team within or outside the organization. Hence, their privacy is at a higher risk even if the employers have no ill intention of putting them in any trouble. The onus ultimately lies on the employers to protect the privacy of their employees and to create an enduring system to prevent any misuse or theft of the personal information of their employees or potential employees, even if there is no specific law for the purpose. Where a privacy law for the protection of personal data exists in the jurisdiction where the organization operates, strict compliance is certainly imperative. In the absence of data security and privacy laws for the protection of vulnerable employees or potential employees, organizations need to come forward with a comprehensive policy and a robust system to ensure complete protection of the personal details of employees or potential employees from any sort of deliberate or unintentional pilferage. As best practice, organizations must provide appropriate privacy notices and mandatory disclosures about how personal details may be collected, retrieved or deleted, processed and interpreted, along with measures to ensure protection and fair use of the personal information furnished by employees or potential employees. Also, in order to boost the morale and confidence of employees as well as potential employees, organizations should conduct an annual data protection impact assessment as part of a genuine commitment towards protecting the privacy of employees as well as potential employees.

References

1. Ravichandar, H.: When HR becomes a custodian of data. Mint. https://www.livemint.com/mint-lounge/business-of-life/when-hr-becomes-a-custodian-of-data-11581440842421.html (2020)
2. GDPR: Recital 75: Risks to the Rights and Freedoms of Natural Persons. https://gdpr-info.eu/recitals/no-75/
3. Lucas, S.: What's Next in Human Resources Data Privacy and Compliance. HR Acuity. https://www.hracuity.com/blog/hr-data-privacy (2020)
4. Marr, B.: What are the Pitfalls of People Analytics and Data-Driven HR? Forbes. https://www.forbes.com/sites/bernardmarr/2020/02/12/what-are-the-pitfalls-of-people-analytics-and-data-driven-hr/?sh=4b1941c63c38 (2020)
5. Microsoft: Data-protection considerations when using Workplace Analytics. https://docs.microsoft.com/en-us/workplace-analytics/privacy/data-protection-considerations (2020)
6. Zielinski, D.: 5 Top Cybersecurity Concerns for HR in 2019. https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/top-cybersecurity-concerns-hr-2019.aspx (2019)
7. Thibodeau, P.: Attackers seek gold in HR data security breaches. https://searchhrsoftware.techtarget.com/feature/Attackers-seek-gold-in-HR-data-security-breaches (2019)
8. Wright, A.D.: Boeing Insider Data Breach Serves as Reminder for HR. https://www.shrm.org/resourcesandtools/hr-topics/technology/pages/boeing-insider-data-breach-serves-as-reminder-for-hr.aspx (2017)
9. Butler, S.: Morrisons found liable for staff data leak in landmark ruling. The Guardian. https://www.theguardian.com/business/2017/dec/01/morrisons-liable-staff-data-leak-landmark-decision (2017)
10. Simberkoff, D.: Don't be the next Equifax: tips to avoid a security breach. CMS Wire. https://www.cmswire.com/information-management/dont-be-the-next-equifax-tips-to-avoid-a-security-breach/ (2017)
11. Davidson, J.: 'Inadvertent' cyber breach hits 44,000 FDIC customers. The Washington Post. https://www.washingtonpost.com/news/powerpost/wp/2016/04/11/inadvertent-cyber-breach-hits-44000-fdic-customers/ (2016)
12. Simberkoff, D.: Why HR and IT are teaming up to prevent data breaches. CMS Wire. https://www.cmswire.com/information-management/why-hr-and-it-are-teaming-up-to-prevent-data-breaches/ (2019)
13. Randstad: 3 Ways HR Leaders can Help Reduce Data Breach Risks. https://www.randstad.com/workforce-insights/hr-tech/3-ways-hr-leaders-can-help-reduce-data-breach-risks/ (2020)
14. Karthiban, K., Smys, S.: Privacy preserving approaches in cloud computing. In: 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, pp. 462–467 (2018)
15. Praveena, A., Smys, S.: Anonymization in social networks: a survey on the issues of data privacy in social network sites. Int. J. Eng. Comput. Sci. 5(3), 15912–15918 (2016)
16. People Conscience: The World of Data Protection and Security for Human Resources. https://www.peopleconscience.com/2016/10/18/the-world-of-data-protection-privacy-and-security-for-human-resource/
17. Shields, J.: Over 98% of Fortune 500 Companies Use Applicant Tracking Systems (ATS). https://www.jobscan.co/blog/fortune500-use-applicant-tracking-systems/ (2018)
18. Jha, S.K., Jha, S., Gupta, M.K.: Leveraging artificial intelligence for effective recruitment and selection processes. In: Lecture Notes in Electrical Engineering, vol. 637, pp. 287–293. Springer, Singapore (2020)
19. Hennessy, S.D., Lauer, G.D., Zunic, N., Gerber, G., Nelson, A.C.: Data-centric security: integrating data privacy and data security. IBM J. Res. Develop. 53(2) (2009)
20. Joseph, B., Basu, P.: India: malicious personal data breach by an employee—consequences. Mondaq. https://www.mondaq.com/india/data-protection/929512/malicious-personal-data-breach-by-an-employee--consequences (2020)

Narrow Band Internet of Things as Future Short Range Communication Tool

T. Senthil and P. C. Vijay Ganesh

Abstract The increase in the need for connecting devices gave a path to the Internet of things (IoT). The heterogeneous nature of connecting different devices through seamless connectivity aids IoT in connecting and acquiring needed information. The demand for machine-type communications (MTC) has paved the way for modern algorithms, diverse services and technology to meet modern IoT needs. Cellular standards are also creating space to incorporate IoT alongside the regular network with low-power and long-range specifications. 3GPP Release 13 introduced narrowband IoT (NB-IoT) along with the LTE-Advanced standard. In this paper, the state-of-the-art of the IoT application requirements in the 3GPP cellular-based low-power wide area solutions for massive to critical IoT is discussed. The need for changes in technology concerning the sensor nodes and the transmission of information is also addressed in this paper, together with SoC for NB-IoT.

Keywords Internet of things · LTE · NB-IoT · WSN · Low-power WAN · Lightweight protocol · 3GPP · Simulators IoT

1 Introduction

The Internet of things (IoT) is an emerging and anticipated network with promising technology that can revolutionize the world through connected physical objects. It is projected that every person will have more than five connected devices by 2020 [1]. IoT provides a path to communicate with different devices through the Internet. The next generation industry needs more devices to communicate, which draws the research community's attention toward wearable sensors, smart appliances, washing machines, tablets, smartphones, smart transportation systems,

T. Senthil (B) Kalasalingam Academy of Research and Higher Education, Krishnankoil, Tamil Nadu, India
e-mail: [email protected]
P. C. Vijay Ganesh St. Joseph Engineering College, Mangalore, Karnataka, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_18


Fig. 1 Overview of a smart hospital model [3]

etc. Machine-to-machine (M2M) communication is a process in which devices communicate without human interference. The devices need not be homogeneous. The IoT is mostly deployed where little human interference is needed. This pushes manufacturers to produce low-cost, low-power devices with longer battery life. Some fields, like clinical, traffic and industrial control, require devices with higher reliability, safety and low latency [1]. The wireless sensor network (WSN), having devices with sensors, smart meters, actuators, etc., takes advantage of IoT and moves the market with a massive number of low-cost and low-power devices. The infrastructure for a massive number of devices pushes the IoT to overcome communication challenges [2]. An overview of the medical sector is shown in Fig. 1 [3]. The information required for hospital management is collected with the help of different sensors. These sensor values are communicated to a main server and served when required. The 3GPP group has introduced standards to support next generation device communications. Release 13 of 3GPP adds LTE-M for machine communication and introduces narrowband IoT, which was further enhanced in Release 14. The group concentrated on designing low-power modules to support 4G and 5G cellular networks [4]. NB-IoT competes with existing low-power wireless technologies. It helps to deploy LPWAN to effectively overcome shortcomings in security, reliability, and operation and maintenance costs.

2 NB-IoT

Low-power wide area (LPWA) is a category of wireless communication whose sole aim is to improve and support the implementation of IoT in the real world [5]. Many technologies have been proposed for LPWA in recent times, holding different functionality;


among the most popular ones, long range (LoRa) and NB-IoT are the two most emerging technologies under the unlicensed and the licensed category, respectively, as shown in Table 1. NB-IoT adds on features from 3GPP Release 13. The resource allocated in LTE for NB-IoT is one narrow resource block of 180 kHz with 12 subcarriers. The green bar in Fig. 2 shows the resource allocation options deployed directly along the resource elements in global system for mobile communications (GSM) or long-term evolution (LTE) networks. This reduces deployment costs. Industry has shown interest in NB-IoT for next generation low throughput, low delay sensitivity and ultra-low device cost. NB-IoT has three different modes of operation, in-band, guard band and GSM carrier, as shown in Fig. 2. In-band operation comes as part of the normal LTE resource blocks used in the regular communication channel. Guard band operation utilizes the unused band in the 180 kHz frequency band. The standalone mode is based on the GSM/GPRS system operated by the service operator [6].

Table 1 Different technologies for device communication

Unlicensed | Licensed
LoRa | NB-IoT
SigFox | LTE-M
iFrogLab | LTE-MTC
ThingPart wireless | Ultra narrow band (UNB)
Ingenu | Weightless-p

NB-IoT targets a coverage improvement of 20 dB compared with GSM/GPRS. The target is achieved via the utilization of narrowband signals and time diversity. A narrowband signal allows the receiver end to filter more noise, thereby improving the signal-to-interference-plus-noise ratio (SINR). There are two subcarrier spacings allocated in Release 14: 15 and 3.75 kHz. The number of repetitions for the downlink is fixed to 2048, while it is 128 for the uplink. This helps to improve the probability of signal reception. NB-IoT is inherited from LTE. There are 12 subcarriers of 15 kHz each for both the downlink and the uplink channel. One frame has a total duration of 10 ms with 20 slots of 0.5 ms each. The resource element is the smallest time-frequency resource, having one subcarrier and one symbol. There is a reset every 1024 frames. The same structure is repeated for the next 1024 times, forming a hyperframe that comes to approximately 3 h.

Fig. 2 Narrowband IoT deployment options
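As a quick check on these timing figures, the following Python sketch reproduces the frame arithmetic; the constant names are ours, not from the 3GPP specification.

```python
# Sketch: NB-IoT frame-timing arithmetic from the figures quoted above.
SLOT_MS = 0.5                 # one slot
SLOTS_PER_FRAME = 20          # so one frame lasts 10 ms
FRAME_MS = SLOT_MS * SLOTS_PER_FRAME
FRAMES_PER_CYCLE = 1024       # the frame counter resets every 1024 frames
CYCLES_PER_HYPERFRAME = 1024  # the structure repeats 1024 times

cycle_s = FRAMES_PER_CYCLE * FRAME_MS / 1000            # 10.24 s
hyperframe_h = cycle_s * CYCLES_PER_HYPERFRAME / 3600   # ~2.91 h
print(f"{cycle_s} s per 1024-frame cycle, {hyperframe_h:.2f} h per hyperframe")
```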

3 Applications

The quality of life of the end user can be improved with the use of IoT applications. End user requirements change based on character, geographic location, climate conditions, industry, etc. This changing nature poses a great challenge to having one structure or a general solution. The common applications that mostly use IoT are smart home, intelligent transportation systems, smart city, smart health care and smart grid [7]. NB-IoT extends IoT devices to broader applications such as logistics, utility metering, industry, smart cities, pet tracking and waste management [8, 9].

4 Existing Technology

Device-to-device or machine-to-machine communication takes place with little spatial separation, so long-range communication is utilized less effectively for it. IoT applications abide by short range communication standards, thereby increasing resource utilization and connecting massive numbers of critical IoT devices. Short range connectivity merges with either traditional cellular IoT or low-power wide area networks (LPWAN). LPWA gives wide area coverage, less power consumption, high energy efficiency, low bandwidth channels, etc. Table 2 shows the range and data rate for different short range communication technologies.

Table 2 Range and data rate for different technologies

Technology | Range | Maximum data rate
LoRa | 15 km | 50 kbps
SigFox | 20 km | 100 bps
Zigbee | |

Table 1 Fault impact values for the nodes of c17

Node | NoO0 | NoP0 | NoO1 | NoP1 | FI | Criticality
11–>16 | 0 | 0 | 36 | 55 | 1980 | LOW
11–>19 | 0 | 0 | 31 | 31 | 961 | LOW
16 | 140 | 208 | 72 | 100 | 36,320 | HIGHEST
16–>22 | 0 | 0 | 64 | 64 | 4096 | LOW
16–>23 | 0 | 0 | 36 | 36 | 1296 | LOW
19 | 0 | 0 | 36 | 36 | 1296 | LOW
22 | 0 | 127 | 97 | 97 | 9409 | MEDIUM
23 | 113 | 113 | 111 | 111 | 25,090 | HIGH
3–>10 | 0 | 0 | 22 | 22 | 484 | LOW
3–>11 | 0 | 0 | 37 | 65 | 2405 | LOW
6 | 0 | 0 | 39 | 48 | 1872 | LOW
7 | 0 | 0 | 44 | 44 | 1936 | LOW
3 | 0 | 103 | 55 | 72 | 3960 | LOW

The critical node at which to insert the key gate is decided, and the key gate is inserted [8]. The node which has the highest FI value is considered the critical node for the insertion of a key gate. In the example shown for c17, node 16, which has the maximum fault impact value of 36,320, is considered the critical node, as shown in Table 1.
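A minimal Python sketch of this selection step is shown below; it assumes the per-node counts of Table 1 (abridged to a few nodes) and the pairing FI = NoO0 × NoP0 + NoO1 × NoP1, which reproduces every FI value in the table and follows the fault-analysis formulation of [5]. The final function illustrates how an XOR key gate behaves at the chosen node.

```python
# Sketch: choosing the critical node for key-gate insertion by fault impact (FI).
# Per-node counts (NoO0, NoP0, NoO1, NoP1) taken from Table 1 (abridged).
c17_counts = {
    "16":     (140, 208, 72, 100),
    "23":     (113, 113, 111, 111),
    "22":     (0, 127, 97, 97),
    "3":      (0, 103, 55, 72),
    "16->22": (0, 0, 64, 64),
}

def fault_impact(counts):
    no_o0, no_p0, no_o1, no_p1 = counts
    return no_o0 * no_p0 + no_o1 * no_p1

critical = max(c17_counts, key=lambda n: fault_impact(c17_counts[n]))
print(critical, fault_impact(c17_counts[critical]))  # -> 16 36320, the HIGHEST row

def xor_key_gate(net_bit: int, key_bit: int) -> int:
    # With the correct key bit (0) the net passes unchanged; a wrong key
    # bit (1) inverts it, corrupting the downstream outputs.
    return net_bit ^ key_bit
```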


4 Results and Analysis

The number of primary inputs, the number of primary outputs and the level of the circuit give the information about each CUT. The rationale behind choosing the node with high fault impact is to increase the output corruption [5, 14, 15]. The fault coverage of each circuit when stuck-at faults are injected is given in Table 2. For the signature generation suggested by Manoj Reddy et al., circuit-specific parameters such as the number of inputs, the number of outputs and the number of gates were used [16]. These are then incorporated for signature generation [17] in any selected combination/formula. This is similar to the key inputs to the key gates. From Table 3, it is observed that the percentage increase in area is smaller for larger circuits. Fault coverage is maximum for smaller circuits, as the area of coverage and features such as gate length are minimal. The percentage of fault coverage is based on the number of collapsed faults and the number of detected faults. The system specification used for carrying out this simulation is an Intel Core i3 at 2.4 GHz with 8 GB RAM.

Table 2 Fault coverage of ISCAS’85 circuits

CUT | Fault coverage (%) | Number of collapsed faults | Number of detected faults | CPU memory used (KB)
C17 | 100.00 | 22 | 22 | 17,588
C432 | 92.18 | 524 | 483 | 17,448
C499 | 96.44 | 758 | 731 | 18,391
C880 | 94.27 | 942 | 888 | 18,300
C1355 | 90.79 | 1574 | 1429 | 18,886
C1908 | 81.16 | 1879 | 1525 | 17,321
C2670 | 81.33 | 2747 | 2234 | 17,797
C3540 | 86.96 | 3248 | 2981 | 17,596

Table 3 Area overhead analysis of obfuscated ISCAS’85 CUA

CUA | Area of CUA (µm) | XOR obfuscated CUA area (µm) | % Increase | AND obfuscated CUA area (µm) | % Increase
C17 | 27.69 | 46.375 | 67.47 | 39.375 | 73.54
C432 | 676.844 | 1163.516 | 71.90 | 1236.516 | 82.15
C499 | 1887.9655 | 1894.411 | 0.34 | 1889.411 | 0.07
C880 | 1540.210 | 1548.501 | 0.53 | 1556.501 | 1.05
C1355 | 1961.7991 | 1976.181 | 0.73 | 1969.181 | 0.37
C1908 | 2205.880 | 2210.121 | 0.21 | 2215.121 | 0.41
C2670 | 2522.612 | 2531.402 | 0.35 | 2557.402 | 1.37
C3540 | 2965.98 | 2978.901 | 0.44 | 2970.901 | 0.16


Table 4 Power overhead analysis of obfuscated ISCAS’85 CUA

CUA | Total power of actual CUA (µW) | XOR obfuscated CUA power (µW) | % Increase | AND obfuscated CUA power (µW) | % Increase
C17 | 1.1165 | 2.7193 | 133.41 | 2.563 | 120.00
C432 | 38.8722 | 58.6917 | 50.98 | 53.6917 | 38.13
C499 | 227.7763 | 260.65 | 14.43 | 267.65 | 17.51
C880 | 92.5937 | 116.3741 | 25.68 | 120.3741 | 30.01
C1355 | 240.3135 | 380.3358 | 58.26 | 342.3358 | 42.45
C1908 | 325.4568 | 372.9852 | 14.06 | 356.9852 | 09.68
C2670 | 390.5468 | 425.598 | 08.97 | 430.598 | 10.25
C3540 | 374.2125 | 450.654 | 20.43 | 448.563 | 19.89

The CPU memory used during fault simulation is also shown in Table 2. The system can be operated only by providing an authenticated key; hence, the CUT is referred to as the CUA. The area and power profiles are obtained by adopting the 90 nm technology node using the Synopsys EDA tool.

4.1 Area and Power Analysis

Area analysis for the ISCAS’85 circuits has been carried out using an XOR logic gate and an AND logic gate, and the percentage increase has been calculated between the actual circuits and the encrypted circuits. Based on the area analysis done for various ISCAS’85 circuits, it is observed that the percentage increase in area is acceptable. For larger circuits, the increase in area is much less than 1% for both XOR and AND CUAs. Power analysis for the ISCAS’85 circuits has likewise been carried out using XOR and AND logic gates, and the percentage increase has been calculated between the actual circuits and the obfuscated circuits, as shown in Table 4. For larger circuits, the percentage increase in power is relatively small and rational, showing lightweight logic obfuscation to be one of the finest encryption techniques. It is understood from Tables 3 and 4 that the proposed technique is scalable to larger circuits, as the number of gates used in the analysis ranges from fewer than 10 to well over 1000.

4.2 Output Corruption

Two distance metrics are evaluated to justify the claim of functional corruptibility when correct and wrong keys are given. Output corruption is the difference between the outputs of the actual circuits and the outputs of the fault-injected circuits. As a measure


of output corruption, an analysis is carried out over the actual circuit and the obfuscated circuit. The Hamming distance is the difference in bits between the output of the actual circuit and the output of the obfuscated circuit. When the corrupted output variation is around 50%, the difficulty of predicting the correct keys increases. The Jaro distance metric is a popular metric used to compare strings of characters and numbers. The higher the Jaro score between two strings, the more similar the strings are. The score is normalized in such a way that 0 corresponds to no resemblance and 1 to an exact match. Analysis has been carried out by inserting the key gate at a high-FI node and at a low-FI node; the Hamming distance and the Jaro distance are then computed for various circuits, as shown in Figs. 3 and 4, respectively. XOR has a comparatively greater Hamming distance when a wrong key is applied. Fault impact is used for the identification of a critical node at which to insert a key gate. This method can be used for any circuit to identify critical key nodes, and any logic can be used to lock the circuit design. It is observed that the Hamming distance is very much smaller when an AND logic gate is inserted as the key gate compared to an XOR logic gate. So, XOR logic gives better results when used for logic obfuscation.

Fig. 3 Hamming distance comparison

Fig. 4 Jaro distance comparison
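For reference, the two metrics can be computed over equal-length output bit-strings as in the Python sketch below; the example strings are illustrative, not values from the benchmark runs.

```python
# Sketch: output-corruption metrics over actual vs. obfuscated output strings.
def hamming_distance(a: str, b: str) -> int:
    # Number of differing bit positions between two equal-length outputs.
    return sum(x != y for x, y in zip(a, b))

def jaro_similarity(a: str, b: str) -> float:
    # Jaro score in [0, 1]: 0 means no resemblance, 1 an exact match.
    if a == b:
        return 1.0
    window = max(len(a), len(b)) // 2 - 1
    match_a, match_b = [False] * len(a), [False] * len(b)
    for i, ch in enumerate(a):
        for j in range(max(0, i - window), min(len(b), i + window + 1)):
            if not match_b[j] and b[j] == ch:
                match_a[i] = match_b[j] = True
                break
    m = sum(match_a)
    if m == 0:
        return 0.0
    a_m = [ch for ch, f in zip(a, match_a) if f]
    b_m = [ch for ch, f in zip(b, match_b) if f]
    transpositions = sum(x != y for x, y in zip(a_m, b_m)) / 2
    return (m / len(a) + m / len(b) + (m - transpositions) / m) / 3

print(hamming_distance("10110", "10011"))          # 2 of 5 output bits corrupted
print(round(jaro_similarity("10110", "10011"), 3))  # 0.867
```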

5 Conclusion

Identification of critical nodes is highly essential for inserting a key gate. Once the node is determined, insertion of the key gate is carried out. This has also been analyzed by inserting an AND logic gate as the key gate. By providing wrong key inputs, the circuit produces wrong outputs and is thus protected from misuse of the functionality of the design. Analysis has been carried out by inserting an AND logic gate and an XOR logic gate, and a comparison is made between XOR and AND key gates using parameters such as Hamming distance, the Jaro distance measure, area and power. From the analysis, it can be observed that XOR has a comparatively greater Hamming distance and a smaller Jaro distance when the wrong key is applied; hence, the number of cycles needed to identify the correct key by a brute force attack would also be high, thereby enhancing the security of the CUA. It is inferred that high output corruption is possible with this method. The Hamming distance is higher for an obfuscated circuit than for the original circuit when a fault is encountered while using a wrong key. The Jaro distance likewise shows that the similarity between the output strings is low when correct and wrong keys are provided. Power and area analyses have been carried out for standard benchmark circuits that are logically obfuscated using lightweight obfuscation, proving that the obfuscated circuit consumes low power. Though this scheme offers security, the chance of predicting the key always remains. If techniques like dynamic obfuscation are employed, the obfuscated signals change over time and the key used in the design becomes unpredictable.

References

1. Majzoobi, M., Koushanfar, F., Potkonjak, M.: Testing techniques for hardware security. In: 2008 IEEE International Test Conference, pp. 1–10 (2008)
2. Khalegi, S., Da Zhao, K.: IC piracy prevention via design withholding and entanglement. In: The 20th Asia and South Pacific Design Automation Conference, pp. 821–826 (2015)
3. Amir, S., Shakya, B., Xu, X., et al.: Development and evaluation of hardware obfuscation benchmarks. J. Hardw. Syst. Secur. 2, 142–161 (2018)
4. Fyrbiak, M., et al.: On the difficulty of FSM-based hardware obfuscation. IACR Trans. Cryptogr. Hardw. Embed. Syst. 3, 293–330 (2018)
5. Rajendran, J., et al.: Fault analysis-based logic encryption. IEEE Trans. Comput. 64(2), 410–424 (2015)
6. Chakraborty, R.S., Bhunia, S.: HARPOON: an obfuscation-based SoC design methodology for hardware protection. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 28(10), 1493–1502 (2009)
7. Rajendran, J., Pino, Y., Sinanoglu, O., Karri, R.: Security analysis of logic obfuscation. In: DAC Design Automation Conference 2012, pp. 83–89 (2012)
8. Chandini, B., Nirmala Devi, M.: Analysis of circuits for security using logic encryption. In: Security in Computing and Communications. SSCC 2018. Communications in Computer and Information Science, vol. 969 (2019)
9. Saravanan, K., Mohankumar, N.: Design of logically obfuscated n-bit ALU for enhanced security. In: 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, pp. 301–305 (2019)
10. Ranjani, R.S., Nirmala Devi, M.: A novel logical locking technique against key-guessing attacks. In: 8th IEEE International Symposium on Embedded Computing and System Design (ISED) (2018)
11. Baumgarten, A., Tyagi, A., Zambreno, J.: Preventing IC piracy using reconfigurable logic barriers. IEEE Des. Test Comput. 27(1), 66–75 (2010)
12. Baby, J., Mohankumar, N., Nirmala Devi, M.: Reconfigurable LUT-based dynamic obfuscation for hardware security. In: Advances in Electrical and Computer Technologies. Lecture Notes in Electrical Engineering, vol. 672. Springer (2020)
13. Yu, Q., Dofe, J., Zhang, Z.: Exploiting hardware obfuscation methods to prevent and detect hardware trojans. In: 60th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS) (2017)
14. Yasin, M., Rajendran, J.J., Sinanoglu, O., Karri, R.: On improving the security of logic locking. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 35(9), 1411–1424 (2016)
15. Karmakar, R., Kumar, H., Chattopadhyay, S.: On finding suitable key-gate locations in logic encryption. In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, pp. 1–5 (2018)
16. Reddy, D.M., Akshay, K.P., Giridhar, R., Karan, S.D., Mohankumar, N.: BHARKS: built-in hardware authentication using random key sequence. In: 4th International Conference on Signal Processing, Computing and Control (ISPCC), pp. 200–204 (2017)
17. Koteshwara, S., Kim, C.H., Parhi, K.K.: Functional encryption of integrated circuits by key-based hybrid obfuscation. In: 51st IEEE Asilomar Conference on Signals, Systems, and Computers (2017)

Analysis of Machine Learning Data Security in the Internet of Things (IoT) Circumstance

B. Barani Sundaram, Amit Pandey, Aschalew Tirulo Abiko, Janga Vijaykumar, Umang Rastogi, Adola Haile Genale, and P. Karthika

Abstract The current Internet of things (IoT) technologies will have a profound economic, industrial and social influence on society. Nodes participating in IoT networks are generally resource-constrained, which makes them attractive targets for cyber threats. Extensive research has been conducted to address the issues of security and privacy in IoT networks, primarily through conventional authentication approaches. The main goal is to review the current research related to the security issues of the IoT and to provide an informed understanding of the topic. A manual scientific visualization study was carried out over a number of articles, out of which 58 research articles were selected for careful observation. From these articles, impacts of and solutions to security issues within the IoT gateway, along with future research, have been identified. These visualization studies have indicated key concerns as well as a number of solutions. The findings also indicated challenges in achieving secure data protection management and data center integration, which still require efficient solutions. The outcome of the physical institutional mapping study is complemented with the use of automatic image analysis software on two datasets (up to 2016 and up to 2020) derived from the ML and IoT literature. This qualitative approach generates trends over the decades, in particular for IoT security. We also discuss a number of future research directions for ML- and DL-based IoT security research.

Keywords Internet of things (IoT) · IoT networks · Data security · Attacks · Machine learning · Image analysis software

B. Barani Sundaram (B) Bule Hora University, Bule Hora, Ethiopia
A. Pandey College of Informatics, CSRSCD Bule Hora University, Bule Hora, Ethiopia
e-mail: [email protected]
A. T. Abiko School of Computing and Informatics, Wachemo University, Hosana, Ethiopia
J. Vijaykumar Department of Information Technology, College of Informatics, Bule Hora University, Bule Hora, Ethiopia
U. Rastogi MIET, Meerut, Uttar Pradesh, India
e-mail: [email protected]
A. H. Genale Department of Information Science, Bule Hora University, Bule Hora, Ethiopia
P. Karthika Kalasalingam Academy of Research and Education, Krishnankoil, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_20

1 Introduction

Since the start of the twenty-first century, the IoT has been consistently accessible, though its definitions vary. For instance, IEEE characterizes IoT in terms of small and complex systems [1], while the ITU defines IoT as "a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies." Regardless of the varying definitions of IoT, these devices have already found their way into our everyday lives with the rapid growth in the number of devices connected to the Web [2]. More will likely follow in the forthcoming years with the introduction of fifth generation wireless systems (5G). As [3] points out, 5G wireless systems are on the horizon, and IoT is taking the center stage as devices are expected to form a major portion of this 5G network paradigm (p. 10,410). This suggests IoT is likely the biggest driver of the other major development trends, such as 5G. Likewise, the analyst firm Gartner estimates that by 2020 there will be 20 billion Web-connected things, including dedicated-function objects such as vending machines, jet engines and connected vehicles, resulting in new strategies and improving effectiveness [4].

2 Literature Review

IoT has a rich research literature, where numerous surveys have been published to cover different aspects of IoT security. In this section, the existing surveys are summarized and compared with the proposed work. To the best of our knowledge, the majority of the surveys in the literature do not focus on the ML techniques used in IoT [5]. Besides, the existing surveys are either application-specific or do not cover the full spectrum of security and privacy in IoT networks. Although there is a rich literature available on ML- and DL-based techniques in IoT networks, we center only on the security aspects of IoT networks and the role of ML in addressing security challenges in IoT networks [6]. Note that the reason for separating the existing surveys is to increase readability and highlight the comprehensiveness of the current studies on ML-driven IoT networks. The existing literature covers security in IoT by reviewing the current conventional solutions and the solutions provided through the newly emerging technologies [7]. Likewise, in addition to the extended literature study, a new automatic content analysis of another dataset from Web of Science was performed.

3 Implementation of Machine Learning IoT Security

This section discusses ML in IoT security in the presence of the existing security solutions currently used in IoT networks. We first put light on the unique attributes of IoT networks that are pertinent to security and then address the security challenges that confront IoT networks, as well as discuss the gaps in the current security solutions. After that, we establish the motivation for using ML to address security challenges in IoT networks.

3.1 A Challenge of Data Security

The exceptional expansion in data availability and sensitivity in recent years has become a challenge for data security and privacy in a connected world. The manifold technologies, services and standards embraced on the Internet and in the cloud space suggest security threats may amplify within a brief time. Figure 1 shows the steep growth in security threats observed throughout the years, following the availability of new tools to mount an attack on the secure Internet. However, with the sophistication of security attacks ever increasing, the information needed to launch an attack diminishes dramatically [8]. In this manner, security protection and safety measures should similarly be adaptive to offer end-to-end and powerful data security. Procedures like device programming, storage encryption and access control lists (ACL) are very static in nature [9]. Little consideration has been paid to implementing effective security and privacy measures for protecting data and access control in accordance with the dynamics of the resource, environment, users or the application. Furthermore, increased user intervention in securing sensitive information is another challenge [10]. Fine-grained access control and varied security options for a user are therefore essential. Hence, it is quite fundamental to understand that security and privacy strategies must be more dynamic and robust in nature to be able to address the steadily expanding demands of the digital world [11].

Fig. 1 Challenges of data security in ML IoT

3.2 Analysis of ML with Data Security

Varied sources of data and heterogeneous devices make up an IoT ecosystem [12]. The IoT can be expected to contain a colossal number of sensors gathering and passing data about environmental conditions, physiological measurements, machine operational data, and so on. In addition to consumer devices (cell phones, tablets and gaming consoles), smart home appliances and intelligent computing nodes with embedded processors are likely to join the IoT in the near future. Most communications happen automatically, without much manual intervention [13]. A smart vehicle control system can renew your vehicle insurance automatically by choosing the best available plan and using your credit card details. Directing access to highly sensitive information like credit cards is therefore a prime need in IoT data security.

The examination of the dataset provides an ML perspective on IoT security research. NAILS utilizes the latent Dirichlet allocation (LDA) topic modeling algorithm [14] for the categorization of article groups. LDA is used as a statistical text mining technique for assigning documents to topics, which are identified using word associations and distributions [15]. It is commonly used for text analysis, and equivalent techniques have been used to statistically investigate scientific texts in past studies [16]. Figures 2 and 3 represent the results of the topic analysis for 2016 and 2020, respectively. The 2016 dataset (3243 articles) is included in the 2020 dataset (6591 articles). In general, the terms used within these articles are roughly the same, as shown by Figs. 2 and 3. Tables 1 and 2 present the topics identified by the LDA modeling feature of NAILS. The topics have been named by the authors based on their content, and there seems to be a clear separation of the topics. Topic 2 (of the 2020 dataset) emphasizes the holistic system view (see Table 2), while Topics 1 and 5 sit at the network and protocol level. Topic 3 emphasizes data and service, and Topic 4 seems to be the one of our interest (albeit all the articles were gathered by the somewhat broad search query). Consequently, the automatic analysis will be focused on Topic 4.

Fig. 2 IoT security analysis for articles in 2016
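As an illustration of the LDA step in this workflow, the following minimal Python sketch assigns toy abstracts to topics with scikit-learn; the texts, topic count and parameters are hypothetical and not the NAILS configuration.

```python
# Sketch: LDA topic assignment over article abstracts (toy data, not the
# Web of Science dataset analyzed in this paper).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "secure wireless sensor node authentication scheme for attack detection",
    "smart home service development with intelligent monitoring systems",
    "cloud data privacy, user access control and mobile services",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)   # per-document topic distribution

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {top}")             # top terms per topic, as in Tables 1 and 2
```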

3.3 Analysis of ML with Data Security

The examination of the trends presented in Fig. 4 reveals a general evolution of research emphasis in the IoT field. In the early days, nodes (2009) and communication technologies such as RFID (2010) were highlighted, whereas the focal points of the most recent years were much more on data (2015, 2017), services (2015) and implementations (2015, 2016). Security and privacy issues were not really that prominent in the overall picture. If a look is taken at the trends in the Topic 4 co-occurrence graph, security has been a vital component of research since 2018 (Fig. 5).


Fig. 3 IoT security analysis for articles in 2020

Table 1 LDA-based ML with IoT topics and number of articles in each topic until 2016

Topic 1 networks | Topic 2 systems | Topic 3 IoT security | Topic 4 service
Network | Protocol | System | Secur
Data | Technolog | IoT | Servic
Propos | Smart | Internet | Comput
Sensor | Develop | Thing | Privaci
Attack | Inform | Devic | Model
Scheme | Home | Network | User
Authent | Manag | Applic | Cloud
Key | Monitor | Challeng | Access
Secur | Intellig | Architectur | Mobil
Wireless | Research | Communic | Provid
639 | 533 | 574 | 397


Table 2 LDA-based ML with IoT topics and number of articles in each topic until 2020

Topic 1 networks | Topic 2 systems | Topic 3 service | Topic 4 IoT security | Topic 5 protocols
Network | System | Data | Iot | Attack
Smart | Servic | Secur | Secur | Protocol
Sensor | Technolog | Comput | Internet | Scheme
Wireless | Develop | User | Thing | Authent
Node | Inform | Cloud | Devic | Propos
Result | Home | Privaci | Challeng | Key
Detect | Industri | Access | Applic | Devic
Propos | Monitor | Control | Connect | Implement
Method | Intellig | Mobil | Architectur | Communic
Energi | Manag | Provid | Paper | Base
904 | 1165 | 915 | 1129 | 1065

Fig. 4 Co-occurrence of all keywords of the 2020 ML with IoT dataset as classified by KHCoder


Fig. 5 Co-occurrence of Topic 4 keywords of the 2020 ML with IoT dataset as classified by KHCoder

4 Conclusion

This paper has examined the IoT from a security and privacy perspective, covering the security and privacy challenges in IoT, attack vectors and security requirements. We have portrayed diverse IoT and ML strategies and their applications to IoT security. We have additionally shed light on the limitations of conventional ML systems. We have then discussed the current security solutions and outlined the open challenges and future research directions. Nevertheless, existing solutions do not adjust to the heterogeneous and resource-constrained environment of the IoT.


Additionally, broad work has been done to adjust the current protocols for IoT purposes or develop entirely new ones for lightweight encryption and secure data transmission. Considering this study's outcomes, the most lacking part of IoT security is at present authentication and authorization. The growing number of IoT devices in consumers' day-to-day lives makes authentication and security essential. After authentication, the access control issue ought to be settled, as not everyone should get access to everything. Various experts present this as an essential issue to handle; however, these findings suggest a comprehensive, effective and versatile response to IoT authentication issues is missing. Finally, the various attack vectors of the IoT are alarming. Beyond the existing Internet threats, diverse new vectors are introduced. The open and public nature of various IoT systems makes them especially vulnerable to malicious attacks.

References

1. Ithnin, N., Abbasi, M.: Secure hierarchical routing protocols in wireless sensor network; security survey analysis. Int. J. Comput. Commun. Netw. 2, 6–16 (2020)
2. Karthika, P., Vidhya Saraswathi, P.: A survey of content based video copy detection using big data. Int. J. Sci. Res. Sci. Technol. (IJSRST) 3(5), 114–118, May–June (2017). Online ISSN: 2395-602X, Print ISSN: 2395-6011. https://ijsrst.com/ICASCT2519
3. Niu, W., Lei, J., Tong, E., et al.: Context-aware service ranking in wireless sensor networks. J. Netw. Syst. Manage. 22(1), 50–74 (2014)
4. Komar, C., Donmez, M.Y., Ersoy, C.: Detection quality of border surveillance wireless sensor networks in the existence of trespassers' favorite paths. Comput. Commun. 35(10), 1185–1199 (2012)
5. Karthika, P., Vidhya Saraswathi, P.: IoT using machine learning security enhancement in video steganography allocation for Raspberry Pi. J. Ambient Intell. Hum. Comput. (2020). https://doi.org/10.1007/s12652-020-02126-4
6. Baig, Z.A.: Pattern recognition for detecting distributed node exhaustion attacks in wireless sensor networks. Comput. Commun. 34(3), 468–484 (2011)
7. Abbas, S., Merabti, M., Llewellyn-Jones, D.: Signal strength based Sybil attack detection in wireless Ad Hoc networks. In: Proceedings of the 2nd International Conference on Developments in E-Systems Engineering (DESE'09), pp. 190–195, Abu Dhabi, UAE, December (2009)
8. Karthika, P., Vidhya Saraswathi, P.: Image security performance analysis for SVM and ANN classification techniques. Int. J. Recent Technol. Eng. (IJRTE) 8(4S2), 436–442. Blue Eyes Intelligence Engineering & Sciences Publication (2019)
9. Sharmila, S., Umamaheswari, G.: Detection of sybil attack in mobile wireless sensor networks. Int. J. Eng. Sci. Adv. Technol. 2, 256–262 (2012)
10. Anand, G., Chandrakanth, H.G., Giriprasad, M.N.: Security threats & issues in wireless sensor networks. Int. J. Eng. Res. Appl. 2, 911–916 (2020)
11. Karthika, P., Vidhya Saraswathi, P.: Digital video copy detection using steganography frame based fusion techniques. In: International Conference on ISMAC in Computational Vision and Bio-Engineering, pp. 61–68 (2019). https://doi.org/10.1007/978-3-030-00665-5_7
12. Ssu, K.-F., Wang, W.-T., Chang, W.-C.: Detecting sybil attacks in wireless sensor networks using neighboring information. Comput. Netw. 53(18), 3042–3056 (2009)
13. Karthika, P., Vidhya Saraswathi, P.: Content based video copy detection using frame based fusion technique. J. Adv. Res. Dyn. Control Syst. 9, 885–894 (2017)
14. Vasudeva, A., Sood, M.: Sybil attack on lowest id clustering algorithm in the mobile ad hoc network. Int. J. Netw. Secur. Appl. 4(5), 135–147 (2019)
15. Karthika, P., Vidhya Saraswathi, P.: Raspberry Pi—a tool for strategic machine learning security allocation in IoT. Apple Academic Press/CRC Press (A Taylor & Francis Group). Accepted (provisionally) for the book "Making Machine Intelligent by Artificial Learning", to be published by CRC Press
16. Balachandaran, N., Sanyal, S.: A review of techniques to mitigate sybil attacks. Int. J. Adv. Netw. Appl. 4, 1–6 (2019)

Convergence of Artificial Intelligence in IoT Network for the Smart City—Waste Management System

Mohamed Ishaque Nasreen Banu and Stanley Metilda Florence

Abstract Several insights are provided into smart city strategies and development around the world. Smart city developments are meant for better reliability, security and efficiency in urban areas. On integrating artificial intelligence (AI) and the Internet of Things (IoT) with Information Technology (IT), several urban cities have been able to maintain and manage their water supply systems, transport facilities, law enforcement, colleges, schools, universities, hospitals, power plants, etc., better and more efficiently than before. Governments have been able to communicate flexibly with the public in a proficient way with the help of ICTs. As the concept of smart cities is quite new, with R&D work ongoing, it is safe to say that not much work has been done in smart waste management that makes use of IoT and AI. There is a tremendous need for remediating chief environmental issues like waste collection and dumping, and this in particular is considered a significant question which necessitates academic research investigation. Put together with a combinative literature review of 34 papers, this paper provides understanding of the possibility of smart cities and linked publics in simplifying efforts taken for waste management. Thus, this paper aims to review the most appropriate and associated works done on smart waste management systems and their drawbacks and inefficiencies, thereby arriving at the problem statement and multiple objectives as the solution. The chief drive of this review is to discover numerous ideas and notions in smart city development with regard to waste management systems. These data aid in predicting the upcoming trends that support making a waste management system model for the proliferating population using artificial intelligence (AI) and the Internet of Things (IoT).

Keywords Smart cities · Internet of Things (IoT) · Artificial Intelligence (AI) · Waste management · Sensors · Data science

M. I. Nasreen Banu (B) · S. Metilda Florence SRM Institute of Science and Technology, Kattankulathur, India
e-mail: [email protected]
S. Metilda Florence e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_21


1 Introduction

Solid waste management needs to be improved due to the proliferating urban population, and the requirement for a smart waste management system has become an inevitable part of the smart city project in India. Growth of population is related to the growth in migration toward urban areas with good industrial development. This has simultaneously led to increased consumption, which has resulted in an upsurge in issues with regard to the environmental, social and economic aspects of the community. Inadequate management and lack of control in reducing solid waste seem to be the greatest environmental concern today in urban/metro areas. Waste generation and accumulation have tremendously swelled throughout the world. The World Bank Forum stated that "city's solid waste was around 1.3 billion tons, with nearly 1.2 kg/person/day". Many studies estimate that the amount of solid waste might exceed 2.2 billion tons around 2025. The adoption of active and good waste management techniques and strategies is important for controlling the spread of waste so that a clean and safe environment is developed for healthy living [1]. It is perceived that there are detailed criteria for various nations to follow in order to have a better waste management system, and for other nations to follow them as well. Fundamentally, the waste management technique is determined by local circumstances, monetary options and other factors. As a result, rules and regulations set for waste management differ from one nation to another (Fig. 1).

Numerous schemes incorporate technologies like the Global Positioning System (GPS), Radio Frequency Identification (RFID) and Geographic Information System (GIS), for instance, to monitor solid waste collection containers and trucks. GIS associated with multi-criteria decision analysis (MCDA) has been used to help administrators and officers in taking decisions for determining a suitable location to set up a new landfill, or determining the finest locations and suitable capacity of containers for collecting waste; a minimal sketch of such MCDA-style site scoring appears below. Many genetic algorithms are used for optimizing the solid waste collection routes for the better well-being of the people and also to reduce pollution. This article reviews the most appropriate and related works done to understand AI for smart cities. The chief drive of this review is to discover the problems and drawbacks of the existing systems and put forth ideas and notions in smart city development with regard to waste management systems. These data aid in predicting the upcoming trends that support making an efficient waste management system model for the ever-increasing population using artificial intelligence (AI) and the Internet of Things (IoT) (Fig. 2).
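The sketch below shows the weighted-sum scoring that such GIS/MCDA studies combine with spatial layers; the criteria, weights and candidate values are illustrative assumptions only (each criterion normalized to [0, 1], higher being better).

```python
# Sketch: weighted-sum MCDA scoring of candidate landfill sites (toy values).
weights = {"distance_to_housing": 0.4, "groundwater_depth": 0.3,
           "road_access": 0.2, "land_cost": 0.1}

candidates = {
    "site_A": {"distance_to_housing": 0.9, "groundwater_depth": 0.6,
               "road_access": 0.7, "land_cost": 0.5},
    "site_B": {"distance_to_housing": 0.5, "groundwater_depth": 0.9,
               "road_access": 0.8, "land_cost": 0.9},
}

def mcda_score(scores):
    # Weighted sum of normalized criteria values.
    return sum(weights[c] * scores[c] for c in weights)

best = max(candidates, key=lambda s: mcda_score(candidates[s]))
print(best, round(mcda_score(candidates[best]), 2))  # site_A 0.73
```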

2 Solid Waste Management with IoT Technology Numerous available articles cover diverse features of IoT technology associated with providing solutions to better waste management. For instance, Catania and Ventura

Convergence of Artificial Intelligence in IoT Network for the Smart …

239

Fig. 1 Cloud-based smart waste management architecture. Source Aazam et al. [2]

[4] tried to offer a solution by means of intelligent monitoring for garbage collection. With the help of Smart-M3 platform-an addition of cross-domain quest for triple information, inter-operation of various ICT fields is made possible and this also helps in easy implementation. The solution is two phases: The first is known as the monitoring phase through which the waste levels within the compartments are continuously measured, conveyed and kept; the second phase takes place with information calculation that is used for optimizing the channels of waste collection. In [5], the writers provide a dynamic waste management model by means of infrastructure services provided for smart cities, and this is done using IoT. Both radio


Fig. 2 Mind map for solid waste management. Source de Souza Melaré et al. [3]

frequency identification (RFID) tags and actuators make use of sensors for the identification and monitoring process, which is separated into three major phases:

(i) Preparation and implementation of waste collection by means of transport trucks, with dynamic route adaptation to comply with the restriction rules provided;
(ii) Conveyance to a particular site based on the waste type; and
(iii) Recycling of reusable waste.

However, the model was mainly employed in the first phase, which deals with preparation and waste collection.

The term dynamic denotes the system's capacity to adapt easily to real-time parameters without affecting the remaining collection during the activity. This offers a better understanding of the infrastructures associated with waste management, with a stress on the processes applied to waste management excellence rather than on the sensors employed for monitoring and other purposes. In the literature, Chuah [6] proposed a solution termed intelligent cloud-based waste management (Cloud SWAM). It discusses a solution with particular containers for every waste type, fitted with sensors that monitor constantly; the information is updated to the cloud so that the connected stakeholders can obtain significant data. In addition, [7] described a novel waste management model that particularly emphasizes finding improved regions


for landfill construction. Since landfills are employed as the last destination for domestic, commercial and manufacturing waste, ascertaining an appropriate site in large cities necessitates distinctive consideration, as there must be attention to the economy, public health and the environment. The solution makes use of the information collected by the waste management system together with a genetic algorithm that helps in proper land selection for landfill construction. In [8], the authors describe various waste disposal methods that can be used for waste management. The solution integrates a volume-filling sensor with solar-powered waste compression, called the smart box, which improves waste collection. Information is transferred to the cloud server via wireless communication and can be applied to any waste type and container; stakeholders can connect to the server and access the data in real time via smart box monitoring. Among the solutions discussed, the authors present a more targeted approach to concessions, considering lower collection costs, providing information on waste of interest, and reducing transportation costs for land clearing. The real problem facing big cities is poor waste management; furthermore, the proposed architecture is not well specified. Waste management has proven to be a model for improving waste collection [9]. In countries like Australia, municipalities usually charge for city-generated waste by measuring the weight of waste per neighborhood or street and then averaging the household rate per user. This collection model is not very accurate, and as the cost of waste disposal increases each year, waste producers (the users) demand a solution that minimizes costs rather than charges applied at a fixed rate. Smart waste management can solve this problem by ensuring that users are taxed only on the basis of the waste they generate. In addition, the system can reduce the number of lost or stolen containers. Although a lack of waste collection only affects the citizens' budget, the works presented here respond with a good plan, but the architecture used is not yet specified.

3 Logistics and Collection of Waste

In the current era, waste management seems to have shifted from a logistics industry to a manufacturing industry. However, logistics remains an important way to integrate transport with all other industries. Innovative technological prospects for a digital station make it possible to run waste management logistics effectively. Many distinct techniques are explained in more detail by the following authors [10–13].

4 Introduction of Smart Bins

Various technologies and designs in the field of "smart bins" are available today. Numerous companies and makers have started making containers that come with sensors


that determine the conditions for the most efficient disposal [14–17]. In homes, smart speakers like "Google Home" or "Amazon Echo" are spreading rapidly and have proven convenient for gathering information and data. Sanitation workers in the customary waste management system do not look into the level of garbage in collection bins; if the bins are jam-packed with litter, they start to overspill due to excessive waste deposits, leading to unhygienic circumstances for individuals as well as animals present in the location. In [18], a litter box is offered that is closed and opened using hand gestures or voice commands. Furthermore, additional growth may be a future option based on advising customers on waste classification or proposing independent disposal solutions. The progress of self-directed robots for waste collection provides an added opportunity. Many chief researchers have focused on self-directed robotic solutions for waste collection since 2006: the DustBot project, which ran from 2006 to 2009, emphasized improving robotic systems and aimed to increase urban sanitation as well as waste management. During the research, road-cleaning robots (DustClean) were developed, along with portable robots with easy-to-use designs (DustCart). Through these, consumers can place an order to the desired address, where the robots pick up small household waste and transport it to the respective waste collection center [19]. In 2015, a research study was carried out in collaboration between the Swedish carmaker Volvo and Renova, a Sweden-based waste management firm. A self-driving, portable robot ("ROARy") was developed with the ability to locate trash cans and deliver them to collection vehicles. The first prototype dates from mid-2016. A drone searches the area for bins to empty and sends the coordinates to the robot. The robot integrates with the collection vehicle and uses coordinates, sensors, and cameras to detect bins, retrieve them, transport them to the collection vehicle and return the empty containers. The research is not considered complete by Volvo, since the robots are still slow [20, 21]. Many firms have initiated and established waste collection systems incorporating object recognition sensors that allow detection of diverse portions. The French company "Green Creative" introduced a bin called "R3D3" in 2016 that gathers beverage containers (coffee cups, metal cans and PET bottles) in distinct compartments [22]. In addition, the Polish firm "Bin-E" makes a machine with similar functionality. Furthermore, as per the maker, the smart bin has AI and in-depth knowledge-gathering capacity, besides sending specific information to external databanks which can be shared with other sites of the same maker, thereby refining the quality of the products. Networking of waste will allow disposal companies to report on the filling status of distinct bins [23]. Depending on the maker, target settings for a single product may include offices, large houses, airports, and business centers.


5 Vehicles

Automated trucks and lorries are frequently put to use in industry, particularly in excavation and quarrying [24–26], and autonomous passenger transport [27] is gaining importance; an autonomous vehicle collection scheme was introduced specifically for waste management. The collaboration between Volvo, the car maker, and Renova, the waste management company, built on the autonomous Volvo FMX truck, which has been employed for excavation procedures in northern Sweden mines since 2016. Because the route is planned in advance, the truck is provided with cameras and sensors, allowing the vehicle to avoid obstacles in its tracks. The driver is responsible for taking the entire bin up to the vehicle and emptying it. The driver is capable of releasing and interrupting the vehicle's route by making use of the controller provided behind the truck [26]. More focus is needed to reduce fuel cost, safeguard the environment from vehicle emissions and prevent traffic congestion due to garbage collection trucks.

6 Research Gap

To fill the gaps in the literature, this work evaluates how relevant the ideas and technologies of smart cities are in relation to cities. Increased waste production is reported by numerous nations, including the USA, China, Canada, Malaysia, the Philippines, and some European nations [28, 29]. This can be explained through the example of Canada, where total waste production in 2006 was nearly 5 million tons higher than in 2002. Similarly, in the USA, total waste production grew by around 31 million tons during 1990–2006. This escalation is an unceasing threat, and numerous nations have built models to evaluate waste production in order to comprehend the issue better as well as suggest suitable disposal methods [30, 31]. The literature review suggests that planning shortages, along with inadequate arrangements for solid waste disposal, result in waste disposal in public spaces such as large open areas and vacant land. For example, in Brazil, municipal solid waste production in 2012 grew by 1.3%, and it was also stated that nearly 6.2 million tons of municipal waste were not collected. Furthermore, in 2006, only 220 tons were collected out of the 300 tons of waste produced per day in the Philippines [32]. Users' failure to accept new technologies is unlikely, given rapid technological development. However, when autonomous vehicles and containers begin disposal, legal barriers remain unaddressed, for example, liability issues in the event of an accident. In addition, there are already preliminary reviews and ideas on how to use underground storage in smart cities; however, until now, this option has not been financially feasible. Along with AI, engineering and ICT aid in developing models offering solutions to real-life circumstances. This review reports on systems and schemes that


incorporate methods, streamlining techniques and computational techniques to help develop decision support simulations and prototypes for environmental issues. Given the many issues related to waste management and the huge potential for contribution in this area, a detailed map of what has been done and the results achieved is very important for further improvement in every phase of waste management. These research efforts cover nearly ten years of research in the field of ICT for waste management. This survey includes thirty-four papers, with only a few using IoT technology as a backend system to provide intelligent applications. The survey leans toward energy-efficient IoT as a catalyst for a variety of applications, including waste management. In particular, its goal is to introduce larger models that deal with effective waste management, with particular attention paid to waste collection. The idea is to predict the future bin requirement based on existing and past data using data science, so as to provide information such as the number, size and shape of the bins that will be required for efficient waste management, and to review the efforts toward intelligent transportation in the context of IoT and smart cities for waste collection. Attention was directed toward efforts to incorporate the ICT model for waste collection in smart cities, along with a distribution of the strengths and weaknesses of the surveyed models.
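The bin-requirement prediction mentioned above can be made concrete with a small sketch. The yearly tonnages, the bin-capacity figure, and the linear trend model below are all invented assumptions for illustration, not figures from the surveyed works.

import numpy as np

# Hypothetical yearly waste volumes (tons) for one city ward -- illustrative only.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020])
waste_tons = np.array([310.0, 322.5, 341.0, 355.2, 371.8, 390.4])

# Fit a linear trend with ordinary least squares and extrapolate one year ahead.
slope, intercept = np.polyfit(years, waste_tons, 1)
forecast_2021 = slope * 2021 + intercept

# Translate the forecast into a bin requirement, assuming ~0.2 tons of
# compacted waste per bin per week and weekly collection (assumed figures).
TONS_PER_BIN_PER_WEEK = 0.2
weekly_tons = forecast_2021 / 52
bins_needed = int(np.ceil(weekly_tons / TONS_PER_BIN_PER_WEEK))

print(f"Forecast for 2021: {forecast_2021:.1f} tons -> about {bins_needed} bins")

A real deployment would replace the trend fit with the historical fill-level data streamed from the smart bins themselves.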

7 Conclusion and Future Scope

Smart cities have become a promising strategy to create a sustainable and enjoyable urban future [33]. Discussion of and awareness about smart cities are particularly high in global business communities, and smart cities are very popular in urban policy circles around the world. Local, regional and national governments are trying to transform their cities into smart cities through strategies, plans and programs committed mainly to technical waste management solutions. However, the expectations placed on smart cities are often unrealistic, as they are full of speculation [34]. Knowledge and understanding of trends in these ideas and technologies remain limited, as does understanding of the relationship between popular ideas and technologies and of the waste management concepts and policies that affect the adoption and application of technologies. The scope lies in fine-tuning the existing system to make it suitable for implementation by considering the factors (rules and policies, government and public interaction) that influence the system. Instead of privatizing the system, it can be enhanced with technology such that it benefits both the government and the citizens. Future work will focus on the definition of an efficient IoT-enabled model for waste collection, including high-capacity waste trucks as mobile depots. In addition, waste bins should be placed to optimize the comfort of residents. However, as part of future work, the focus will be on bin connectivity barriers that affect placement; for example, if the output power of the communication sensor has to be set too high, the battery drains faster. In this case, the bin can be placed


where the power consumption is more efficient. In the end, what matters is the wellness of society and a better nation built with a sustainable approach.

References

1. Wilson, D.C., et al.: 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities. Waste Manage. 35, 329–342 (2015)
2. Aazam, M., et al.: Cloud-based smart waste management for smart cities. In: 2016 IEEE 21st International Workshop on Computer Aided Modelling and Design of Communication Links and Networks (CAMAD). IEEE (2016)
3. de Souza Melaré, A.V., et al.: Technologies and decision support systems to aid solid-waste management: a systematic review. Waste Manage. 59, 567–584 (2017)
4. Catania, V., Ventura, D.: An approach for monitoring and smart planning of urban solid waste management using Smart-M3 platform. In: Proceedings of the 15th Conference of Open Innovations Association FRUCT, St. Petersburg, Russia, 21–25 Apr 2014, pp. 24–31 (2014)
5. Campos, L.B., Cugnasca, C.E., Hirakawa, A.R., Martini, J.S.C.: Towards an IoT-based system for Smart City. In: IEEE International Symposium on Consumer Electronics (ISCE), pp. 129–130 (2016)
6. Chuah, J.W.: The Internet of Things: an overview and new perspectives in systems design. In: Proceedings of the IEEE Conferences, International Symposium on Integrated Circuits (ISIC), Singapore, 10–12 Dec 2014, pp. 216–219
7. Atzori, L., Iera, A., Morabito, G.: The Internet of Things: a survey. Comput. Netw. 54, 2787–2805 (2010)
8. Pellicer, S., Santa, G., Bleda, A.L., Maestre, R., Jara, A.J., Skarmeta, A.G.: A global perspective of smart cities: a survey. In: Proceedings of the International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Taichung, Taiwan, 3–5 July 2013, pp. 439–444
9. Sumi, L., Ranga, V.: Sensor enabled Internet of Things for smart cities. In: Proceedings of the IEEE Conferences—Fourth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 22–24 Dec 2016, pp. 295–300
10. Rovetta, A., Xiumin, F., Vicentini, F., Minghua, Z., Giusti, A., He, Q.: Early detection and evaluation of waste through sensorized containers for a collection monitoring application. Waste Manage. (New York, N.Y.) 29(12), 2939–2949 (2009)
11. Anagnostopoulos, T., Zaslavsky, A., Kolomvatsos, K., Medvedev, A., Amirian, P., Morley, J., Hadjieftymiades, S.: Challenges and opportunities of waste management in IoT-enabled smart cities: a survey. IEEE Trans. Sustain. Comput. 2(3), 275–289 (2017)
12. Shah, R., Pandey, A.B.: Concept for automated sorting robotic arm. Proc. Manuf. 20, 400–405 (2018)
13. Esmaeilian, B., Wang, B., Lewis, K., Duarte, F., Ratti, C., Behdad, S.: The future of waste management in smart and sustainable cities: a review and concept paper. Waste Manage. (New York, N.Y.) 81, 177–195 (2018)
14. Binando: https://binando.com/de (2019)
15. Bigbelly Solar: https://www.friendly-energy.at/produkte/bigbelly-solar/bigbelly-solar/ (2019)
16. Enevo: https://www.enevo.com/ (2019)
17. E Cube Labs: CleanCUBE; the solar powered trash compactor. https://www.ecubelabs.com/solar-powered-trash-compactor/ (2019)
18. Simple Human: Sensor Can. https://www.simplehuman.com/ (2019)
19. DustBot: Networked and cooperating robots for urban hygiene. www.dustbot.org/ (2006)
20. Volvo Group: The ROAR project—robot and drone in collaboration for autonomous refuse handling (2016)
21. Robarts, S.: Volvo's robot refuse collectors ROAR into life (2016)
22. Green Creative: Smart Bin R3D3. https://www.green-creative.com/en/r3d3-sorting-bin (2019)


23. Bin-E: Smart Bin. https://bine.world/ (2019)
24. Liebherr-International Deutschland GmbH: Liebherr presents its autonomous haulage surface mining solution, Biberach, Deutschland (2017)
25. Komatsu: Autonomous Haulage System (AHS). https://www.komatsu.com.au/innovation/autonomous-haulage-system (2019)
26. Volvo Germany: Volvo Construction Equipment enthüllt autonomen Maschinenprototyp (Volvo Construction Equipment unveils autonomous machine prototype) (2019)
27. Tesla: Autonomes Fahren. https://www.tesla.com/de_AT/autopilot (2019)
28. Young, C.-Y., Ni, S.-P., Fan, K.-S.: Working towards a zero waste environment in Taiwan. Waste Manage. Res. 28(3), 236–244 (2010)
29. Chang, N.-B., et al.: Comparisons between global warming potential and cost–benefit criteria for optimal planning of a municipal solid waste management system. J. Cleaner Prod. 20(1), 1–13 (2012)
30. Malakahmad, A., Khalil, N.D.: Solid waste collection system in Ipoh city. In: International Conference on Business, Engineering and Industrial Applications (ICBEIA), pp. 174–179 (2011)
31. Hannan, M.A., Arebey, M., Begum, R.A., Mustafa, A., Basri, H.: An automated solid waste bin level detection system using Gabor wavelet filters and multilayer perception. Resour. Conserv. Recycl. 72, 33–42 (2013)
32. Iloilo City: 10-Year solid waste management plan of the local government Iloilo City, Panay, Philippines. Local Solid Waste Management Board of Iloilo City, Philippines, p. 129 (2006)
33. Yigitcanlar, T., Kamruzzaman, M., Foth, M., Sabatini-Marques, J., da Costa, E., Ioppolo, G.: Can cities become smart without being sustainable? Sustain. Cities Soc. 45, 348–365 (2019)
34. Bloomfield, J., et al.: Connected clusters: landscaping study (2019)

Energy Aware Load Balancing Algorithm for Upgraded Effectiveness in Green Cloud Computing V. Malathi and V. Kavitha

Abstract Cloud computing leverages processing power and resources as a service to clients present across the world. This scheme was introduced as a way to serve customers around the world by giving high performance at a more affordable expense than dedicated high-performance computing machines. For running HPC, business and web applications, cloud computing remains an exceptionally versatile and financially savvy platform. Enormous amounts of electrical energy are utilized by server farms, leading to high service costs and carbon dioxide emissions. The power efficiency issues with server farms stem largely from the cost of power and cooling infrastructure, which forms a significant share of their operating expenses. Energy efficiency is considered a significant challenge, and it is demanding to arrive at a green answer that deals with all the patterns that influence cloud energy utilization. There are four strategies for achieving energy effectiveness: hardware-level power improvement, energy-cognizant planning in grid structures, server consolidation by means of virtualization, and power minimization. Thus, there is an increasing need for green cloud computing solutions that not only save power effectively but also reduce operational prices. A vision-based energy-efficient control for cloud computing environments is supplied here. The EALB algorithm has been introduced to foresee the load and incorporate energy productivity in overloaded and underloaded frameworks.

Keywords Green cloud · Load balancing · Power management · Task scheduling

1 Introduction

Cloud computing is the delivery of various services through the Internet, including data storage, servers, databases, system administration and software. Cloud-based storage makes it possible to save records in a remote database and retrieve them on demand [1]. Services can be both public and

V. Malathi (B) · V. Kavitha
Department of Computer Applications, Hindusthan College of Arts and Science, Coimbatore, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_22


private; public services are provided online for a fee, while private services are hosted on a network for specific customers. It is a framework that essentially includes three service areas:

• Software-as-a-Service (SaaS): provides a software application to the client on request.
• Infrastructure-as-a-Service (IaaS): provides a strategy for delivering everything from operating systems to servers and storage through IP-based connectivity.
• Platform-as-a-Service (PaaS): PaaS shares some likeness with SaaS, the essential contrast being that, as opposed to delivering software over the web, it is really a platform for creating software that is delivered by means of the Internet.

A cloud SLA (cloud service-level agreement) is an arrangement between a cloud service provider and a client that guarantees a minimum level of service is maintained. It ensures levels of reliability, availability and responsiveness of systems and applications; specifies who governs when there is a service interruption; and describes penalties if service levels are not met. Cloud computing models have become a dominant contender within the distributed systems paradigm. Using this design, consumers are offered access to resources provided by a cloud vendor as laid out in their service-level agreement. When computing services are provided through the web, this model of service delivery is called cloud computing [2]. Cloud computing provides utility-oriented IT services to clients worldwide. In data centers, green cloud computing addresses energy efficiency issues. The ever-growing demand is handled by large-scale datacenters, which consolidate masses of servers together with cooling, storage and network infrastructure. Many Internet organizations, including Google, Amazon, eBay, and Yahoo, are running such enormous datacenters around the world [3]. The commercialization of this advancement is characterized at present as cloud computing, where processing is offered as a soft product on a pay-as-you-go basis. Today, a standard datacenter with 1000 racks needs about 10 MW of power to operate, which drives up operational expense. Hence, for server farms adopting green cloud computing, energy cost is a sizeable component of their operating and up-front expenses. Likewise, in April 2007, Gartner assessed that the Information and Communication Technologies (ICT) industry produces about 2% of the total worldwide CO2 discharge, which is equivalent to the aviation business. Consistent with a report published by the European Union, a reduction in emissions of 15–30% is required before the year 2020 to keep the worldwide temperature growth in check [4].

Characteristics

There are five characteristics that outline cloud computing.


1. On Demand Self-service: if the user needs any data quickly, the user can simply log into the account of the service provider and extract the desired data without any trouble [5].
2. Network Access: users can access their information from anywhere in the world.
3. Resource Pooling: cloud computing allows several customers to tap directly into a single pool of server or disk storage or certain other kinds of specific storage.
4. Elasticity: the client sometimes needs extra resources for a short timeframe; in this situation cloud computing provides additional resources.
5. Usage Fee: once the user uses it for a specific period of time, he has to pay for that amount of time (Fig. 1).

Fig. 1 Cloud infrastructure


Cloud infrastructure is a term used to describe the components required for cloud computing, which include hardware, abstracted resources, storage, and network resources.

2 Related Work

A few methodologies have been proposed to deal with load balancing issues in cloud computing frameworks. Each of these works aimed to improve the process of distributing the outstanding workload among cloud nodes and tries to achieve ideal resource usage, minimum data processing time, minimum average response time and overload avoidance. However, the majority of these methodologies disregarded the impact of burstiness on the load balancing process.

Makroo et al. [6] have discussed the essential throttled load balancing procedure for cloud environments, wherein it is assumed that a VM can address only a single task at a time and incoming tasks are assigned to the idle VMs, which are picked arbitrarily if multiple VMs are found to be dormant.

Swarnkar et al. [7] have portrayed a traditional round-robin methodology for balancing the load. A set of accessible VMs receives the assignments on a random basis and the procedure of task allocation proceeds in circular fashion. When a task is assigned to a VM, the VM moves to the end of the VM list. The cited procedure does not know the length of the incoming tasks, so it suffers from the impediment that some centers become overloaded. Besides this, the advantage of this algorithm is that inter-process communication is not required.

Moharana et al. [8] have introduced a weighted round-robin approach for balancing the load in a cloud setting. The defined scheme is a combination of load allotment and the round-robin method. The capacity of the VM to accommodate the task drives the allocation of load to the VM, and after choosing the VM the customary round-robin approach is applied.

Sethi et al. [9] planned another load balancing strategy utilizing fuzzy logic based on a round-robin (RR) algorithm to get quantifiable upgrades in resource usage and availability of the cloud computing environment. The proposed strategy utilizes a fuzzifier to perform the fuzzification process that converts two kinds of inputs, the processor speed and the assigned load of the Virtual Machine (VM), and one output, which is the balanced load, to make an inference framework. The fuzzy-based round-robin (FRR) load balancer, contrasted with the traditional round-robin (RR) load balancer, limits the data center processing time and overall response time. The issue with round-robin algorithms in general, in any case, is that they cannot deal with bursty workloads. Even with the proposed upgrade on RR by utilizing fuzzy logic, burstiness is not considered.

In [10], a fuzzy logic load balance algorithm focused on a public cloud was proposed. The primary thought of the algorithm was to partition the cloud into a few cloud partitions with each partition having its own load balancer, and a main


controller to deal with all these partitions. Results demonstrated upgrades in resource use and availability in the cloud computing climate. The disadvantage of this methodology is the difficulty of testing the strategy in a genuine climate to ensure that it has accomplished great outcomes.

In [11], a smart burstiness-aware algorithm (ARA) to balance bursty workloads across all computing sites, and thereby improve overall framework execution, was proposed. The introduced algorithm forecasts the start and the finish of workload bursts and consequently switches on-the-fly between two schemes: "greedy" (i.e., consistently select the best site), which has superior response time under the instance of no burstiness, and "random" (i.e., arbitrarily select one), which has improved response time under burstiness. Both simulation and genuine trial effects show that this algorithm increases the performance of the cloud framework under both burst and non-burst workloads. Despite the fact that this algorithm gives great outcomes, it does not consider a significant factor in load balancing, which is the current use of accessible resources.

In [12], an approach to overcome the unused resource provisioning and the power utilization issues under bursty and fractal-behavior workloads was proposed. It comprises two stages for resource usage provisioning, called "predictive and reactive provisioning". First the forecasting module predicts the workload for the following control horizon, and afterward the controller assesses the quantity of necessary resources, for example, processing cores, for the anticipated piece of the approaching load. To evade the results of forecasting mistakes, the framework apportions additional resources that can be utilized to serve erratic loads. This assignment is made dependent on the historical backdrop of the framework activity. The proposed approach improves the resource usage and the power utilization. Then again, if some prediction error occurs past the assessment of the additional resources, it is liable to postpone getting the resource till the framework apportions accessible resources.

3 Green Cloud Computing

Green cloud computing includes planning, delivering, and utilizing computerized infrastructure in a way that diminishes its effect on the environment. A green cloud arrangement can save energy as well as significantly diminish enterprise operational expenses. Green cloud computing permits clients to use the advantages of cloud storage while diminishing its adverse impacts on the climate, ultimately influencing human prosperity [13]. Clouds mean to drive the structure of next-generation data centers by architecting them as systems of virtual services (hardware, database, UI, application logic) so clients can get to and send applications from wherever on the planet on request at genuine expenses depending upon their QoS


Fig. 2 Integrated green cloud computing architecture

prerequisites. Figure 2 shows the high-level design for supporting energy-proficient service assignment in a Green Cloud computing framework. There are basically four primary entities included:

• Consumers/Brokers
• Green Resource Allocator
• Virtual Machines
• Physical Machines.

4 Load Balancing

Load balancing is a method of redistributing the entire burden across the individual nodes to maximize the utilization of resources and the response time, while avoiding a state where some nodes are heavily loaded while others do practically no work. Load balancing guarantees that every node in the framework


performs an equivalent amount of work [14]. A load balancer captures the web traffic sent by customers, separating the traffic into individual demands and choosing which node ought to receive and handle the requests. The following is the process for a basic load balancing transaction [15].

• The consumer sends his/her request to the closest server.
• The server connects to the following servers, which act as load balancers. The load balancer then decides the destination server and forwards the request to that server.
• The selected server accepts the request, processes the request and sends the response back.
• The load balancer processes the response and forwards it to the customer.
• The client receives the response without any knowledge of the server.

Types of Load Balancing

Load balancing is of two types:

• Static load balancing algorithms: suitable for frameworks with low variation in load. They utilize prior application awareness and statistical data about the system and accordingly route the load among servers. These algorithms require earlier information on framework resources; the performance of the processors is determined at the beginning of the execution, so the choice of moving the load does not rely upon the present state of the framework [16].
• Dynamic load balancing algorithms: these look for the lightest-loaded server within the organization and then place the new fitting load on it. Here, the workload is distributed among the various processors at runtime. The algorithms in this grouping are considered complex but have higher fault tolerance and generally better execution [17]. Continuous communication with the network is required, which can increase the traffic in the framework. The present state of the framework is utilized to settle on choices to deal with the load: a dynamic algorithm reacts to the real current framework state in settling on load transfer choices, and processes are permitted to move from an overused machine to an underused machine in real time [18]. A toy sketch contrasting the two policies follows this list.
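Below is that sketch; the node names and task costs are invented, and each task's cost is assumed to be known to the dynamic balancer, which real systems only approximate.

import itertools

nodes = ["n0", "n1", "n2"]
tasks = [5, 1, 1, 8, 2, 2, 7, 1]   # hypothetical task costs (arbitrary units)

# Static policy: round-robin ignores the current state of the system.
rr = itertools.cycle(nodes)
static_load = dict.fromkeys(nodes, 0.0)
for cost in tasks:
    static_load[next(rr)] += cost

# Dynamic policy: dispatch every task to the currently least-loaded node.
dynamic_load = dict.fromkeys(nodes, 0.0)
for cost in tasks:
    target = min(dynamic_load, key=dynamic_load.get)
    dynamic_load[target] += cost

print("static :", static_load)    # uneven when task costs vary
print("dynamic:", dynamic_load)   # stays balanced because it tracks real state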


5 Existing Algorithm

5.1 One-sided Random Sampling

The one-sided (biased) random sampling algorithm [19] is a distributed and adaptable load balancing approach that utilizes random sampling of the system domain to achieve self-organization, thus balancing the load across all nodes of the framework. The load on a server is represented in this algorithm as a virtual graph that has connectivity with every node. Each server is represented as a node in the graph, with each in-degree corresponding to the free resources of the server. Every node should have at least one in-degree. The algorithm likewise utilizes a walk-length parameter for the process, which is the traversal from one node to the next. Each time a node executes a task, it removes an incoming edge, which indicates a reduction in the availability of free resources. After completion of a task, the node adds an incoming edge, indicating an increase in the availability of freed resources. For allocating a task to a node, the walk starts at a random node; the walk length is compared with the threshold, and if it is equal to or more than the threshold value, the load balancer allots the task to that node and decreases the in-degree of that node by one. A new directed graph is thereby formed, and load balancing is done in a fully decentralized manner, making it appropriate for large network frameworks like the cloud [20].
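A minimal sketch of the walk-based allocation just described, assuming a complete graph between nodes and a fixed walk-length threshold; the node count, threshold, and the way the step is biased toward free resources are illustrative simplifications of [19, 20], not the exact published scheme.

import random

# in_degree[n] stands for the free resources of node n in the virtual graph.
in_degree = {f"node{i}": random.randint(2, 5) for i in range(6)}
WALK_THRESHOLD = 3   # assumed walk-length threshold

def allocate() -> str:
    """Random walk from a random start; allocate once the walk length
    reaches the threshold, biasing each step toward nodes with free resources."""
    current = random.choice(list(in_degree))
    for _ in range(WALK_THRESHOLD):
        neighbours = [n for n in in_degree if n != current]
        weights = [in_degree[n] for n in neighbours]   # bias toward free nodes
        current = random.choices(neighbours, weights=weights)[0]
    in_degree[current] -= 1   # the task consumes one unit of free resource
    return current

def release(node: str) -> None:
    in_degree[node] += 1      # task completion restores an incoming edge

node = allocate()
print("task placed on", node, "remaining degree", in_degree[node])
release(node)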

5.2 Bumble Bee Foraging

To boost throughput in cloud computing, researchers worldwide have created the bumble bee foraging load balancing algorithm. The bumble bee foraging algorithm [20] is derived from the behavior of bees in finding and harvesting food. Bees generally look for foodstuff, and after discovering the area of food, they signal it through a waggle dance; this dance gives an idea about the value, quantity and location as well as the distance of the foodstuff. Utilizing this idea, other bees begin to gather the foodstuff. A similar methodology functions in cloud computing for load balancing. To cope with changes in the popularity of tasks, servers are grouped under virtual servers, each having its own virtual service queue. Each server processing a request from its queue calculates a reward or credit on the basis of CPU usage, which corresponds to the quality that the bumble bees display in their waggle dance, and posts it on the advert board. Each of the servers takes the role of either a forager or a scout. A server serving a request computes its profit and compares it with the colony profit; if the profit is high, then the server remains in its current forager role, and if it turns out to be low, then the server returns to scout behavior, thereby balancing the load across the servers [20]. A toy sketch of this role switching is given below.
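In the sketch, the profit formula, utilization figures, and colony profit level are invented for illustration; the published scheme derives them from measured queue rewards.

import random

COLONY_PROFIT = 0.4   # assumed running profit level advertised by the colony

class Server:
    """Server that keeps foraging its queue while profitable, else scouts."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.role = "scout"

    def serve_one(self) -> None:
        cpu_util = random.uniform(0.1, 1.0)   # stand-in for measured CPU utilization
        profit = 1.0 - cpu_util               # spare capacity treated as profit
        # High profit keeps the server foraging this queue (the waggle-dance
        # advert); low profit sends it back to scouting, shedding load.
        self.role = "forager" if profit >= COLONY_PROFIT else "scout"

servers = [Server(f"s{i}") for i in range(4)]
for s in servers:
    s.serve_one()
    print(s.name, "->", s.role)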


6 Proposed Algorithm

The EALB algorithm is intended to be used by organizations expecting to realize small to medium-sized local clouds. This algorithm should scale to larger clouds, since one of the main responsibilities of the cluster controller is load balancing of compute nodes. In the EALB algorithm [19], the utilization rates of each compute node are evaluated. This helps in choosing the number of computing nodes that should continue working even as other nodes completely shut down. The algorithm has three parts: a balancing part, an upscale portion and a downscale segment. The balancing section is answerable for sorting out where virtual machines may be started up, subject to utilization possibilities. The upscale piece powers on additional compute nodes, and the downscale section shuts down the idle compute nodes. This algorithm is ensured to cut down the overall power utilization while keeping up the provisioning of resources in contrast to other load balancing algorithms.

In this work, our scheduler algorithm is based on a centralized scheduling scheme, i.e., it accumulates the entire information regarding tasks and then dispatches them to systems according to rules of suitable power consumption. The scheduler creates the jobs (load) and keeps the queue. The scheduler additionally checks the temperature of a system before dispatching a task to it: if the temperature rises above the critical temperature, that system is discarded, and the scheduler checks the next system with a temperature less than the critical one. If the temperature is below the critical value, the scheduler then checks the power consumption of that system. It broadcasts messages to check the power consumption of all the systems and maintains a queue of records of each system with less power. In our energy-efficient algorithm, the relevant specifications are critical temperature, least temperature and electricity consumption.

Energy aware load balancing algorithm


Balancing:
    for all active compute nodes k <= p do
        q_k <- current utilization of compute node k
    end for
    if all q_k > 75% utilization then   // all available nodes are active
        boot VM on most underutilized q_k
    else
        boot VM on most utilized q_k
    end if

Upscale:
    if every q_k > 75% utilization then
        if k < p then
            boot compute node q_(k+1)
        end if
    end if

Downscale:
    if VM_k inactive > 6 hours or user-initiated shutdown then
        shut down VM_k
    end if
    if q_k has no active VM then
        shut down compute node q_k
    end if

The EALB algorithm has three major fragments, with a runnable sketch given after this list.

1. The balancing segment is liable for sorting out where virtual machines will be launched. It organizes this by first assembling the utilization level of every active compute node. In the case that all compute nodes q are above 75% utilization, EALB starts up the new virtual machine on the compute node with the least utilization number. It is worth noting that in the circumstance where all compute nodes are above 75% utilization, all the available compute nodes are operational. Otherwise, the new virtual machine (VM) is booted on the highest-utilization compute node. The threshold of 75% utilization was picked since, when 25% of the resources are open, at least one more virtual machine can always be accommodated using three out of the five available configurations.
2. The upscale part of the algorithm is utilized to power on additional compute nodes. It does this if over 75% is utilized by all currently working compute nodes.
3. The downscale portion is liable for shutting down dormant compute nodes. In case a compute node is using under 25% of its resources, EALB sends a shutdown request to that node.


Load Balancing Metrics and Comparison of Algorithms

Subsequent to inspecting the dynamic load balancing algorithms, we have compared the algorithms on the basis of some predefined measurements [21]. These measurements are as follows:

• Throughput: the measure for registering the number of jobs that have finished their execution. It should be high to boost the capability of the system.
• Overhead Associated: determines the amount of overhead incurred while executing an algorithm.
• Adaption of Internal Failure: the ability of an algorithm to work reliably regardless of any arbitrary failure in the system.
• Migration Time: the time needed to redistribute the resources from one node to another. The intention is to keep it minimal so as to quicken the overall execution of the system.
• Reaction Time: the amount of time an algorithm takes to respond. It ought to be minimal for stable working of the system.
• Asset Utilization: used to check the usage of resources. It must be improved for green load balancing.
• Adaptability: the ability of an algorithm to do load balancing for a machine with any finite number of nodes.
• Execution: used to test the performance of the framework. This should be improved at a reasonable cost (Table 1).

Experimental Setup

We have completed the analysis in the cloud simulator, where we have implemented the load balancing scheduler algorithm in .NET. Existing algorithms in the cloud simulator, like one-sided random sampling and bumble bee foraging, also work on energy-proficient ideas. After contrasting our algorithm with the existing algorithms, the tables of comparison are given below. We have executed the program multiple times for the assessment of the average time taken by the scheduler to maintain the qualified VM queue for load balancing. The outcomes are reported below, and a small sketch of how such metrics are computed from a trace follows (Fig. 3; Table 2).
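The first two metrics can be computed directly from a simulation trace; the trace values below are invented for illustration.

# Hypothetical trace: (submit_time, finish_time) in seconds for each task.
trace = [(0.0, 1.2), (0.5, 1.9), (1.0, 3.4), (1.5, 3.8), (2.0, 5.1)]

makespan = max(f for _, f in trace) - min(s for s, _ in trace)
throughput = len(trace) / makespan                         # completed jobs per second
avg_reaction = sum(f - s for s, f in trace) / len(trace)   # mean response time

print(f"throughput   : {throughput:.2f} jobs/s")
print(f"avg reaction : {avg_reaction:.2f} s")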

7 Conclusion

In this research work, a newly characterized EALB algorithm framework is presented through which energy optimization is acquired. The proposed model is best suited for the dynamic allocation of resources by anticipating the load in a similar environment. Energy aware load balancing is a crucial and demanding research area nowadays. Temperature- and control-based scheduling plans will assist in decreasing the cooling cost and will increase the reliability of the computing resources in the cloud computing paradigm. The algorithm also remains useful

Table 1 Comparisons of load balancing techniques

Parameters (→) Algorithms (↓) | Throughput | Overhead | Adaption of internal failure | Migration time | Reaction time | Assets utilization | Adaptability | Execution
ORS | Poor | Good | Poor | Poor | Poor | Good | Poor | Good
BBF | Poor | Poor | Poor | Poor | Poor | Good | Poor | Poor
PALB | Good | Good | Good | Good | Good | Good | Good | Good


Fig. 3 Comparison chart of energy efficiency


Table 2 Comparison of time complexity and energy efficiency


Algorithm | Time complexity (MHz) | Energy efficiency (kWh)
ORS | O(n^4) | 89.86
BBF | O(n^2) | 88.12
EALB | O(n) | 87.65

in lowering the CO2 emission. The intention of our study is to limit the temperature of the computing nodes and to distribute the workload in a green manner, considering the temperature and power balance of the device. The most remarkable route toward energy saving is virtualization. In future, the EALB algorithm can be expanded further for mixed environments. The framework model may be extended to handle a greater assortment of workloads and application organizations for a vastly improved simulation of cloud conditions. The energy aware load balancing algorithm keeps track of the state of all compute nodes and, supported by utilization probabilities, decides the number of computing nodes that need to be running.

References

1. Gayathri, B., Anbuselvi, R.: Hybrid approach for enhancing the metrics in green cloud computing. IJARCSSE 7(11) (2017)
2. Kathuria, S.: A survey on security provided by multi-clouds in cloud computing. Int. J. Sci. Res. Netw. Secur. Commun. 6(1), 23–27 (2018)
3. Jain, A., Kumar, R.: Hybrid load balancing approach for cloud environment. Int. J. Commun. Netw. Distrib. Syst. 18(3–4), 264–286 (2017)
4. Malathi, V.: Energy attentive resource allocation using enhanced best fit decreasing with energy saving (EBFD-ES) algorithm for VM allocation in cloud. Int. J. Adv. Sci. Technol. 29(06), 9265–9268 (2020). ISSN: 2005-4238


5. Malathi, V.: A study on various load balancing algorithm in cloud computing environment. Int. J. Adv. Innovative Res. 6(1)(vii) (2019). ISSN: 2394-7780
6. Makroo, A., Dahiya, D.: An efficient VM load balancer for Cloud. Applied Mathematics, Computational Science and Engineering, pp. 333–359 (2014)
7. Swarnkar, N., Singh, A.K., Shankar: A survey of load balancing technique in cloud computing. Int. J. Eng. Res. Technol. 2(8), 800–804 (2013)
8. Moharana, S.S., Ramesh, R.D., Powar, D.: Analysis of load balancers in cloud computing. Int. J. Comput. Sci. Eng. (IJCSE) 2(2), 101–108 (2013)
9. Sethi, S., Anupama, S., Jena, S.K.: Efficient load balancing in cloud computing using fuzzy logic. IOSR J. Eng. 2(7), 65–71 (2012)
10. Singhal, U., Jain, S.: A new fuzzy logic and GSO based load balancing mechanism for public cloud. Int. J. Grid Distrib. Comput. 7(5), 97–110 (2014)
11. Tai, J., Zhang, J., Li, J., Meleis, W., Mi, N.: ARA: adaptive resource allocation for cloud computing environments under bursty workloads. In: 2011 IEEE International Performance Computing and Communications Conference, pp. 1–8, USA, 17–19 Nov 2011
12. Ghorbani, M., Wang, Y., Xue, Y., Pedram, M., Bogdan, P.: Prediction and control of bursty cloud workloads: a fractal framework. In: 2014 International Conference on Hardware/Software Codesign and System Synthesis, pp. 1–9, New Delhi, 12–17 Oct 2014
13. Kumar, S.R.: Various dynamic load balancing algorithms in cloud environment: a survey. Int. J. Comput. Appl. 129(6), 0975–8887 (2015)
14. Randles, M., Lamb, D., Taleb-Bendiab, A.: A comparative study into distributed load balancing algorithms for cloud computing. In: Proceedings of IEEE 24th International Conference on Advanced Information Networking and Applications Workshops (WAINA), Perth, Australia, Apr 2010
15. Malathi, V.: Analysis and survey on various cloud computing algorithms and its challenges. Int. J. Emerg. Technol. Innovative Res. 6(6) (2019). ISSN: 2346-5162
16. Kashyap, D., Viradiya, J.: A survey of various load balancing algorithms in cloud computing. Int. J. Sci. Technol. Res. 3(11) (2014)
17. Kumar, S., Rana, D.S.: Various dynamic load balancing algorithms in cloud environment: a survey. Int. J. Comput. Appl. 129(6), 0975–8887 (2015)
18. Malathi, V.: Green cloud computing: demand allocation. Adalya J. 8(12) (2019). ISSN: 1301-2746
19. Galloway, J.M., Smith, K.L., Vrbsky, S.S.: Power aware load balancing for cloud computing. In: Proceedings of the World Congress on Engineering and Computer Science, vol. 1 (2011)
20. Kumar, S., Rana, D.S., Dimri, S.C.: Fault tolerance and load balancing algorithm in cloud computing: a survey. Int. J. Adv. Res. Comput. Commun. Eng. (2015)
21. Garg, A., Dang, S.: Load balancing techniques, challenges & performance metrics. Motherhood Int. J. Multi. Res. Develop. 2(1), 19–27 (2017)

Review on Health and Productivity Analysis in Soil Moisture Parameters M. Meenakshi and R. Naresh

Abstract The term "Smart Agriculture" is increasingly widespread as Machine Learning (ML) brings digital technology to a traditionally non-digital field. In the field of agriculture, soil moisture prediction is highly beneficial to farmers. Soil health is analyzed through the nutrients present in the surface of the soil. Nutrients such as phosphorus, potassium, and nitrogen play an important role in crop productivity. For crop yield, understanding soil moisture is vital, as it influences hydrological and agricultural processes and the interaction between the atmosphere and land. In this paper, distinctive machine learning techniques utilized in foreseeing the soil type and soil moisture parameters are examined. This study exploits and compares soil moisture parameters for crop harvest using various machine learning techniques, to suggest which fertilizer and crop are suitable investments. The novelty of this paper lies in the use of various regression algorithms to accurately estimate the nutrients present in soil for productivity.

Keywords Soil moisture · Parameters · Regression

1 Introduction

Agriculture is a non-technical area wherein technology may be integrated for its betterment. Agricultural technology needs to be quick to implement and easy to adopt. Farmers generally follow a practice known as crop rotation after each consequent crop yield. This approach is historically applied in many countries, where crop alteration is carried out after a loss in yield caused by continuously cultivating the identical crop. Crop rotation permits the soil to regain the minerals that were utilized by the previous crop and to utilize the leftover minerals for cultivating the brand new crop. This technique will assist in consistently preserving the soil

M. Meenakshi
SRM Institute of Science and Technology, Kattankulathur, Tamilnadu 603203, India

R. Naresh (B)
Associate Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamilnadu 603203, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_23


fertility. To realize that the soil has reached the point where it is no longer fit to yield the specific crop, the farmer has to experience a loss in the yield.

1.1 Soil Based on Texture

S. no. | Soil types | Presence | Suitable crops
1 | Sandy soil | 60% sand and clay | Maize, coconut, barley, millet, cotton, cactus
2 | Clayey soil | Fine particles of clay | Paddy
3 | Loamy soil | Sand, clay and silt | Cotton, jute, pulses, sugarcane, vegetables, oil-seeds, wheat

1.2 Soil Based on Color

S. no. | Soil types | Presence | Suitable crops
1 | Red soil | Iron-oxide | Millet, groundnuts, pulses, tobacco, cotton
2 | Black soil | Lava rocks and is rich in clay | Wheat, millet, sugarcane, tobacco, cotton, oil-seeds

The continuing capacity of soil to act as a critical living environment that sustains animals, humans, and plants is soil health, also referred to as soil quality. This concept corresponds to the value of soil conservation, ensuring that subsequent generations stay healthy. To use it, we have to note that soil contains organisms that perform the functions needed to generate food and fiber when supplied with the basic necessities of life: food, shelter, and water. Clean air and water, bountiful crops and forests, fertile grazing lands, abundant wildlife, and beautiful landscapes are provided to us by healthy soil. Soil does all this by performing five important functions:

• Regulating water: soil helps regulate where water goes from rain, snowmelt, and irrigation. Water and dissolved solutes drain over the ground or into and through the soil.
• Sustaining animals and plants: the variety and sustainability of living organisms rely on the soil.
• Filtering and buffering of possible contaminants: the filtering, buffering, degrading, immobilization and detoxification of organic and inorganic materials, including municipal and industrial by-products and atmospheric compounds, is the responsibility of the minerals and microbes in the soils.


• Cycling nutrients: the soil stores, transforms, and cycles carbon, nitrogen, phosphorus, and many other nutrients.
• Physical stability and protection: soil structure provides a medium for plant roots. Soils also provide shelter for human structures and protection for archeological treasures.

Soil is one among the important resources. Classification of soil along with its mapping is incredibly vital for agricultural purposes. Different soils have different kinds of features, and different types of crops are grown on each type of soil. The features and properties of assorted soils need to be understood in order to know which crops grow better in a specific kind of soil. In this case, machine learning techniques can be used. Even now, machine learning is an emerging and rigorous analysis field in agricultural data analysis. Soil is very important for yielding a crop. To classify a soil, there is a link between different types of natural entities and soil samples. On site, soil classification helps to gauge the overall quality of the location to obtain the physical and mechanical properties, and, for construction, it helps to choose the quality.

[Figure: Architecture for Soil Moisture Analysis. A soil dataset and soil parameters (pH sensor readings, label encoding, sampling technique) feed a soil prediction module that performs data pre-processing and cleaning and applies algorithm techniques; a prediction algorithm produces the results: predicted crops and soil health prediction.]
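A minimal end-to-end sketch of the pictured pipeline. scikit-learn is assumed as the ML library, the feature names, target, and synthetic values are placeholders for real sensor readings, and a label-encoding step for categorical soil type would precede this in a full system.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Placeholder soil samples: [pH, moisture %, temperature C] -> nitrogen level.
rng = np.random.default_rng(0)
X = rng.uniform([4.5, 5.0, 15.0], [8.5, 45.0, 40.0], size=(200, 3))
y = 0.4 * X[:, 1] - 2.0 * np.abs(X[:, 0] - 6.5) + rng.normal(0, 1.0, 200)

# Train a regression model on the cleaned features and score it on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(r2_score(y_te, model.predict(X_te)), 3))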

2 Materials and Methods

Commercial Ultra-Wide Band (UWB) modules offer a low-cost means of soil moisture estimation: by measuring the Time of Flight (ToF) between UWB transceivers, the soil dielectric constant has been estimated. The channel impulse response of the UWB modules is


available besides the ToF. A support vector machine (SVM) supports machine learning characterization of the soil type, and this measured soil type characterization makes the moisture-dielectric constant relationship more accurate [10].

For the estimation of soil moisture content, the support vector regression (SVR) technique is a legitimate candidate, as it has desirable robustness to noise and intrinsic generalization capability, both appealing properties of SVR. Its effectiveness in this application is assessed through the use of field measurements and by considering numerous combinations of the available features. The overall performance of the SVR method is compared with a benchmark, the multilayer perceptron neural network, in a real-time application for building a soil moisture estimation process from satellite data [1].

A recurrent dynamic learning neural network (RDLNN) uses rainfall for soil moisture evaluation. Rainfall measurements and soil moisture content are gathered as daily or hourly precipitation over a long time period. Soil moisture contents estimated from daily and/or hourly precipitation by means of the RDLNN were compared with ground measurements. For estimating soil moisture hourly, the RDLNN is a good tool [2].

The Soil Moisture Ocean Salinity (SMOS) product, provided to monitor the weather at a spatial resolution of twenty-five km, is down-scaled to one km, compatible with crop models. An ensemble Kalman filter assimilates the probabilistic relationships of the downscaled soil moisture, and a state-vector approach is practical for estimating the parameters. This rain-fed location was affected by a shortened agricultural season, indicated by decreased precipitation. CONAB and IBGE data validated the determined crop yields [4].

The Particle Filter (PF) algorithm estimates the soil state density, with typically used methods such as Sampling Importance Resampling (PF-SIR) and PF Markov chain Monte Carlo sampling (PF-MCMC). In this experiment, the potential of assimilating remotely sensed near-surface soil moisture measurements into a 1-D mechanistic soil water model (HYDRUS-1D) using the PF-SIR and PF-MCMC algorithms is analyzed. The synthetic assimilation outcomes indicated that the PF-MCMC scheme is consistently better than PF-SIR, while each updating scheme showed the capacity to correct the soil moisture and estimate the hydraulic parameters. Assimilating remotely sensed soil moisture into the model yields accurate statistics [5].

Soil moisture retrieval exploits the approximately linear dependence of radar backscatter on volumetric soil moisture content, together with ancillary statistics. Radiometer and L-band Aquarius radar data support soil moisture parameter estimation based on two polarizations. The proposed approach of soil moisture estimation is promising for its independence from ancillary data and its simplicity. The performance of the land-use classification scheme in estimating soil moisture changes varies by season; the sensitivity of the radar signal to soil moisture variation is relatively small, which affects retrieval accuracy [6].
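The SVR-versus-MLP comparison summarized above can be sketched with synthetic data; the feature construction below is a placeholder and scikit-learn is assumed, so this only mirrors the shape of the experiment in [1], not its field measurements.

import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for field measurements: reflectance-like features -> moisture.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 4))
moisture = 30.0 * X[:, 0] - 10.0 * X[:, 1] + 5.0 + rng.normal(0, 2.0, 300)

svr = SVR(kernel="rbf", C=10.0)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)

# Cross-validated R^2 lets the two regressors be compared on equal footing.
for name, model in [("SVR", svr), ("MLP", mlp)]:
    scores = cross_val_score(model, X, moisture, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")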


introduced to construct the soil moisture relationship model. Evaluated against the commonly used purely empirical method, this technique shows good capacity to capture the spatio-temporal variation of surface soil moisture and good potential for passive microwave soil moisture downscaling studies [7]. For soil moisture (SM) retrieval, two fuzzy logic systems (FLSs) are compared: (1) the adaptive network-based fuzzy inference system (ANFIS) and (2) a type-1 FLS; to extract the fuzzy parameters, two machine learning algorithms are applied: (1) an artificial neural network and (2) random forest (RF), with SM classification applied after principal component analysis. The extraction of VWC classes from UWB radar soil echoes yields a powerful SM retrieval [8]. Ensemble strategies are applied to ultra-wide-band (UWB) radar echoes to forecast soil pH values, because the ensemble technique has a fast running speed, fewer parameters, and does not require a large quantity of data. Sixteen classes of UWB echoes of soil with different pH values are gathered and investigated using four kinds of ensemble strategies: bagging, random forest, AdaBoost, and gradient boost. Principal component analysis (PCA) is used to reduce the dimensionality of the raw data and thus the overall quantity of computation. First, the PCA algorithm is applied to extract the features for the prediction models; finally, the models' performance is compared under different SNRs. The simulation results show that, with the feature dimension varied between 4 and 10, AdaBoost and gradient boost perform worse than random forest and bagging [11]. Sensing nodes monitor the ground and environment of the crop field, recording soil temperature, soil moisture, ultraviolet (UV) radiation, relative humidity, and air temperature. The proposed system's intelligence considers the sensed weather parameters UV, humidity, and air temperature. The data from the deployed sensor nodes are accumulated in the cloud, where the entire system has been developed and deployed. A web-based decision-support and visualization service provides the data in real time, sensor and weather data are analyzed to produce forecasts, and a closed-loop irrigation scheme controls and realizes the delivery of water [9]. The analysis lays the groundwork for a method that gives access to a greater part of the root-zone soil moisture. Further development of the strategy should give micro-profile measurements of soil moisture over the root-zone depth. The strategy builds on the emerging technique of millimeter waves, which provides improved spatial resolution of the subsurface coincident with surface mapping, and can help identify processes governed by the soil–water interface, such as infiltration, crusting, runoff, and soil erosion [3]. The Global Navigation Satellite System (GNSS) is a satellite navigation system for global geospatial positioning. Soil moisture retrieval with Global Navigation Satellite System—Interferometry and Reflectometry (GNSS-IR), which senses the near field, can be applied on the existing GNSS observation networks. Parameters play a


significant role in soil moisture retrieval using GNSS-IR, and different selections can have a great impact on the results. This study investigates the effects of different parameter selections on soil moisture retrieval using GNSS-IR [12–16].
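To make the UWB time-of-flight principle from the start of this section concrete, the following minimal sketch converts a measured ToF into a dielectric constant and then into volumetric water content via the widely used empirical Topp equation. The path length and timing values are illustrative assumptions, not figures from the reviewed papers.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def dielectric_from_tof(tof_s: float, path_m: float) -> float:
    """Relative dielectric constant from one-way UWB time of flight
    over a known propagation path through the soil."""
    velocity = path_m / tof_s            # wave speed in soil (m/s)
    return (C / velocity) ** 2           # eps_r = (c / v)^2

def moisture_from_dielectric(eps_r: float) -> float:
    """Volumetric water content via the empirical Topp equation."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3)

# Example: 0.20 m path, 3.0 ns measured time of flight (hypothetical values)
eps = dielectric_from_tof(3.0e-9, 0.20)
print(f"eps_r = {eps:.1f}, theta_v = {moisture_from_dielectric(eps):.3f}")
```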

3 Results and Discussion A significant component of ecological and biological processes is the amount of water held in the soil, which is used in applications such as irrigation, soil protection, water conservation, and weather prediction. Soils normally hold a small amount of water, which can be represented as the moisture content of the soil. Moisture occurs in the soil between the solid particles and soil aggregates, in what is known as the inter-aggregate pore space, and within pores in the soil aggregates themselves, known as the intra-aggregate pore space. The soil is absolutely dry if the pore space is filled entirely by air. Soil nutrients are calculated for crop yield by a regression algorithm, as sketched below.
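As a minimal illustration of that regression step, a linear model could relate measured soil properties to yield; every feature name and number below is hypothetical, chosen only to make the sketch self-contained and runnable.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row is [soil moisture, N, P, K]
# and y is the observed crop yield for that plot (arbitrary units).
X = np.array([[0.28, 140, 22, 180],
              [0.31, 155, 25, 200],
              [0.19, 110, 18, 150],
              [0.24, 130, 21, 170]])
y = np.array([3.1, 3.6, 2.2, 2.8])

model = LinearRegression().fit(X, y)          # ordinary least-squares fit
print(model.coef_, model.intercept_)          # per-nutrient weights
print(model.predict([[0.26, 135, 20, 175]]))  # predicted yield for a new plot
```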

4 Conclusion In the machine learning approaches reviewed here, the soil depth is treated as static for crop yield estimation. The phosphorus, potassium, and nitrogen nutrients are to be calculated to review the health of the soil parameters. Using this model, it is possible to apply soil moisture prediction and provide technical support for irrigation strategies and dryness regulation.

Article | Algorithm/modules | Method | Accuracy | Advantages
[1] | Support vector regression (SVR) and multilayer perceptron (MLP), Pasolli et al. (2011) | SVR estimation method | MLP—0.74, SVR—0.61 | Greater estimation accuracy, stable, robust to outliers
[2] | Recurrent dynamic learning neural network (RDLNN), Tzeng et al. (2012) | MODIS data: Land Surface Temperature (LST), Normalized Difference Vegetation Index (NDVI) | RDLNN—60% | RDLNN is a good tool for hourly soil moisture predictions
[3] | Multivariate linear regression, Eliran et al. (2013) | Backscattering coefficient | RMSE—80% | The backscattering measure supports prediction of the wetness content
[4] | Downscaling algorithm, Chakrabarti et al. (2014) | Kalman filter | EnKF S1—673, EnKF S2—805 | Assimilated crop yield was improved
[5] | Particle filter (PF) algorithm, Yan et al. (2015) | Particle filter Markov chain Monte Carlo sampling (PF-MCMC) and sampling importance resampling (PF-SIR) | PF-MCMC RMSE—0.032–0.031, PF-SIR RMSE—0.032–0.033 | Marginal gains at both the surface and the root-zone soil layer over the open-loop simulation
[6] | Radar retrieval algorithm, Mariko et al. (2016) | International Geosphere-Biosphere Programme (IGBP) classification | RVI of 0.37–0.53 for rice | Quantifies the variation in radar sensitivity to soil moisture and the seasonal change in water-content characteristics
[7] | Random forest, Zhao et al. (2017) | AMSR-E, MODIS, relationship model | RF—0.7 | Estimation accuracy, stable, robust to outliers
[8] | Random forest (RF) and artificial neural network (ANN), Liang et al. (2018) | Type-1 FLS, adaptive network-based fuzzy inference system (ANFIS) | ANFIS—95.96% CRR | A powerful UWB radar and VWC classification technique specializing in SM retrieval from soil echoes
[9] | Prediction algorithm, Goapa et al. (2018) | SVR and K-means + SVR | SVR MSE—0.15, K-means + SVR MSE—0.10 | Accuracy and error rate are better than the expected values
[10] | Ultra-wide band (UWB) with support vector machine (SVM), Malajner et al. (2019) | Conventional time-domain reflectometry (TDR) method | TDR—12.5%, RMSE—7.5% | Small probe hardware for the RF signals used, allowing a higher operating frequency
[11] | Random forest (RF), AdaBoost, gradient boost, bagging predictors, Yang et al. (2019) | Ensemble method | RF—0.995, Bagging—0.995, AdaBoost—0.985, Gradient boost—0.997 | AdaBoost and gradient boost perform worse than random forest and bagging in soil pH prediction
[12] | Sat-track algorithm, Zhu et al. (2020) | Global Navigation Satellite System—Interferometry and Reflectometry (GNSS-IR) | RMSE—20.9%, MAE—20.4% | Different parameter selections affect soil moisture retrieval using GNSS-IR


References
1. Pasolli, L., Notarnicola, C., Bruzzone, L.: Estimating soil moisture with the support vector regression technique. IEEE Geosci. Remote Sens. Lett. 8(6) (2011)
2. Tzeng, Y.C., Fan, K.T., Lin, C.Y., Lee, Y.J., Chen, K.S.: Estimation of soil moisture dynamics using a recurrent dynamic learning neural network. IEEE (2012)
3. Eliran, A., Goldshleger, N., Yahalom, A., Ben-Dor, E., Agassi, M.: Empirical model for backscattering at millimeter-wave frequency by bare soil subsurface with varied moisture content. IEEE Geosci. Remote Sens. Lett. 10(6) (2013)
4. Chakrabarti, S., Bongiovanni, T., Judge, J., Zotarelli, L., Bayer, C.: Assimilation of SMOS soil moisture for quantifying drought impacts on crop yield in agricultural regions. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 7(9) (2014)
5. Yan, H., DeChant, C.M., Moradkhani, H.: Improving soil moisture profile prediction with the particle filter-Markov Chain Monte Carlo method. IEEE Trans. Geosci. Remote Sens. 53(11) (2015)
6. Burgin, M.S., van Zyl, J.J.: Analysis of polarimetric radar data and soil moisture from Aquarius: towards a regression-based soil moisture estimation algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 9(8) (2016)
7. Zhao, W., Li, A., Pan, H., He, J., Ma, X.: Surface soil moisture relationship model construction based on random forest method. IGARSS IEEE (2017)
8. Liang, J., Liu, X., Liao, K.: Soil moisture retrieval using UWB echoes via fuzzy logic and machine learning. IEEE Internet Things J. 5(5) (2018)
9. Goapa, A., Sharmab, D., Shuklab, A.K., Rama Krishnaa, C.: An IoT based smart irrigation management system using machine learning and open source technologies, pp. 0168–1699. Elsevier B.V. (2018)
10. Malajner, M., Gleich, D., Planinsic, P.: Soil type characterization for moisture estimation using machine learning and UWB-time of flight measurements, pp. 0263–2241. https://doi.org/10.1016/j.measurement.2019.06.042. Elsevier Ltd. (2019)
11. Yang, C., Liang, J.: Soil pH value forecasting using UWB echoes based on ensemble methods, special section on mission critical sensors and sensor networks (MC-SSN). IEEE Access, 2956170 (2019)
12. Zhu, Y., Shen, F., Sui, M., Cao, X.: Effects of parameter selections on soil moisture retrieval using GNSS-IR. IEEE Access (2020). https://doi.org/10.1109/ACCESS.2020.3039504
13. Raj, G.S., Lijin Das, S.: Survey on soil classification using different techniques. Int. Res. J. Eng. Technol. (IRJET) (2020)
14. Elijah, O., Orikumhi, I., Rahman, T.A., Babale, S.A., Orakwue, S.I.: Enabling smart agriculture in Nigeria: application of IoT and data analytics. In: International Conference on Electro-Technology for National Development (NIGERCON), Owerri, Nigeria, Nov 2017, pp. 762–766
15. Li, L.: Application of the Internet of Things in green agricultural products supply chain management. In: Proceedings of International Conference on Intelligent Computation Technology and Automation (ICICTA), vol. 1, pp. 1022–1025. Shenzhen, China (2011)
16. Díaz, S.E., Pérez, J.C., Mateos, A.C., Marinescu, M.-C., Guerra, B.B.: A novel methodology for the monitoring of the agricultural production process based on wireless sensor networks. Comput. Electron. Agric. 76(2), 252–265 (2011)

Soft Computing-Based Optimization of pH Control System of Sugar Mill Sandeep Kumar Sunori, Pushpa Bhakuni Negi, Amit Mittal, Bhawana, Pratul Goyal, and Pradeep Kumar Juneja

Abstract In the sugar mill, the raw juice yielded from the juice extraction plant is not pure and hence has a low pH value. Controlling the pH of this solution is of great significance, as the color and purity of the produced sugar crystals are severely governed by it. To accomplish this control, a regulated quantity of lime milk is added to obtain the desired pH value. In practice, the control system design for this process is very challenging due to its nonlinear and time-variant behavior. This paper addresses the control system design for this pH neutralization process using MATLAB. In this work, a PI control system is developed for the process, and for improving its response, the soft computing techniques GA and SA are used. The control performance of all the designed controllers is finally compared. Keywords Sugar mill · Optimize · Genetic algorithm · Simulated annealing · Control system · Settling time · pH value

1 Introduction The very first stage in a sugar factory, juice extraction, produces the raw juice, which is highly impure. Therefore, first of all, it is filtered in the clarifier. Then a suitable quantity of milk of lime is mixed into it for neutralization of its pH value. Finally, the resulting solution is evaporated, and crystallization is performed with the residual syrup, resulting in the growth of crystals [1]. Studies show that temperature and pH value are the significant parameters controlling the inversion of sucrose to glucose and fructose; a temperature rise above room temperature promotes the inversion process. The architecture of the pH control system for the sugar mill is depicted in Fig. 1. The pH value of the primary juice extracted from the crushing process is less than 7. This juice is made to undergo carbonation by adding milk of lime. By this, the pH of S. K. Sunori (B) · P. B. Negi · A. Mittal · P. Goyal Bhimtal Campus, Graphic Era Hill University, Dehradun, India Bhawana · P. K. Juneja Graphic Era University, Dehradun, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_24


Fig. 1 pH control system of sugar mill [2] (block diagram: the controller sends a control signal to the lime feed pump, which draws from the lime container; lime is blended with the sugarcane juice in the mixer before sulphitation, and a feedback signal is returned to the controller)

this juice rises up to a value of more than 7. The resulting juice is now made to undergo sulphitation by adding SO2 to it. After sulphitation, the final pH becomes almost neutral, i.e., nearly 7.

2 Genetic Algorithm Genetic algorithm (GA) mimics the reproduction process in living beings, in which the genes of parents combine to constitute the genes of their children [3]. For the generation of a new population in every GA iteration, the operations used are selection, crossover, and mutation [4, 5]. The crossover process is performed with the fittest individuals, termed chromosomes. In this process, there is a mutual interchange of genetic information fragments between the selected chromosomes, and hence new solutions, termed offspring, are generated, having some good properties inherited from both parents. After crossover, the offspring undergo the mutation operation by altering some genes of the chromosome strings. Finally, the newly produced, fitter offspring replace the old population of solutions. The most attractive aspect of using GA as an optimization algorithm for finding a global optimum solution is that it does not require searching the complete space of solutions. The execution time only grows as the square of the size of the project [6]. The flow chart of GA is presented in Fig. 2.

Fig. 2 Flowchart of GA [7, 8]: generation of initial population → computation of fitness of every solution → selection of fit individuals → crossover → mutation → generation of new population; repeat until the stopping condition is satisfied, then stop
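For readers who prefer code to flowcharts, the cycle of Fig. 2 can be sketched in a few lines of Python. This is a minimal illustration with a toy fitness function, not the MATLAB implementation used later in this paper; the population size, crossover fraction, and generation count echo Table 1, while the mutation rate and bitstring length are assumptions.

```python
import random

def evolve(fitness, n_bits=16, pop_size=50, generations=200,
           crossover_fraction=0.7, mutation_rate=0.02):
    """Minimal GA sketch: roulette-wheel selection, two-point
    crossover and bit-flip mutation on bitstring chromosomes."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1e-9 for ind in pop]  # roulette wheel
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)
            c1, c2 = p1[:], p2[:]
            if random.random() < crossover_fraction:    # two-point crossover
                i, j = sorted(random.sample(range(n_bits), 2))
                c1[i:j], c2[i:j] = p2[i:j], p1[i:j]
            for c in (c1, c2):                          # bit-flip mutation
                for k in range(n_bits):
                    if random.random() < mutation_rate:
                        c[k] ^= 1
            new_pop += [c1, c2]
        pop = new_pop[:pop_size]                        # new generation
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # toy objective: maximize the number of 1-bits
print(best, sum(best))
```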

3 Simulated Annealing Technique The flowchart of the SA (simulated annealing) approach is presented in Fig. 3. It is an iterative procedure which is executed to minimize a predefined objective function. The SA algorithm is an imitation of the annealing process performed for removing defects of solid structures, in which the solid is first heated and then allowed to cool down gradually [12]. SA is an iterative algorithm with a decrease in temperature in every iteration, of which some finite number of iterations are executed until the temperature becomes minimum. The rate of temperature fall has to be carefully chosen to get the best result. Let F be the objective function formulated for the given problem; then the acceptance probability of choosing a worse point as the present state is given in Eq. (1) [13, 14]:

P = e^((F1 − F2)/(KT)) (1)

Fig. 3 Flowchart of SA [9–11]: start with a random solution Z1 and compute its objective value F1; generate a new solution Z2 and compute F2. If F2 is not worse than F1, or if F2 is worse but P > N for a random number N in the interval (0, 1), set Z1 = Z2 and F1 = F2. After the final iteration at the current temperature, reduce the temperature, re-initialize the iteration count, and repeat until the stopping criterion is met

Here, F1 and F2 are respectively the present and next state values of the objective function, T represents temperature, and K represents Boltzmann constant.
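A minimal Python sketch of this procedure, assuming a simple one-dimensional neighbourhood move and a geometric cooling schedule (neither of which is specified in this paper), illustrates the acceptance rule of Eq. (1):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=100.0, cooling=0.95,
                        iters_per_temp=50, t_min=1e-3, k=1.0):
    """Minimal SA sketch following Fig. 3: always accept improving moves,
    accept worse moves with probability P = exp((F1 - F2) / (k * T))."""
    x, fx = x0, f(x0)
    t = t0
    while t > t_min:                                 # cooling loop
        for _ in range(iters_per_temp):
            cand = x + random.uniform(-step, step)   # neighbour solution
            fc = f(cand)
            if fc < fx or random.random() < math.exp((fx - fc) / (k * t)):
                x, fx = cand, fc                     # accept the move
        t *= cooling                                 # reduce the temperature
    return x, fx

# Toy objective with a known minimum at x = 2
xbest, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=10.0)
print(xbest, fbest)
```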

4 PI Controller Design and Optimization Using GA and SA The state-space model of the considered pH neutralization process [15] of the sugar factory is presented in Eqs. (2) and (3):

dx/dt = Ax + Bu (2)

y = Cx + Du (3)


Fig. 4 Simulink model of PI control system



where A = [−0.5083 −0.0042; 1.0 0], B = [1; 0], C = [0 0.0042], and D = [0]. Now the PI control system is simulated for this system in MATLAB Simulink as shown in Fig. 4. The transfer function of this PI controller (with Kp = 3.98, Ki = 0.88) is given in Eq. (4):



PI = 3.98 + (0.88/s) (4)
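The closed-loop behaviour can be reproduced approximately outside Simulink as well. The sketch below integrates the plant of Eqs. (2)–(3) with a forward-Euler loop and the PI law of Eq. (4); it assumes the arrangement of A, B, C reconstructed above and is an illustration, not the authors' MATLAB model.

```python
import numpy as np

# State-space plant as reconstructed above (assumed arrangement of A)
A = np.array([[-0.5083, -0.0042], [1.0, 0.0]])
B = np.array([1.0, 0.0])
C = np.array([0.0, 0.0042])

def step_response(kp, ki, t_end=400.0, dt=0.01, setpoint=1.0):
    """Forward-Euler simulation of the unity-feedback PI loop."""
    x = np.zeros(2)
    integral = 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        y = C @ x
        e = setpoint - y                 # tracking error
        integral += e * dt
        u = kp * e + ki * integral       # PI control law, Eq. (4)
        x = x + dt * (A @ x + B * u)     # plant state update
        ys.append(y)
    return np.array(ys)

y = step_response(kp=3.98, ki=0.88)
print("final value:", y[-1], "peak:", y.max())
```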

The setpoint tracking response exhibited by this PI control system is depicted in Fig. 5. It is clearly visible that this response is not at all satisfactory, as it has a very large settling time of 264 s and a peak overshoot of 64.7%. This performance is now to be improved by the optimization techniques GA and SA. The objective function (F) that will be used for the optimization, as presented in Eq. (5), has been determined on the basis of the available data of the process:

F = 112.7 − 1.466·Kp − 13.85·Ki − 27.72·Ki² + 0.78·Kp·Ki (5)
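Because Eq. (5) is linear in Kp for any fixed Ki, minimizing it only makes sense over a bounded domain; the bounds in the following sketch are therefore assumptions made for illustration, and a coarse grid search stands in for the GA and SA runs reported below (it will not reproduce their exact gain values).

```python
import itertools
import numpy as np

def F(kp, ki):
    """Objective function of Eq. (5)."""
    return 112.7 - 1.466*kp - 13.85*ki - 27.72*ki**2 + 0.78*kp*ki

# Coarse grid search over an assumed bounded domain of (Kp, Ki)
kp_grid = np.linspace(0.0, 450.0, 451)
ki_grid = np.linspace(0.0, 1.0, 101)
best = min(itertools.product(kp_grid, ki_grid), key=lambda p: F(*p))
print("Kp, Ki =", best, "F =", F(*best))
```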

Both the GA and SA will now be executed on this objective function F for its minimization. The various GA specifications used are presented in Table 1. Some output plots of GA, such as fitness value, current best individual, score histogram and stopping criteria, are displayed in Fig. 6. The optimization result returned by the GA technique is Kp = 422, Ki = 0.1100. So, the transfer function of the optimized PI controller using the GA approach is presented in Eq. (6):

GA = 422 + (0.1100/s) (6)

Now the SA algorithm is executed on the same objective function F with initial population of 80 and re-anneal interval of 500. The snapshot of some executed iterations of SA, in MATLAB, is shown in Fig. 7. Some output plots of SA approach, such as the current function value, the best function value, the current temperature and stopping criteria, are displayed in Fig. 8. The optimization result returned by this


Fig. 5 Setpoint tracking response of initial PI control system

Table 1 GA specifications

Parameter          | Value/function type
Size of population | 50
Selection method   | Roulette wheel
Crossover fraction | 0.7
Mutation method    | Gaussian
Crossover method   | Two point
Generations        | 200

SA technique is Kp = 134, Ki = 0.1100. So, the transfer function of the optimized PI controller using the SA approach is presented in Eq. (7):

SA = 134 + (0.1100/s) (7)

The setpoint tracking response of the optimized PI controllers based on GA and SA is showcased in Fig. 9. It is very clear in this figure that both the peak overshoot and the settling time are drastically reduced. For the GA control system, the peak overshoot and settling time are 53.1% and 15.1 s, respectively; for the SA control system, they are 31.3% and 14.3 s, respectively. The Bode diagrams of the initial PI controller and the SA- and GA-based controllers are presented in Fig. 10, indicating that the optimization has also improved the bandwidth of the control systems. All three designed controllers can be quantitatively compared on the basis of Table 2.


Fig. 6 Output plots of GA

In this table, the settling time, peak overshoot, phase margin and peak gain are denoted by Tset, Op, Mp and Gp, respectively.

5 Novelty After reviewing the available literature, it is inferred that a vast body of research on the pH control system of the sugar mill has already been reported by a number of researchers. It is observed that, in that work, various conventional approaches to control system design have been employed. The work done in this paper addresses the optimization of the PI controllers using the soft computing techniques GA and SA.


Fig. 7 Snapshot of few SA iterations implemented on MATLAB

Fig. 8 Output plots of SA


Fig. 9 Setpoint tracking response of GA- and SA-based control systems

Fig. 10 Bode diagrams of PI (initial), GA- and SA-based control systems


Table 2 Comparison of performance of designed controllers

PI controller | Tset (s) | Op (%) | Mp (degree) | Gp (dB)
Initial       | 264      | 64.7   | 38.4        | 10.7
GA optimized  | 15.1     | 53.7   | 31.5        | 8.48
SA optimized  | 14.3     | 31.3   | 57.6        | 3.85

6 Conclusion In this work, a state-space representation of the pH neutralization scheme of a sugar mill is taken up for design and optimization of the control system using MATLAB. It is observed that the initially designed PI controller does not deliver satisfactory performance, as both the peak overshoot and settling time of the setpoint tracking response are very large. For improving its performance, the GA and SA optimization techniques have been used. A drastic improvement in the performance of the control system, with significantly reduced settling time and peak overshoot, has been achieved by these optimization techniques.

References
1. Sunori, S.K., Shree, S., Juneja, P.K.: Control of sugarcane crushing mill process: a comparative analysis. In: International Conference on Soft Computing Techniques and Implementations (ICSCTI), pp. 1–5 (2015)
2. Rukkumani, V., Khavya, S., Madhumithra, S., Nandhini Devi, B.: Chemical process control in sugar manufacturing unit. IJAET, 2732–2738 (2014)
3. Sunori, S.K., Juneja, P.K., Chaturvedi, M., Aswal, P., Singh, S.K., Shree, S.: GA based optimization of quality of sugar in sugar industry. Cienc. Tec. Vitivinicola 31(4), 243–248 (2016). ISSN: 0254-0223
4. Sivanandam, S.N., Deepa, S.N.: Principles of Soft Computing, 2nd edn. Wiley (2011)
5. Yang, X.-S.: Chapter 5—Genetic algorithms. Nature-Inspired Optimization Algorithms, pp. 77–87. Elsevier (2014)
6. Mariajayaprakash, A., Senthilvelan, T., Gnanadass, R.: Optimization of process parameters through fuzzy logic and genetic algorithm—a case study in a process industry. Appl. Soft Comput., 94–103 (2015)
7. NithyaRani, N., GirirajKumar, S.M., Anantharaman, N.: Modeling and control of temperature process using genetic algorithm. Int. J. Adv. Res. Electr. Electron. Instr. Eng. 2(11), 5355–5364 (2013)
8. Sunori, S.K., Juneja, P.K., Chaturvedi, M., Bisht, S.: GA based optimization of control system performance for juice clarifier of sugar mill. Orient. J. Chem. 32(4) (2016)
9. Barzinpour, F., Saffarian, M., Makui, A., Teimoury, E.: Metaheuristic algorithm for solving biobjective possibility planning model of location-allocation in disaster relief logistics. J. Appl. Math., 1–17 (2014)
10. Roshan, S., Jooibari, M., Teimouri, R., Asgharzadeh-Ahmadi, G., Falahati-Naghibi, M., Sohrabpoor, H.: Optimization of friction stir welding process of AA7075 aluminum alloy to achieve desirable mechanical properties using ANFIS models and simulated annealing algorithm. Int. J. Adv. Manuf. Technol. (2013)
11. Sunori, S.K., Bhakuni, A.S., Maurya, S., Jethi, G.S., Juneja, P.K.: Improving the performance of control system for headbox consistency of paper mill using simulated annealing. I-SMAC 2020, Palladam, India, pp. 1111–1116 (2020)
12. Najafi, M.: Simulated annealing optimization method on decentralized fuzzy controller of large scale power systems. Int. J. Comput. Electr. Eng. 4(4), 480–484 (2012)
13. Tenne, Y.: A simulated annealing based optimization algorithm. Chapter 3, Computational Optimization in Engineering—Paradigms and Applications, pp. 47–67 (2017)
14. Nouraniy, Y., Andresenz, B.: A comparison of simulated annealing cooling strategies. J. Phys. A: Math. Gen. 31, 8373–8385 (1998)
15. Karthik, C., Valarmathi, K., Prasanna, R.: Modeling and control of chemical process in sugar industry. In: ICVCI, pp. 24–28 (2011)

A Comparative Analysis of Various Data Mining Techniques to Predict Heart Disease Keerti Shrivastava and Varsha Jotwani

Abstract Identifying cardiovascular diseases (CVD) in people at risk is a keystone of preventive cardiology. The risk forecasting tools recently suggested by medical guidelines typically depend on a restricted number of predictors and show sub-optimal performance across patient groups. Data-driven approaches rely on machine learning to enhance prediction performance by discovering new techniques. This research helps recognize the current procedures involved in predicting heart disease by classification in data mining. A review of the related DM procedures involved in heart disease prediction yields an acceptable prediction model. The main motivation of the paper is to develop an efficient, intelligent medical decision system based on data mining techniques. Keywords Cardiovascular disease · Cleveland dataset · Risk prediction · Machine learning approaches

1 Introduction Identifying symptoms at an early stage is the best way to reduce human death rates caused by multifaceted diseases. Very effective clinical treatment with the best outcome can be achieved through early identification of the symptoms. Latest advancements in biotechnology open up exciting possibilities for a greater biological understanding of several complicated human diseases at a molecular level, potentially leading to early diagnosis and treatment of those diseases. In addition to available genomic information, life science researchers are looking at proteomics to gain insight into cell functions by studying how proteins are expressed, processed, recycled, and located in cells. Proteomics refers to the study of the whole group of proteins expressed in a cell. Structural, functional, as well as interaction studies [1] are the main research areas of proteomics. Structural proteomics utilizes nuclear magnetic resonance, X-ray crystallography, or even both K. Shrivastava (B) · V. Jotwani Department of Computer Science and Information Technology, Rabindranath Tagore University, Bhopal, M.P, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_25


to analyze the final 3D protein structure. Mass spectrometry (MS) is used for studying modulation and protein expression timing and location in functional proteomics. Interaction experiments are designed to learn how protein pairs interact with other cell components to form more complex models of molecular machines. In particular, large-scale protein characterization or comparative studies can be used for various purposes, including disease classification and prediction, treatment and production of new drugs, identification of virulence factors, genetic mapping, species determinants, and profiles of protein or proteomic expression [2–5]. Proteomics provides clear benefits relative to transcriptional profiling in functional genomics because it gives a more straightforward view of cellular functions, as most gene functions are carried out by proteins [6]. Chronic diseases, which account for 63% of all deaths and are the leading cause of death in the world [7], are diseases of long duration and usually slow progression, such as cardiovascular disorders, diabetes, and cancer. To control the prevalence and growth of chronic diseases, their patterns need to be better understood, especially looking at potentially preventable events. Major Adverse Cardiac Events (MACE) comprise all-cause mortality associated with cardiovascular disorders. Acute Coronary Syndrome (ACS), a type of MACE, is a disease caused by infarction of the coronary artery or ischemia of downstream cardiac tissue that occurs following a rapid drop in blood flow. ACS patients frequently present with abrupt chest pain, breathlessness, dizziness, nausea, or sweating [8]. In the Cleveland Heart Dataset, values are missing for six instances, while the remaining 297 instances are complete; most researchers thus use the 297 instances for heart disease prediction. The size of the dataset is a significant factor in the accuracy of a machine learning algorithm: a large-scale, noise-free dataset with no missing values gives high accuracy, but a small dataset poses a problem, since more patterns in the dataset can be studied for greater accuracy. Cross-validation should therefore be included [9]. Heart disease is the sort of condition that may lead to death, and too many people are dying each year as a result of it. Heart disease may arise when the heart muscle weakens [10]. Besides, heart failure may be defined as the heart's failure to pump blood. Heart disorders are also referred to as CAD (coronary artery disease); CAD may occur when the blood supply to the arteries is inadequate [11]. Heart disease can be diagnosed from chest pain, high blood pressure, cardiac arrest, hypertension, etc. There are several different forms of heart disease with varying symptoms, such as:

1. Blood vessel heart disease: chest pain, shortness of breath, throat pain.
2. Abnormal heartbeat conditions: discomfort, pain in the chest, slow heartbeat, etc.
Chest pain, shortness of breath, and discomfort are the most frequent symptoms.

Pain in the chest, breathlessness, and weakness are the world's most common symptoms. The causes of heart disease are elevated blood pressure, diabetes, smoking, drugs, and alcohol [12]. Often an infection is also present in cardiovascular


conditions affecting the internal membrane, with symptoms such as fever, fatigue, dry cough, and skin eruption. Bacteria, viruses, and parasites are sources of heart infection. Types of heart disorders include cardiac arrest, high blood pressure, heart cancer, heart failure, congenital heart disease, slow heartbeat, stroke, angina pectoris, and kidney disease. Nowadays, multiple automated techniques such as ML (machine learning), DM (data mining), and DL (deep learning) are used for the prediction of heart disease; therefore, in this work, we briefly introduce machine learning techniques [13]. Here, we use ML repositories to train datasets. There are a variety of risk factors on which the prediction of heart disease is based: sex, age, cholesterol level, blood pressure, diabetes, family history of cardiovascular disease, smoking, overweight, alcohol, chest pain, and cardiac rate [14]. Hospitals produce enormous volumes of patient data, and processing this broad data helps develop predictive models for diseases. Data mining techniques can discover the hidden patterns in broad hospital data and assist us in developing an effective framework for medical diagnosis. One form of heart disease or another has been the primary cause of many patients' deaths [1]. Regardless of region, country, and age, the leading cause of mortality is heart disease. Cardiovascular disorders require regular monitoring and treatment, yet daily medical checkups are not readily accessible or viable for rural people. This disorder is a life-threatening situation for those who suffer from severe heart disease. A 2010 study reveals that $1 of every $6 spent on health care is used for cardiovascular diseases. Coronary Heart Disease (CHD) is the primary cause of death for 370,000 people every year, and the cost of their treatment is estimated at roughly US$444 billion. The chances of recovery are higher when the disease is forecast before an emergency happens; in comparison, the results suggest that recovery from severe heart attacks in the hospital is very limited. This paper explores the various forms of predictive modeling of cardiac disorders based on ML, DM, as well as AI techniques.

2 Literature Review Several analyses focusing on heart disease estimation have been conducted previously. Investigators used various data mining approaches for estimating heart disease and obtained various prospects for several prediction models. In one study, a new technique devises multi-parameter features from linear and nonlinear features of heart rate variability (HRV). The authors extract HRV characteristics for three lying positions—prone, left lateral, and right lateral—and conducted several experiments on HRV feature sets to evaluate numerous classifiers. In another, a prototype known as the intelligent heart disease prediction system (IHDPS) was built using DM methods such as neural networks and decision trees. The outcomes demonstrated the particular strength of each methodology in achieving the defined mining objectives [16].


This article analyzes the issue of classifying constrained association rules for the estimation of heart disease. The considered database contained the records of heart disease patients with risk-factor attributes, identifying heart perfusion measurements and artery narrowing [17]. Such work can be applied to almost every medical procedure or problem; it proved worthwhile for quickly building a decision tree from surgical data. The key disadvantage of the analysis was data acquisition: the need to gather suitable information makes an appropriate collection mode compulsory [18]. The work in [19] introduces a prototype that depends upon the Coactive Neuro-Fuzzy Inference System (CANFIS) to forecast cardiac disorders. CANFIS diagnoses the occurrence of illness by integrating numerous approaches involving the adaptive capabilities of neural networks, fuzzy logic, and quantitative methods, additionally combined with GA [19]. In another proposed approach, a method called predominant correlation presents a fast filter that recognizes relevant features and the redundancy between them without pairwise correlation analysis [20]. A further methodology compares classification approaches by assessing model stability and validity in variable selection. This analysis gives a systematic strategy for comparing the performance of six classification methods via Monte Carlo simulations and shows that the variable selection procedure is essential when comparing practices, to ensure minimum bias, improved stability, and optimized performance [16]. For heart attack patients, classification by association rules was introduced. In the first phase, the database of heart patients' medical records is preprocessed for efficient mining, which enables the extraction of significant patterns from the disease database. The association rules are used after preprocessing missing values, and equal-interval binning with approximate values is applied based on clinical expert advice, on the Pima Indian dataset [21]. Applications of DM approaches in heart attack prediction have also been introduced, covering the probable usage of DM classification methods such as rule-based, decision tree, naive Bayes, and ANN on a huge amount of healthcare data. The Tanagra DM tool was utilized for exploratory data analysis, ML, and statistical learning approaches. The training database contains 3000 instances with various attributes; the instances represent the outcomes of various kinds of tests used to forecast the accuracy of disease prediction [22]. Some disadvantages were found in data mining models like the support vector machine and decision tree:

1. If the number of features is considerably greater than the number of data points, the selection of kernel functions and regularization terms to prevent overfitting is important.
2. SVMs do not calculate probability estimates explicitly; these are computed with costly five-fold cross-validation.
3. SVMs work well only on small samples due to their high training time.
Further, some disadvantages were found in decision trees, as mentioned below:


1. A minor change in the data will lead to a major structural change in the DT, leading to instability.
2. Calculations can be much more difficult for DT than for other algorithms.
3. Sometimes a DT takes longer to train a model.
4. DT training is relatively costly since it requires more time and complexity.
5. The DT algorithm is inadequate for regression and the prediction of continuous values.

3 Data Mining Techniques The healthcare environment generally includes rich data about patients and their treatment, which is stored in health management systems. Such data is valuable, especially if the existing information is cultivated into useful practices. Data mining techniques can help in extracting valuable knowledge from hidden relationships and trends among data. The DM approaches are divided into four types: classification, clustering, regression, and association rule mining. Classification methods are the most widely used algorithms in the healthcare sector, as they help to predict the status of the patient by classifying patient records and finding the class that matches a new patient record [3]. Classification is a supervised learning technique that requires the data to be initially classified into initial classes or labels. Then these data are entered into a classification algorithm to be learned, as shown in Fig. 1. In particular, the relationship between attributes needs to be discovered by the algorithm to predict the outcome. In this phase, classification procedures build the classifier from the training set comprised of dataset tuples and the relevant class tags; every tuple in the training set is assigned to a class. When a new case arrives, the developed classification algorithm is used to classify it into one of the predefined classes. The term which specifies how good the algorithm is, is called

Fig. 1 Building the classifier phase


prediction accuracy. Here some classification algorithms for supervised learning are explored. Naive Bayes (NB) Algorithm The NB classifier is a basic probabilistic classifier that depends upon the implementation of Bayes' theorem with strong (naive) independence assumptions. A more precise term for the underlying probability model would be "independent feature model":

P(A|B) = P(B|A) · P(A) / P(B) (1)
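A hedged, minimal realization of such a classifier, using scikit-learn's GaussianNB on a bundled stand-in dataset rather than the Cleveland data used later in this paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in dataset bundled with scikit-learn; the paper's experiments
# use the Cleveland heart dataset instead.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

clf = GaussianNB().fit(X_tr, y_tr)   # fits per-class Gaussian likelihoods
print("accuracy:", clf.score(X_te, y_te))
```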

The probability that A occurs, given that B has occurred, can be found using the Bayes theorem. Here, B is the evidence and A is the hypothesis. The predictors/features are believed to be independent: the presence of one particular feature does not influence the others. Thus, this is known as naive. Support Vector Machine (SVM) SVM typically deals with pattern classification, which means that this algorithm is often utilized to distinguish various pattern forms. There are numerous versions, i.e., linear as well as nonlinear. Linear patterns are patterns that can be simply differentiated or divided in low dimensions, while nonlinear patterns cannot be distinguished or isolated simply. The core concept behind SVM is the design of an optimum hyperplane for the classification of linearly separable patterns. Decision Tree (DT) A decision tree is a tree-like structure in which each internal node contains a split attribute test, which is a measure of the attribute. Decision trees are typically composed with the training dataset, while checking or validating the consistency of a decision tree is carried out with the test dataset. The decision tree is a flowchart-like framework with the following characteristics:

1. Each inner node, often referred to as a non-leaf node, denotes an attribute test;
2. Each branch is the outcome of the test;
3. Every leaf node or terminal node holds a class label;
4. The root node is the topmost node of the tree.
The decision tree typically consists of nodes forming a rooted tree, i.e., a directed tree with a node called the root that has no incoming edges [23].

C4.5 Algorithm C4.5, developed by Ross Quinlan, is an algorithm utilized to generate a DT. It is an extension of the ID3 algorithm; for this reason, C4.5 is sometimes referred to as a "statistical classifier". A drawback of ID3 is that it is too vulnerable to features with a wide range of values. The C4.5 decision trees can be used for classification, and the algorithm fundamentally employs a single-pass pruning process to minimize overfitting:

1. It can deal with both discrete and continuous data;

2. C4.5 copes very well with the issue of incomplete data.

Classification and Regression Trees (CART) CART is like C4.5, but differs in that it supports numeric (regression) target variables and does not compute rule sets. CART creates binary trees that give each node the greatest information gain with respect to the target and thresholds. It is a binary decision tree algorithm that recursively partitions the data into two subsets, allowing for the consideration of misclassification costs, prior distributions, and cost-complexity pruning within each subset. K-Nearest Neighbor (KNN) Algorithm The KNN algorithm assumes that similar things exist near each other; similar objects are close to each other, in other words. KNN captures the concept of similarity in a certain arithmetic measure (sometimes called distance, closeness, or proximity) that we might have learned in our childhood: the distance between points on a graph. Genetic Algorithms (GA) The idea of genetic algorithms derives from natural evolution. In genetic algorithms, a first population is generated, comprising randomly generated rules. Each rule can be seen as a string of bits. For example, suppose that the samples are identified by two Boolean attributes such as A1 and A2 in a given training set, and there are two classes such as C1 and C2 in this training set. Feedforward Neural Network (FFNN) A feedforward neural network is a biologically motivated classification algorithm. It comprises a (possibly large) number of basic, neuron-like processing units organized in layers. Each unit in a layer is connected to all the units of the previous layer, but not all these connections are identical: each connection can vary in intensity or weight, and the weights of these connections encode the knowledge of the network. Units are often also called nodes in a neural network. Data enters at the inputs and passes through the network, layer by layer, until it reaches the outputs. When it serves as a classifier, there is no feedback between layers during normal operation; this is why they are named feedforward neural networks.
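The tree- and distance-based classifiers described above can be tried out side by side with a few lines of scikit-learn; the dataset, tree depth, and neighbour count below are illustrative choices only (scikit-learn's tree is an optimized CART-style implementation rather than C4.5):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# CART-style binary decision tree with limited depth
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
# Distance-based classification with the 5 nearest neighbours
knn = KNeighborsClassifier(n_neighbors=5)

for name, model in [("DT", tree), ("KNN", knn)]:
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
    print(name, scores.mean())
```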

4 Proposed Methodology Problem Statement The rare ARM (Association Rule Mining) community does not address the issue of rare ARM from dynamic databases. With improvements in real-life databases being taken into consideration, incremental rare ARM has become a critical issue. This section addresses various fundamental principles concerning the mining of rare associations along with the problem definition taken into consideration in this review. Researchers in the area of rare ARMs attempt to find objects which are insignificant in the collection but closely interrelated. The principle of generating

290

K. Shrivastava and V. Jotwani

rare association rules aims at identifying regular or rare patterns and creating the associated rare connections between such patterns. Rare ARM attempts to recognize strong relationships between items through a certain measure of importance, so that the rules generated can be of interest to the consumer. The rise of the mortality rate due to life-threatening diseases in today's world has become a problem; to mitigate the severity of their side effects, early identification and diagnosis of diseases are important. Therefore, various data mining techniques are applied computationally and compared with each other to identify the most efficient and accurate algorithm. Though data mining is built to predict the model by classifying the datasets based on certain attributes, some of those classification algorithms are discussed here and the results are visualized using the Python 3.6 simulation tool. Research Methodology Various data mining approaches have been applied and compared in this field for predicting the performance of machine learning techniques, along with rule mining generation techniques for mining frequent patterns and computing the execution time. Random forest and SSP tree can predict with the best accuracy and less complexity, which is the novelty behind using random forest. Data Preprocessing The datasets used have been obtained from the Irvine data archive of the University of California. The selected data was verified using the frequency distribution to check for noise, inconsistency, and missing values, while box plots were used to detect outliers. Noise and inconsistencies found in the dataset were manually corrected, and missing values were substituted with the most similar values by a nearest-neighbor algorithm. In medical records, missing values are very common. These values should be treated before the data is used, because they can contribute to loss in the assessment or the prediction of incorrect disorders. Two typical methods for treating the datasets can be employed: deletion and imputation. Deletion is the most widely proposed option for treating missing values, as incomplete cases are deleted; it is considered acceptable, notably when it comes to medical datasets, but valuable details may be discarded. Alternatively, the imputation process is used to substitute missing values with estimated values, as sketched below. Feature Selection Feature selection is the mechanism whereby the features that best predict the variable or output of interest are chosen automatically or manually. Irrelevant features in the data will reduce model accuracy and make the model learn from irrelevant attributes. Principal Component Analysis (PCA) This technique replaces the input variables with fewer "most important" variables and still preserves the most useful parts of all the variables. PCA is a technique for feature extraction; additionally, each of the "new" PCA variables is independent of the others.
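One readily available realization of the nearest-neighbour substitution described above is scikit-learn's KNNImputer; the toy records and neighbour count below are assumptions made for illustration.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy records with missing entries (np.nan), e.g. unrecorded test results
X = np.array([[63.0, 145.0, np.nan],
              [67.0, np.nan, 286.0],
              [41.0, 130.0, 204.0],
              [56.0, 120.0, 236.0]])

# Each missing value is replaced by the mean of that feature over the
# 2 most similar complete records (nearest neighbours).
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```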


Chi-Square If two features are independent, the observed count is close to the expected count, and hence the Chi-Square value is small. A high Chi-Square value thus suggests that the independence hypothesis is false: the higher the Chi-Square value, the more the feature depends on the response, and the more suitable it is for model training. Random Forest (RF) Classifier RF is a supervised learning algorithm. It may be utilized for both regression and classification, and it is perhaps the most flexible and simplest algorithm to use. The RF technique is capable of focusing both on observations and on the variables of the training data for establishing individual decision trees, and it takes the majority vote for classification and the overall average for regression problems. It also utilizes a bagging technique in which the columns considered at the root of each decision tree are selected in a random way, so that the trees do not all rely on the same significant variables; in this way, an RF avoids mutually dependent trees that would penalize accuracy. The algorithm proceeds as follows:

1. Randomly select "k" features from the total "m" features, where k << m.

2. Among the "k" features, compute node "d" using the best split point.
3. Split the node into daughter nodes using the best split.
4. Repeat steps 1–3 until "l" nodes have been reached.
5. Generate the forest by repeating steps 1–4 "n" times to generate "n" trees.

The RF algorithm thus begins by arbitrarily selecting "k" features from the total "m" features; in the next step, the randomly chosen "k" features are used with the best-split approach to find the root node. A sketch of the overall pipeline follows below. Figure 2 demonstrates an implementation of the proposed methodology as a figurative representation that visualizes all the steps of this research.
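A sketch of this feature-selection-plus-classifier pipeline, assuming scikit-learn and a bundled stand-in dataset in place of the Cleveland data; the feature count k=10 and the 5 principal components are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)  # stand-in for Cleveland data

# CHI: keep the 10 features most dependent on the class label;
# CHI-PCA: additionally project them onto 5 principal components.
pipelines = {
    "raw":     make_pipeline(RandomForestClassifier(random_state=0)),
    "chi":     make_pipeline(SelectKBest(chi2, k=10),
                             RandomForestClassifier(random_state=0)),
    "chi-pca": make_pipeline(SelectKBest(chi2, k=10), PCA(n_components=5),
                             RandomForestClassifier(random_state=0)),
}
for name, pipe in pipelines.items():
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```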

5 Results and Discussions In this article, ML techniques are used (and ultimately compared) for classifying whether or not a person is suffering from a cardiac disorder, using one of the most utilized datasets—the CHDD (Cleveland Heart Disease Dataset). The dataset used for the study consists of 14 variables measured on 303 persons with heart disease in the Cleveland report contained in the UCI ML repository. It may be downloaded from https://archive.ics.uci.edu/ml/datasets/heart+disease. Figure 4 shows the attributes of the Cleveland dataset. The CHDD includes 76 attributes, of which only 14 are taken into consideration in cardiopathy diagnostic testing studies. The presence or absence of heart disease in the 303 cases is indicated by 5 class labels (0–4): absence is reported as class 0 and heart disease as classes 1–4.


Fig. 2 Flow chart of the research procedure: start → insert dataset → preprocess the dataset → apply feature selection algorithms (CHI and CHI-PCA) → divide the dataset into training and testing sets → apply classification algorithms (DT, GBT, LOG, MPC, NB, RF) → deploy models → classify as heart patient or normal → end

Fig. 3 Comparison of ML classifiers' accuracy (DT, GBT, LOG, MPC, NB, RF) on the raw data, CHI, and CHI-PCA feature sets


Figures 3 and 5 visualize the results for the accuracy and F1-measure, respectively, of various machine learning techniques—decision tree, gradient boosting technique, logistic regression, multilayer perceptron classifier (MPC), naive Bayes (NB), and random forest (RF)—on three different variants of the dataset: the raw (original) Cleveland dataset, the dataset after the Chi-square feature selection algorithm, and the dataset after Chi-square with PCA. These figures highlight the results of the classification algorithms on the Cleveland dataset, measured in terms of F1 score and accuracy.

Fig. 4 Cleveland dataset visualization

Fig. 5 Comparison of ML classifiers' F1 score (DT, GBT, LOG, MPC, NB, RF) on the raw data, CHI, and CHI-PCA feature sets


Fig. 6 Execution time of frequent pattern generation (effect of database updates) for FP-Growth, FUP2, FUFP-Tree, and SSP

In many real-world applications, such as market basket analysis, advertisement, the medical field, and patient routine tracking, frequent pattern mining techniques have become an apparent requirement. For comparison of these algorithms, we use the Cleveland database for all algorithms, which is based on the daily routines of cardiac patients and their health characteristics. Research on frequent pattern mining is known for its applicability to the field of data mining, constraint-based frequent pattern mining, correlations, and many additional data mining tasks. Figure 6 elucidates the effect of delete-transaction updates on the Cleveland dataset throughout the generation of frequent and rare patterns using various algorithms. All the above discussions along with the figures concisely mention all the significant facts, including the advantages and disadvantages of the data mining algorithms. The results and discussion show that random forest gives the best accuracy with all three feature selection variants, as can be seen in Figs. 3 and 5, whereas Fig. 6 shows the effectiveness of various pattern mining techniques such as the frequent pattern mining tree (FP tree), fast updated (FUP), fast updated 2 (FUP2), and single scan pattern mining (SSP), which have been discussed by Borah et al. [24].

6 Conclusion In this research, various DM techniques were studied to improve the early detection of heart disease. It is concluded that a dataset with appropriate samples and accurate data should be used to create a predictive model for heart disease. The dataset must be preprocessed in advance, since this is the most crucial aspect of preparing the dataset for the learning algorithm and achieving successful results. In the design of a prediction model, an appropriate algorithm should also be used. Moreover, an extensive analysis was conducted using the Cleveland dataset, which also shows the effect of deleted transactions on the generated patterns. An interactive forum


between doctors and patients will be developed in the future to communicate risks and factors.

References
1. Palmer, A.J.: Computer modeling of diabetes and its complications: a report on the fifth Mount Hood challenge meeting. Value Health 16, 670–685 (2013)
2. Thomas, M.R., Lip, G.Y.: Novel risk markers and risk assessments for cardiovascular disease. Circ. Res. 120(1), 133–149 (2017)
3. Ridker, P.M., Danielson, E., Fonseca, F., Genest, J., Gotto, A.M. Jr., Kastelein, J., et al.: Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. New England J. Med. 359(21), 2195 (2008). pmid:18997196
4. Kremers, H.M., Crowson, C.S., Therneau, T.M., Roger, V.L., Gabriel, S.E.: High ten-year risk of cardiovascular disease in newly diagnosed rheumatoid arthritis patients: a population-based cohort study. Arthritis Rheumatol. 58(8), 2268–2274 (2008)
5. D'Agostino, R.B., Vasan, R.S., Pencina, M.J., Wolf, P.A., Cobain, M., Massaro, J.M., et al.: General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation 117(6), 743–753 (2008). pmid:18212285
6. Conroy, R., Pyörälä, K., Fitzgerald, A.E., Sans, S., Menotti, A., De Backer, G., et al.: Estimation of ten-year risk of fatal cardiovascular disease in Europe: the SCORE project. Eur. Heart J. 24(11), 987–1003 (2003). pmid:12788299
7. Sjöström, L., Lindroos, A.K., Peltonen, M., Torgerson, J., Bouchard, C., Carlsson, B., et al.: Lifestyle, diabetes, and cardiovascular risk factors ten years after bariatric surgery. New England J. Med. 351(26), 2683–2693 (2004). pmid:15616203
8. Siontis, G.C., Tzoulaki, I., Siontis, K.C., Ioannidis, J.P.: Comparisons of established risk prediction models for cardiovascular disease: a systematic review. BMJ 344, e3318 (2012). pmid:22628003
9. Coleman, R.L., Stevens, R.J., Retnakaran, R., Holman, R.R.: Framingham, SCORE, and DECODE risk equations do not provide reliable cardiovascular risk estimates in type 2 diabetes. 30(5), 1292–1293 (2007). pmid:17290036
10. McEwan, P., Williams, J., Griffiths, J., Bagust, A., Peters, J., Hopkinson, P., et al.: Evaluating the performance of the Framingham risk equations in a population with diabetes. 21(4), 318–323 (2004)
11. Martín-Timón, I., Sevillano-Collantes, C., Segura-Galindo, A., del Cañizo-Gómez, F.J.: Type 2 diabetes and cardiovascular disease—have all risk factors the same strength?
12. Buse, J.B., Ginsberg, H.N., Bakris, G.L., Clark, N.G., Costa, F., Eckel, R., et al.: Primary prevention of cardiovascular diseases in people with diabetes mellitus. 115(1), 114–126 (2007)
13. Ambale-Venkatesh, B., Wu, C.O., Liu, K., Hundley, W., McClelland, R.L., Gomes, A.S., et al.: Cardiovascular event prediction by machine learning, p. CIRCRESAHA–117 (2017)
14. Ahmad, T., Lund, L.H., Rao, P., Ghosh, R., Warier, P., Vaccaro, B., et al.: Machine learning methods improve prognostication, identify clinically distinct phenotypes, and detect heterogeneity in response to therapy in a large cohort of heart failure patients. 7(8), e008081 (2018)
15. Nathan, D.M., Cleary, P.A., Backlund, J.-Y.C., Genuth, S.M., Lachin, J.M., Orchard, T.J., et al.: Intensive diabetes treatment and cardiovascular disease in patients with type 1 diabetes. 353, 2643–2653 (2005)
16. Shreve, J., Schneider, H., Soysal, O.: A methodology for comparing classification methods by assessing model stability and validity in variable selection. 52, 247–257 (2011)
17. Ordonez, C.: Improving heart disease prediction using constrained association rules (2004)
18. Lemke, F., Mueller, J.-A.: Medical data analysis using self-organizing data mining technologies. 43(10), 1399–1408 (2003)
19. Parthiban, L., Subramanian, R.: Intelligent heart disease prediction system using CANFIS and genetic algorithm. 3(3) (2008)
20. Li, W., Han, J., Pei, J.: CMAR: an accurate and efficient classification based on multiple association rules (2001)
21. Deepika, N., Chandrashekar, K.: Association rule for classification of heart attack patients. 11(2), 253–257 (2011)
22. Srinivas, K., Rani, K.B., Govardhan, A.: Application of data mining techniques in healthcare and prediction of heart attacks. 2(2), 250–255 (2011)
23. Neelamegam, S., Ramaraj, E.: Classification algorithm in data mining: an overview, vol. 3, issue 5, pp. 1–5 (2013)
24. Borah, A., Nath, B.: Identifying risk factors for adverse diseases using dynamic rare association rule mining. Expert Syst. Appl. (2018)

Performance Comparison of Various Controllers in Different SDN Topologies

B. Keerthana, Mamatha Balachandra, Harishchandra Hebbar, and Balachandra Muniyal

Abstract The continuous surge in the number of devices connected to the Internet, and the added lineup of Internet of things (IoT) devices in these networks, has increased the complexity of connected networks. One way to overcome this issue is the introduction of the software-defined network (SDN), in which networks are managed through controllers. Adding a controller improves the management of flow control and also adds intelligence to a network. This study evaluates the performance of three popular controllers: Ovs, PoX, and Ryu. The work is carried out in an SDN environment with varying topologies and packet transmission. Network performance is considered an important factor in communication. The main aim of the study is to check for any change in the round-trip time with respect to the controller and to find whether there is any relationship between the number of nodes in the network and the round-trip time. The work is designed around the use of mininet with various network topologies. A further aim of the study is to interpret the change in the round-trip time with respect to the varying topologies. The correlation between network performance and controller type is also analyzed in this study.

Keywords IoT · Ovs · PoX · Ryu · SDN · Network performance · Round-trip time

B. Keerthana (B) · H. Hebbar School of Information Sciences, Manipal, India e-mail: [email protected] H. Hebbar e-mail: [email protected] M. Balachandra · B. Muniyal Manipal Institute of Technology, Manipal, India e-mail: [email protected] B. Muniyal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_26


1 Introduction

In this technological era, the Internet of things (IoT) plays a significant role in our day-to-day activities. Its growth has increased drastically due to improvements in networking, communication, and processing, and it helps consumers improve their quality of life. International Data Corporation (IDC), the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunication, and consumer technology markets, estimates that there will be around 41.6 billion connected devices generating 79.4 zettabytes (ZB) of data by 2025 [1]. Many huge networks, such as those in enterprises, vital networks demanding improved security such as access networks, and the ever-increasing number of wireless networks are required to perform at their best. Moving to SDN in most of these networks demands increased performance, and one of the major factors requiring attention is the controller. Consequently, the task of network administrators managing networks through traditional setup and operation has become burdensome. To meet the demands of increasing numbers of network devices, one of the major requirements is flexibility and programmability in device deployment, and SDN was introduced as a measure to meet this requirement.

Traditional IT networks comprise switches and routers connected to other networking devices. In this setup, another factor hindering performance and connectivity is that the devices are vendor-specific. In this modern era of networking, IoT devices with low latency and high quality-of-service requirements have led to the study and adoption of SDN at datacenters. Handling of packets in a traditional network involves the logic of switches, routers, and other physical devices. The control logic of all devices makes up the control plane, while the hardware in these networks acts as the forwarding plane. This leaves the control plane distributed throughout the network, which makes it difficult to control. Traditional networks involve physical devices, and any change needed has to be made manually within the connected devices. With a global view of the network, it is easier to implement such changes, and based on the overall design, they are much easier to implement in SDN than in traditional networks.

In SDN, the architecture is designed to decouple the routing and forwarding functions with the use of a controller. Controllers manage the network from a single point, and any required changes are easy to implement. The capabilities of SDN exceed the purpose it was originally built for, and this has shaped the way networking is looked at today. The ease of control and management afforded by the flexibility and programmability of SDN makes designing, building, and operating networks a simpler task [2]. Having one-point control through controllers reduces the cost involved and the time invested in maintaining a network. The use of SDN also helps improve authenticity, integrity, consistency, availability of the controller for centralized control over networks, and the appropriate utilization of network resources [3]. SDN has various applications in backbone networks, datacenters [4], enterprise networks, access networks, wireless networks [5], and many other areas [6].


A controller is used within an SDN in situations where the performance of a network is of utmost importance. If there is no suitable entry in the flow table for an incoming packet, the packet is forwarded to the controller, which decides how to process it. The controller can create a flow entry for this and further packets in the flow, modify entries based on requirements for a switch to decide, or discard the packet. The controller, residing within the control plane, is capable of dynamically implementing policies and rules based on traffic requirements. Accordingly, the rules are set in the forwarding plane, and the connected devices identify, accept/reject, or forward network packets.

There are various choices of controllers available for SDN, such as PoX, NoX, Onix, Beacon, Ryu, ONOS, OpenDaylight, libfluid, and Floodlight. Each of them has its own features, shaped by the programming language used and the functionality required. Controllers need to be selected based on the applications they are used for, and this becomes more important with growth in size and scalability, changes to the entire topology, and many other factors [7, 8].

In software testing and in creating network environments for study, the use of emulators and simulators is very common these days. Simulators and emulators permit the user to run software for testing within a software-defined environment. This reduces the time taken and the cost incurred in constructing the environment, particularly when the environment requires networks. Hence, in SDN, emulators are considered a good option for testing and troubleshooting. A simulator runs the software requirements in a testing setup, whereas an emulator mimics both the software and the hardware requirements; an emulator is therefore closer to the functioning performance of a real environment. There are also open-source network emulators, simulators, and actual test beds with different performance metrics. Some of the available SDN emulators are mininet, DieCast, ModelNet, fs-sdn, NS-3, and EstiNet; OMNeT++ is a simulator for SDN. Although NS-2 is the most used network simulator, it has the limitation of not providing an actual networking environment. For SDN, mininet, which is based on a command line interface, acts as an apt testing platform because its switches support the OpenFlow controller suitable for SDN and for setting up customized topologies.

Aim of the Study

Round-trip time is one of the important factors in network performance. To detect and prevent network attacks, it is important to check the network performance; a high RTT may indicate a possible attack in the network.

Research Questions

RQ1: Is there any change in the round-trip time with respect to linear and single topologies?
RQ2: Is there any correlation between performance and the Ovs, PoX, and Ryu controllers?


Methodology

Approach: In this study, quantitative values are used to measure network performance. Data were collected from networks created using the mininet emulator.

Design: Networks were created with linear and single topologies with 2 to 10 nodes. The payload was transferred from the first host to the last host and increased gradually from 5 to 50. Similar experiments were carried out with the reference (Ovs), PoX, and Ryu controllers.

The arrangement of this manuscript is as follows: Section 2 explains the related work and the need for our work (filling the gap). Section 3 explains the background. Section 4 explains the implementation and experimental setup. Section 5 gives the results. Section 6 summarizes and concludes with the outcome and future prospects.

2 Related Work

Islam et al. [2] used the Ryu controller to develop a single topology with three nodes. They evaluated the network performance in terms of bandwidth, throughput, round-trip time, jitter, and packet loss among the nodes, using the mininet emulator. The iperf utility was used to generate TCP traffic, and the iperf3 command was used for measuring throughput; they also reported on jitter and packet loss. The ping utility was used to measure round-trip time.

Kumar and Sood [9] compared the characteristics of traditional networks and software-defined networks and compared different topologies based on performance metrics such as throughput, round-trip time, end-to-end delay, and bandwidth. They used mininet and Wireshark for constructing the study environment and measuring the performance metrics, respectively. Based on the results of their study, they identified the best topology for their environment.

Li et al. [10] compared network performance based on the bandwidth and latency of the Ryu and floodlight controllers in different topologies, namely single, minimal, linear, tree, reversed, and custom, using the mininet emulator. The results showed that floodlight performed better in all the topologies.

Chouhan et al. [11] compared network performance using the throughput, jitter, latency, and packet loss of the Ryu and floodlight controllers under different topologies, namely single, linear, tree, torus, and custom, using the mininet emulator. The results showed that Ryu achieved better throughput than the floodlight controller in all topologies, and Ryu performed better in all topologies with respect to latency and jitter except torus.

Ali et al. [12] compared the Python-based PoX and Ryu controllers on bandwidth and latency with different topologies such as single, linear, tree, dumbbell, datacenter networks (DCN), and software-defined naval networks which use satellite communication systems (SATCOM), i.e., SDN-SAT. Experimental results showed


that Ryu had superior performance compared to the PoX controller in both throughput and latency.

Kaur et al. [13] compared the Python-based PoX, Ryu, and Pyretic controllers for network performance on the mininet emulator. They used a single topology with three nodes and calculated RTT using the ping command; iperf was used to calculate throughput. The results showed that Ryu performed better than PoX, while the performance of Pyretic lagged far behind.

Bholebawa and Dalal [14] compared the network performance of the PoX and floodlight controllers over different topologies such as single, linear, tree, and custom using the mininet emulator. Round-trip time between end hosts, measured with the ping command, and throughput were compared. The results showed that the floodlight controller was faster than the PoX controller. The performance of the mininet reference controller was also measured over three topologies, namely single, linear, and tree, using bandwidth utilization, packet transmission rate, RTT, and throughput [15].

Rohitaksha and Rajendra [16] analyzed the network performance (latency and throughput) of the PoX and Ryu controllers using hybrid software-defined network topologies. They used linear and tree topologies with the mininet emulator. The results showed that the PoX controller was better than the Ryu controller.

Pandian [17] proposed the improved-flow scheduling and mobility management protocol (IFSMM) to guarantee network performance; it provides seamless access for the Internet of things and is put forth to manage heterogeneous multi-networks using NS2. In another study, Chen and Smys [18] considered growing demands from the user's perspective and performed multimedia analytics by means of a trust-based paradigm. One of the considerations in this study was the use of runtime security in SDN. They created an environment for detecting suspicious flows using hybrid deep learning-based anomaly detection, employing a benchmark dataset and real-time evaluations.

3 Background

3.1 Mininet

Mininet is a lightweight emulator with which users can build network models efficiently. It enables one to quickly create, interact with, customize, and share an SDN prototype, and it provides a smooth path to running on hardware. Mininet can start a server, a switch, and a controller in a short time through commands exposed via a Python interface. Users can create custom topologies using Python scripts, and it is simple and convenient to create topologies using its command line interface. It supports the OpenFlow protocol, which is used to communicate between the physical layer and the control layer. A minimal example is sketched below.
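To make this concrete, the following minimal sketch (an illustration, not code from this study) builds a custom single-switch topology through mininet's Python API and attaches it to an external controller; the controller address, port, and host count are placeholder assumptions.

from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController

class CustomSingleTopo(Topo):
    # One switch with n hosts attached to it
    def build(self, n=3):
        switch = self.addSwitch('s1')
        for h in range(n):
            host = self.addHost('h%d' % (h + 1))
            self.addLink(host, switch)

# Attach to an external controller (e.g., PoX or Ryu) assumed to listen on 127.0.0.1:6633
net = Mininet(topo=CustomSingleTopo(n=3),
              controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
net.start()
net.pingAll()   # all-pairs ping; reports reachability and per-pair RTT
net.stop()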

Table 1 Number of links and switches in single, linear topology

Topology | No. of hosts | No. of switches
Single   | N            | 1
Linear   | N            | N

3.2 Topologies

A network topology defines the arrangement of the nodes within a network, which may follow any specific pattern depending on the requirement. The nodes in a network may include the end users or host machines, core network devices such as hubs, switches, and routers, or the printers, scanners, and fax machines connected within the network. The overall arrangement of these devices is collectively called the topology of the network. Mininet contains many default topologies, and of those, the single and linear topologies were used in our study. Kaur et al. [19] have explained the setup of linear and single topologies in their study, where they used mininet as an emulator for creating their network environment. A linear topology contains "N" switches and "N" hosts, with a link between each host and its switch. A single topology contains "N" hosts connected to one switch, with a link between the switch and each of the "N" hosts (Table 1). Both arrangements can be instantiated directly from mininet's built-in topology classes, as sketched below.
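For reference, both arrangements in Table 1 ship with mininet as built-in topology classes; a short sketch (the node count k=4 is an arbitrary example) is:

from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo, LinearTopo

# Single topology: k hosts attached to one switch
net = Mininet(topo=SingleSwitchTopo(k=4))
net.start()
net.pingAll()
net.stop()

# Linear topology: k switches in a row, one host per switch
net = Mininet(topo=LinearTopo(k=4))
net.start()
net.pingAll()
net.stop()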

3.3 Round-Trip Time

The time required for sending a packet to a specific destination and receiving a response is the round-trip time (RTT), and this is considered one of the parameters for assessing the performance of a network. The technique generally used for calculating RTT is to send probe packets from one host to another in the network [20]. In this work, RTT is used to find the delay between nodes in the network; ping was used to probe hosts under the different topologies, and the results were compared.
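A small illustrative sketch of RTT probing via ping follows; the host address, probe count, and payload size are placeholder assumptions (inside mininet, the same command can be issued from a host object with h1.cmd(...)).

import re
import subprocess

def average_rtt(host='10.0.0.2', count=10, payload=56):
    # -c sets the number of probe packets, -s the ICMP payload size in bytes
    out = subprocess.run(['ping', '-c', str(count), '-s', str(payload), host],
                         capture_output=True, text=True).stdout
    # Linux ping summary: rtt min/avg/max/mdev = 0.041/0.062/0.118/0.021 ms
    m = re.search(r'= [\d.]+/([\d.]+)/', out)
    return float(m.group(1)) if m else None   # average RTT in milliseconds

print(average_rtt())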

3.4 SDN Controllers

The features of the Ovs, PoX, and Ryu controllers are summarized in Table 2.

Table 2 Comparing the features of Ovs, PoX, and Ryu controllers

Features            | Ovs controller [21]   | PoX [22]            | Ryu [23]
Language            | C                     | Python              | Python
OF version          | Nicira                | 1.0                 | 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, Nicira extensions
Location            | Reference controller  | Internal            | External
Partner             | Independent developer | Nicira Networks     | Nippon Telegraph and Telephone Corporation
Open source license | Apache                | GNU                 | Apache
REST support        | -                     | Partial             | Yes
GUI                 | No                    | Yes                 | Yes
Platform support    | Linux                 | Linux, Windows, Mac | Linux
Distributed         | No                    | No                  | Yes
Multithreading      | No                    | Yes                 | Yes
Documentation       | Poor                  | Poor                | Medium

4 Implementation

4.1 Test Bed Description

The test bed for this study was created using the mininet emulator. Two different topologies (single and linear) were designed with 2, 3, 4, 5, and 10 nodes each. The traffic was directed from host h1 to host hn, where "n" is the number of hosts connected and depends on the type of topology and the nodes present. Each topology was designed with a varying number of nodes, and the experiment was run with these topologies created on the mininet emulator (Figs. 1 and 2). The payload was set in the range of 10–50, with increments of 10. Round-trip time was calculated for the performance evaluation of the network with these topologies and nodes.

Fig. 1 Linear topology created using mininet


Fig. 2 Single topology created using mininet

Table 3 System requirement

Task             | Tools used
Operating system | Ubuntu 18.04 x86_64
Switch           | OpenvSwitch 2.4.0
Southbound API   | OpenFlow 1.3
Emulator         | Mininet 2.2.1
Create packet    | Ping command
Controller       | Ovs/PoX/Ryu

4.2 System Requirement See Table 3.

5 Result and Analysis

The results of our study indicate that the Ovs controller performs better than the PoX and Ryu controllers across varying nodes and payloads. On comparing the Ovs controller in linear and single topologies with varying numbers of nodes (2, 3, 4, 5, and 10), the Ovs controller performed better in the linear topology, whereas in the single topology, the RTT scores decreased as the number of nodes increased in each of the experiments. When seen individually for each node count within each topology, the Ovs controller performed consistently better in the linear topology than in the single topology (Fig. 3). On comparing the PoX controller in linear and single topologies with varying numbers of nodes (2, 3, 4, 5, and 10), the PoX controller performed better in the single topology than in the linear topology (Fig. 4). The third controller in our study, Ryu, was seen to perform better in the single topology than in the linear topology


Fig. 3 Comparison of Ovs controller in linear and single topology with different number of nodes (2, 3, 4, 5, and 10)

Fig. 4 Comparison of Pox controller in linear and single topology with different number of nodes (2, 3, 4, 5, and 10)

(Fig. 5). When seen against the varying nodes, the performance of the Ryu controller was better in all the individual experiments with different numbers of nodes (Figs. 6, 7, 8, 9, and 10). On the whole, the comparison of the controllers across varying numbers of nodes and different payloads suggests that the Ovs controller performed better in the linear topology, on an average calculated from the RTT values. The other two controllers, PoX and Ryu, were seen to perform better in the single topology compared with their RTT values in the linear topology.

In the next scenario, with a fixed number of nodes in each experiment, the controllers (Ovs, PoX, and Ryu) were compared against each other with varying topologies and varying payloads. With two nodes, the Ovs controller performed better than the PoX and Ryu controllers (Fig. 6), while the Ryu controller performed better as the number of nodes in the topologies increased. With the number of nodes increased to three, the PoX controller performed better with varying payloads (Fig. 7). With the number of nodes increased to four and five, the


Fig. 5 Comparison of Ryu controller in linear and single topology with different number of nodes (2, 3, 4, 5, and 10)

Fig. 6 Comparison of Ovs, Pox, and Ryu controllers in the linear and single topologies with two nodes and varying payloads (5, 10, 20, 30, 40, and 50)

Fig. 7 Comparison of Ovs, Pox, and Ryu controllers in the linear and single topologies with three nodes and varying payloads (5, 10, 20, 30, 40, and 50)


Fig. 8 Comparison of Ovs, Pox, and Ryu controllers in the linear and single topologies with four nodes and varying payloads (5, 10, 20, 30, 40, and 50)

Fig. 9 Comparison of Ovs, Pox, and Ryu controllers in the linear and single topologies with five nodes and varying payloads (5, 10, 20, 30, 40, and 50)

Fig. 10 Comparison of Ovs, Pox, and Ryu controllers in the linear and single topologies with ten nodes and varying payloads (5, 10, 20, 30, 40, and 50)


performance of the PoX controller was better in the linear topology, while the Ovs controller performed better in the single topology (Figs. 8 and 9). In the scenario with the maximum number of nodes (ten), the Ovs controller was a better performer than the PoX and Ryu controllers (Fig. 10). On the whole, when considering the average of the RTT values across all scenarios for the individual controllers, the Ovs controller performed better than the PoX and Ryu controllers in both linear and single topologies.

6 Conclusion and Future Scope

The overall performance of each controller in varying environments shows that the Ovs controller performs better than the PoX and Ryu controllers. However, the controllers have their individual performance levels in each scenario, and these were seen to vary between experimental setups. The performance of a controller varied with the increase in the number of nodes and with variations in the topologies. Linear and single topologies have their own characteristics, and these appear to have an impact on the behavior of the controller used with them. On the whole, this study indicates that the requirements of each setup can differ in the real world, and networks should be designed together with the controllers chosen, based on the performance requirement of the network setup. Though the performance of the controllers has been tested with varying numbers of nodes and topologies, further evaluation of these controllers with heavier payloads could provide insights into their suitability in various networks. Hence, further studies should be designed with an increased number of nodes so that the performance of the controllers is clearly understood as the node count grows.

References

1. The Growth in Connected IoT Devices Is Expected to Generate 79.4ZB of Data in 2025, According to a New IDC Forecast. Available: https://www.idc.com/getdoc.jsp?containerId=prUS45213219 (2020)
2. Islam, M.T., Islam, N., Al Refat, M.: Node to node performance evaluation through RYU SDN controller. Wirel. Pers. Commun. 1–16 (2020)
3. Alshaer, H., Haas, H.: Software-defined networking-enabled heterogeneous wireless networks and applications convergence. IEEE Access 8, 66672–66692 (2020)
4. Priya, A.V., Radhika, N.: Performance comparison of SDN OpenFlow controllers. Int. J. Comput. Aided Eng. Technol. 11, 467–479 (2019)
5. Shahzad, F., Khan, M.A., Khan, S.A., Rehman, S., Akhlaq, M.: AutoDrop: automatic DDoS detection and its mitigation with combination of OpenFlow and sFlow. In: International Conference on Future Intelligent Vehicular Technologies, pp. 112–122 (2016)
6. Cui, Y., Yan, L., Li, S., Xing, H., Pan, W., Zhu, J., et al.: SD-Anti-DDoS: fast and efficient DDoS defense in software-defined networks. J. Netw. Comput. Appl. 68, 65–79 (2016)


7. Tivig, P.-T., Borcoci, E.: Critical analysis of multi-controller placement problem in large SDN networks. In: 2020 13th International Conference on Communications (COMM), pp. 489–494 (2020)
8. Alghamdi, K., Braun, R.: Software defined network (SDN) and OpenFlow protocol in 5G network. Commun. Netw. 12, 28 (2020)
9. Kumar, D., Sood, M.: Analysis of impact of network topologies on network performance in SDN. In: International Conference on Innovative Computing and Communications, pp. 357–369 (2020)
10. Li, Y., Guo, X., Pang, X., Peng, B., Li, X., Zhang, P.: Performance analysis of floodlight and Ryu SDN controllers under mininet simulator. In: 2020 IEEE/CIC International Conference on Communications in China (ICCC Workshops), pp. 85–90 (2020)
11. Chouhan, R.K., Atulkar, M., Nagwani, N.K.: Performance comparison of Ryu and floodlight controllers in different SDN topologies. In: 2019 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE), pp. 188–191 (2019)
12. Ali, J., Lee, S., Roh, B.-h.: Performance analysis of Pox and Ryu with different SDN topologies. In: Proceedings of the 2018 International Conference on Information Science and System, pp. 244–249 (2018)
13. Kaur, K., Kaur, S., Gupta, V.: Performance analysis of python based OpenFlow controllers (2016)
14. Bholebawa, I.Z., Dalal, U.D.: Performance analysis of SDN/OpenFlow controllers: PoX versus floodlight. Wirel. Pers. Commun. 98, 1679–1699 (2018)
15. Bholebawa, I.Z., Dalal, U.D.: Design and performance analysis of OpenFlow-enabled network topologies using Mininet. Int. J. Comput. Commun. Eng. 5, 419 (2016)
16. Rohitaksha, K., Rajendra, A.: Analysis of POX and Ryu controllers using topology based hybrid software defined networks. In: International Conference on Sustainable Communication Networks and Application, pp. 49–56 (2019)
17. Pandian, M.D.: Enhanced network performance and mobility management of IoT multi networks. J. Trends Comput. Sci. Smart Technol. (TCSST) 1, 95–105 (2019)
18. Chen, J.I.Z., Smys, S.: Social multimedia security and suspicious activity detection in SDN using hybrid deep learning technique. J. Inf. Technol. 2, 108–115 (2020)
19. Kaur, K., Singh, J., Ghumman, N.S.: Mininet as software defined networking testing platform. In: International Conference on Communication, Computing & Systems (ICCCS), pp. 139–142 (2014)
20. Atary, A., Bremler-Barr, A.: Efficient round-trip time monitoring in OpenFlow networks. In: IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications, pp. 1–9 (2016)
21. Ovs controller. Available: https://docs.openvswitch.org/en/latest/faq/openflow/ (14 Dec)
22. PoX controller. Available: https://noxrepo.github.io/pox-doc/html/ (14 Dec)
23. Ryu controller. Available: https://ryu-sdn.org/ (14 Dec)

Preprocessing of Datasets Using Sequential and Parallel Approach: A Comparison

Shwetha Rai, M. Geetha, and Preetham Kumar

Abstract Data preprocessing is a technique in data mining to make data ready for further processing according to the requirement. Preprocessing is required because the data might be incomplete, redundant, or drawn from different sources that require aggregation, and data can be processed either sequentially or in parallel. There are several parallel frameworks, such as Hadoop, MPI, and CUDA, to process the data. A survey has been done to understand these parallel frameworks, and a comparison between the sequential and parallel approaches is carried out to compare the efficiency of the two.

Keywords Data cleaning · Data preprocessing · Parallel algorithms · Redundant data · Sequential algorithms

1 Introduction

The advent of the computer and its usage in multiple areas has contributed enormously to the generation of data. As the years passed, the memory space offered by computers also increased from kilobytes to petabytes. This has helped in storing and processing the enormous amount of data generated from various fields of science. The processing and analysis of data are carried out to understand the nature of the users or systems, which helps improve the service provided to users or the performance of the system, respectively. For the data to be processed and

S. Rai (B) · M. Geetha Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India e-mail: [email protected]
M. Geetha e-mail: [email protected]
P. Kumar Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_27


analyzed, it must be relevant, complete, and without noise; otherwise, it will lead to inappropriate analysis of the data. In the real world, data can be incomplete, noisy, and high-dimensional, which requires preprocessing before mining and analysis. There are several data preprocessing techniques, which can be broadly divided into data preparation and data reduction [1]. The dataset obtained from different sources may vary from kilobytes to petabytes or even more. Preprocessing of data can be done within a few seconds if the dataset is small, but it may take years to preprocess big data. Data is said to be big data if it has the following characteristics [2]:

1. Volume: the amount of data available.
2. Velocity: the speed at which the data is created.
3. Variety: the representation of the data, such as textual, pictorial, and video.
4. Complexity: the degree of interconnections and interdependence in big data structures, such that a small change in one or a few elements can yield very large changes.
5. Value: the usefulness of data in making decisions.

Preprocessing big data in a traditional sequential pattern would take years to complete. Hence, there is a need for a parallel approach to process the data, which is much faster than the sequential process. There are several programming models for processing big data, such as Hadoop, MPI, and CUDA [3–7]. Message passing interface (MPI) is used as a communication interface between the processors in a shared multiprocessor environment. It is a standard specification for libraries used to pass messages.

2 Background Theory

Knowledge discovery from databases is carried out using data mining algorithms. These algorithms require the data to be preprocessed so that they are suitable for mining and knowledge discovery. Several data preprocessing techniques are used in preparing the data for mining, and the choice of technique in any research is based on the requirements of the research undertaken. In [8], the data preprocessing module comprised data normalization, data clustering, and data splitting to obtain the processed input for process mining, whereas the work in [9] required normalization of the dataset before processing the information. A comparison of various preprocessing techniques for Twitter data was carried out in [10], which showed that removing punctuation from the dataset did not contribute significantly to improving classification accuracy. The experimentation conducted in [11] showed that the Naïve Bayes classifier performed better on preprocessed data, and the false positive rate was reduced by a significant margin of 25.39%. Preprocessing is a crucial step for big data, since large volumes of data must be processed and the speed of processing must match the velocity


of the arriving data [12]. Hence, if redundant data is removed from the dataset at an early stage, it will speed up the process of gaining information from big data. If the data is large, the sequential approach is inefficient; hence, a parallel approach is required to preprocess the data.

2.1 Data Preprocessing (DPP)

DPP is a technique to prepare the data for further processing in order to obtain useful information that can be used for analysis. Preprocessing is required because the data might be incomplete, redundant, or drawn from different sources that require aggregation. The important steps involved in data preprocessing are cleaning, integration, reduction, and transformation of data [13].

Data Cleaning Data cleaning comprises concepts such as:

1. Handling missing values: Missing values in the dataset can be handled using methods such as rejecting the tuple, manually filling the missing value, replacing the missing value with the most probable value or a global constant, using the central tendency of the attribute to fill the missing value, using the mean or median of the attribute over all samples belonging to the same class as the given tuple, or using the most likely value to fill the missing value.
2. Removal of noise: A random error in the dataset is called noise. It can be removed using data smoothing techniques such as binning, regression, and outlier analysis.

Data Integration Data is usually stored at multiple data stores, which requires merging data from these stores for future analysis. The merging may lead to inconsistencies, redundancies, etc., in the resulting dataset.

1. Entity identification problem: This is an integration problem that arises when objects and schemas from different sources have to be matched. The structure of the data must be considered to ensure that the functional dependencies and referential constraints of the source system are consistent with those of the destination system.
2. Correlation and redundancy analysis: A feature is said to be redundant if it can be "derived" from another feature or feature set. Inconsistency in naming an attribute across different sources may also lead to redundancy in the data after integration. Correlation analysis, which measures how strongly two attributes influence each other, can be used to detect such redundancies.
3. Tuple duplication: After integrating data from multiple data stores, there may be duplicate tuples in the dataset. This may be due to incorrect data entry or to updating some of the tables but not all related tables.
4. Data value conflict detection and resolution: Attribute values in different locations may vary due to differences in representation and encoding.

A sketch of a few of these cleaning steps is shown after this list.
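The following hedged Python sketch illustrates some of the cleaning steps above using pandas; the file name and the 'usage' column are hypothetical, and the mean-fill and binning choices are only examples of the methods listed.

import pandas as pd

df = pd.read_csv('records.csv')                 # hypothetical input file

# Handling missing values: fill numeric gaps with the column mean
df = df.fillna(df.mean(numeric_only=True))

# Removal of noise by binning: discretize a numeric column into equal-width bins
df['usage_bin'] = pd.cut(df['usage'], bins=3, labels=['low', 'mid', 'high'])

# Tuple duplication: drop exact duplicate rows introduced by integration
df = df.drop_duplicates()

# Correlation analysis to flag potentially redundant attributes
print(df.corr(numeric_only=True))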


Data Reduction Data reduction is a technique for obtaining a smaller dataset from a large volume of data without losing its integrity. Numerosity reduction, dimensionality reduction, and data compression are different types of data reduction techniques.

Data Transformation Data transformation is a technique in which the data is transformed into a different representation that is suitable for mining. The different types of data transformation are aggregation, attribute construction, concept hierarchy generation, discretization, normalization, and smoothing.

2.2 Message Passing Interface

MPI is a standard specification for libraries used to pass messages. Message passing is useful in a multiprocessor architecture where the processors communicate with each other for data exchange and synchronization. A simple message passing model consists of multiple processors connected to each other via an interconnection network. Each system has its own local memory where the data is stored, and the interconnection network is used for message passing between the processors. Each processor is assigned one process to maximize performance. A process is assigned an identifier called the rank, and the size specifies the total number of processes created. The user specifies the total number of processes required for executing the program in parallel. The communication between the processes is done through MPI library routines. Any processor with a basic MPI environment can use the MPI libraries and run the program in parallel on different systems over different sets of data. MPI supports both collective and point-to-point communication. According to [14], MPI is suitable for iterative algorithms where nodes depend on other nodes and require data exchange to proceed.
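The experiments later in this paper use MPI from C; as a minimal illustration of ranks, size, and point-to-point routines, the following sketch uses the mpi4py Python binding (assumed installed alongside an MPI runtime) and would be launched as, e.g., mpiexec -n 4 python demo.py.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # identifier assigned to this process
size = comm.Get_size()   # total number of processes launched

if rank == 0:
    # Point-to-point: process 0 sends one message to every other process
    for dest in range(1, size):
        comm.send('task %d' % dest, dest=dest, tag=0)
else:
    task = comm.recv(source=0, tag=0)
    print('process %d of %d received: %s' % (rank, size, task))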

3 Research Methodology

The main objective is to develop sequential and parallel algorithms for the removal of duplicate files and to compare the efficiency of the algorithms developed. Hence, data preprocessing is carried out using both sequential and parallel approaches. This work focuses on the comparison between the efficiency of the sequential and parallel approaches for preprocessing the dataset. The same set of data was taken into consideration for comparing the efficiency of the two approaches. A summary of the research design is given below:

1. Approach: The approach used is a quantitative approach.
2. Methodological paradigm: Positivism/post-positivism.
3. Strategy: An empirical study is conducted.


4. Data collection method: Secondary data from Open Government Data Platform (OGD) India [15].
5. Variables in the study: The independent variable is the dataset; the dependent variable is the performance of the approach.
6. Extraneous variables: None.

3.1 Data Preprocessing Using Sequential Approach

Figure 1 is a flowchart representing the flow of control for data preprocessing using a sequential approach. A list of files is read from the directory and checked for duplicates. Only one copy of each file is moved to the new location, and the files in the new location are opened to obtain the reduced dataset. The algorithm is given in Algorithm 1; a Python sketch of this procedure follows the listing.

Algorithm 1 dataPreprocessSequential()
Description: Pre-process the data to obtain a reduced data set.
Input: CSV files
Output: Reduced data set
1: i ← 0
2: fileList[i ← i+1] ← List of files in the directory
3: noOfFiles ← i
4: i ← 0
5: while i ≠ noOfFiles do
6:   j ← 0
7:   while j ≠ noOfFiles do
8:     same ← fileCmp(fileList[i], fileList[j])
9:   end while
10:  if !same then
11:    copyFileToDifferentLocation(fileList[i])
12:  end if
13: end while
14: i ← 0
15: newFileList[i ← i+1] ← List of files in the new directory
16: noOfFiles ← i
17: i ← 0
18: while i ≠ noOfFiles do
19:   generateReducedDataSet(newFileList[i])
20:   displayReducedDataset(newFileList[i])
21: end while
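Algorithm 1 detects duplicates by pairwise fileCmp comparisons; an equivalent single-pass realization hashes each file's contents and keeps one file per unique digest. The Python sketch below (directory names are illustrative assumptions) shows this variant.

import hashlib
import os
import shutil

def preprocess_sequential(src_dir, dst_dir):
    # Files with equal content hashes are duplicates; keep the first of each
    seen = set()
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        with open(os.path.join(src_dir, name), 'rb') as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            shutil.copy(os.path.join(src_dir, name), dst_dir)

preprocess_sequential('data', 'data_unique')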


Fig. 1 Flowchart representing the sequential approach for data preprocessing

3.2 Data Preprocessing Using Parallel Approach

Figure 2 is a flowchart representing the flow of control for data preprocessing using a parallel approach. A list of files is read from the directory. Each file is assigned to a different process to check whether a duplicate copy exists, and only one copy of each file is moved to the new location. The same process opens the file from the new location to obtain the reduced dataset. The algorithm for the parallel approach is given in Algorithm 2.


Algorithm 2 dataPreprocessParallel()
Description: Pre-process the data to obtain a reduced data set.
Input: CSV files
Output: Reduced data set
1: i ← 0
2: fileList[i ← i+1] ← List of files in the directory
3: noOfFiles ← i
4: i ← 0
5: n ← createNProcesses(noOfFiles)
6: par for each process ← 0 to n do
7:   assignProcess[process ← process+1] ← fileList[i ← i+1]
8:   j ← 0
9:   while j ≠ noOfFiles do
10:    same ← fileCmp(fileList[i], fileList[j])
11:  end while
12:  if !same then
13:    copyFileToDifferentLocation(fileList[i])
14:    newFile ← open the copied file from the new location
15:    generateReducedDataSet(newFileList)
16:    displayReducedDataset(newFileList)
17:  end if
18: end par for each
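A hedged Python analogue of Algorithm 2 using the mpi4py binding follows (the actual experiments use C with MPI; directory names are illustrative): each rank hashes a disjoint slice of the file list in parallel, and the root gathers the digests and retains one copy per unique digest.

from mpi4py import MPI
import hashlib
import os
import shutil

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

src, dst = 'data', 'data_unique'        # illustrative directory names
files = sorted(os.listdir(src)) if rank == 0 else None
files = comm.bcast(files, root=0)       # every rank gets the same file list

# Each process hashes a disjoint slice of the file list in parallel
digests = {}
for name in files[rank::size]:
    with open(os.path.join(src, name), 'rb') as f:
        digests[name] = hashlib.sha256(f.read()).hexdigest()

# Gather partial results; the root keeps one file per unique digest
parts = comm.gather(digests, root=0)
if rank == 0:
    merged = {k: v for part in parts for k, v in part.items()}
    os.makedirs(dst, exist_ok=True)
    seen = set()
    for name in files:
        if merged[name] not in seen:
            seen.add(merged[name])
            shutil.copy(os.path.join(src, name), dst)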

4 Results and Analysis

4.1 Dataset Used

The input for the implementation of the two approaches is a real-life dataset. There are 60 files containing information regarding Swachh Bharat toilet construction in different states and districts. Figure 3 shows a sample dataset file.

4.2 Running Environment

The sequential program was implemented in C and run on Visual Studio 2012 under Windows 10 (64-bit), on an x64-based Intel Core i5 CPU at 2.30 GHz with 4 GB RAM. The parallel program was written using MPI and run on the same system as the sequential program.


Fig. 2 Flowchart representing the parallel approach for data preprocessing

4.3 Results

Table 1 shows the execution time for preprocessing the data using the sequential and parallel approaches. The experiments were performed by increasing the number of files in each run to observe the execution time taken to remove the redundant files with each approach. The transactions with missing values were deleted from the files before checking for redundancy. Figure 4 shows the effectiveness of the parallel approach compared with the sequential approach: the parallel approach is 66.57% faster than the sequential approach at finding the duplicate elements.


Fig. 3 Sample input dataset

Table 1 Execution time using sequential and parallel approaches

No. of files | Execution time using sequential approach (in ms) | Execution time using parallel approach (in ms)
10 | 0.198 | 0.032
20 | 0.324 | 0.088
30 | 0.458 | 0.136
40 | 0.698 | 0.218
50 | 0.761 | 0.264
60 | 0.849 | 0.361

Fig. 4 Execution time of sequential versus parallel approach


5 Conclusion and Future Enhancement

Data preprocessing is a major step in the data mining area, as it helps in retaining the dataset required for mining interesting patterns. Data preprocessing should be carried out based on the requirements placed on the dataset for further processing, and it can be carried out with both sequential and parallel approaches using different programming models. From the analysis of the two approaches, it is observed that the parallel approach is 66.57% faster compared to the sequential approach. The experiment can be extended to compare the time taken by various parallel programming approaches such as CUDA and OpenCL. In the experiments performed, the transactions with missing values were manually deleted from the files; a parallel program may be employed to handle the missing values in the input files.

References

1. García, S., Luengo, J., Herrera, F.: Data Preprocessing in Data Mining, vol. 72. Springer (2015)
2. Kaisler, S., Armour, F., Espinosa, J.A., Money, W.: Big data: issues and challenges moving forward. In: 2013 46th Hawaii International Conference on System Sciences, pp. 995–1004. IEEE (2013)
3. CUDA. http://www.nvidia.com/object/cuda_home_new.html. Last accessed 31 Dec 2020
4. The differences between MPI, GPU, and Hadoop. https://stackoverflow.com/questions/10237443/mpi-vs-gpu-vs-hadoop-what-are-the-major-difference-between-these-three-parallel. Last accessed 31 Dec 2020
5. Hadoop. https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html. Last accessed 31 Dec 2020
6. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Commun. ACM 51(1), 107–113 (2008)
7. Holmes, A.: Hadoop in Practice, vol. 3. Manning, New York (2012)
8. Shakya, S.: Process mining error detection for securing the IoT system. J. ISMAC 2(03), 147–153 (2020)
9. Anand, J.: A methodology of atmospheric deterioration forecasting and evaluation through data mining and business intelligence. J. Ubiquit. Comput. Commun. Technol. (UCCT) 2(02), 79–87 (2020)
10. Effrosynidis, D., Symeonidis, S., Arampatzis, A.: A comparison of pre-processing techniques for Twitter sentiment analysis. In: International Conference on Theory and Practice of Digital Libraries, pp. 394–406. Springer (2017)
11. Kumara, B.A., Kodabagi, M.M., Choudhury, T., Um, J.S.: Improved email classification through enhanced data preprocessing approach. Spat. Inf. Res. 1–9
12. Shehab, N., Badawy, M., Arafat, H.: Big data analytics and preprocessing. In: Machine Learning and Big Data Analytics Paradigms: Analysis, Applications and Challenges, pp. 25–43. Springer (2021)
13. Han, J., Kamber, M., Pei, J.: Data Mining Concepts and Techniques, 3rd edn. The Morgan Kaufmann Series in Data Management Systems, vol. 5, issue 4, pp. 83–124 (2011)
14. Chen, W.Y., Song, Y., Bai, H., Lin, C.J., Chang, E.Y.: Parallel spectral clustering in distributed systems. IEEE Trans. Pattern Anal. Mach. Intell. 33(3), 568–586 (2010)
15. Open Government Data Platform (OGD) India. https://data.gov.in/catalog/daily-data-rural-sanitation-coverage-under-swachh-bharat-mission. Last accessed 31 Dec 2020

Blockchain Technology and Academic Certificate Authenticity—A Review

K. Kumutha and S. Jayalakshmi

Abstract Blockchain technology is decentralized, distributed, secure, fast, transparent, and non-modifiable, which makes it more beneficial than existing technologies. Academic certificates issued by educational institutes are most significant for students at the time of recruitment, when attending government exams, when applying for higher education, and when obtaining a visa to go abroad. The certificate issuing method is not very transparent or verifiable; hence, there are many opportunities for making fake certificates. This problem can be solved using the recently emerged blockchain technology, with its features of high data security and immutable data storage in a distributed ledger, to combat document fraud and forgery. This research aims to enhance the document verification process by using blockchain technology. This paper also centers on broadening knowledge of blockchain and identifying the benefits, risks, and related challenges in the effective implementation of blockchain-based applications, with principles and guidelines for educational certificate verification. Finally, this review paper proposes an academic certificate authenticity system using blockchain technology to avoid the utilization of fake documents.

Keywords Blockchain · Digital certificates · Blockchain in education · Technical challenges

1 Introduction

Blockchain is an emerging technology that offers features such as decentralized, transparent, and tamper-proof data storage. It may be utilized to tackle issues such as lack of trust, fraud, high transaction cost, data sharing, security, and evaluating the reliability of a prospective participant in a transaction [1]. Consequently, blockchain technology is a promising technology to prevent

K. Kumutha (B) Tagore College of Arts and Science, VISTAS, Chennai, India
S. Jayalakshmi Department of Computer Applications, VISTAS, Chennai, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_28


fraudulent activities in the present certificate issuing and verification system, i.e., the digital certificate framework. In general, academic universities record their students' final degree certificates and courses within their institute's database. Due to increasing demand, there is a continuing threat of students taking shortcuts through a thriving market of forged degrees and credentials. It is not a stretch to find fake universities and degree mills that operate purely to make money, copycat Web sites, as well as sites and individuals that issue and print fake degrees and academic credentials. Many Web sites can issue fake degrees and certificates, which clearly shows a lack of authenticity and validity testing. Students who submit fake certificates when applying for a job, sitting government exams, or seeking admission for higher studies in foreign universities may perform poorly in business and demonstrate a willingness to commit fraud for individual gain. These fake certificates may also delay the admission process in foreign academic institutes (a recent survey by the UK's National Qualification Agency found that only one in four university admissions staff feel confident spotting fake qualification documents) [2]. Most institutes use QR codes to check the authenticity of academic achievements and provide a digital trail for verifying certificates. This review paper focuses on certificate verification using blockchain technology to avoid fake documents.

This paper contains five sections dealing with the concept of blockchain, the research methodology employed to collect the various articles related to BT in education, the research objectives, and finally an overview of the literature review. It also analyzes some technical challenges and future trends of blockchain technology. Thus, the aim of this research is to identify a blockchain framework for implementing data security in the requirements of academic certificate verification. The intention of the blockchain framework is to eradicate the problem of fake certificates and forged academic documents.

2 Literature Review

This review comprises scholarly research papers from reputed journals, conferences, and books, drawing on 35 well-known sources, including articles from top world universities published in IEEE, Frontiers of Computer Science (Springer), Scopus, Journal of NCA, Computer Communications, Transactions on Emerging Telecommunications Technologies, Journal of Supercomputing, ACM, Elsevier, and ScienceDirect. The purpose of this literature review is to identify the existing mechanisms, approaches, and methodologies around students' data and blockchain that are consistently at work in the industry. In the review, we observed the latest technological trends and the researchers and industrialists who are simultaneously working on this technology and improving it based on its current shortcomings. Our examination is based on four basic research questions, so as to discover appropriate research directions in the domain of blockchain and its applications. The reviewed literature describes the principles and the


foundations of blockchain technology, as well as its applications in various real use cases. It is worth mentioning that practically all papers indicate a need for further research in the field. Many authors believe that blockchain has huge potential, yet there are still many obstacles and challenges to its real-world implementation. For instance, there is not a single, universally accepted definition of blockchain; thus, developing common standards and guidelines (both technical and legal) is highly necessary. Standardization would be the first step toward designing unified approaches to blockchain software development and security practices, without which a sensible deployment of blockchain is not feasible.

2.1 What is Blockchain?

A blockchain, or chain of blocks, is a specific data-recording structure that operates in a decentralized way. Each block contains transaction data and a hash of the previous block, and new data can only be inscribed on a new block appended to the chain. This means all the blocks before it are hashed and cannot accept any alterations, edits, or changes without these being detected. Each transaction in the blockchain structure (Fig. 1) has its hash, date, timestamp, transaction data, and the previous hash of the block. A user can submit a transaction request to the blockchain network. That transaction is then validated by one of the peers through the mining process. Once the transaction is approved, its hash key is generated and distributed to all the peers in the network. A Merkle tree is commonly used for hashing, as it gives easy hash and de-hash options.

Fig. 1 Blockchain structure
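To make the hash linking concrete, the following toy Python sketch (an illustration, not taken from the reviewed papers) chains blocks by embedding each predecessor's hash:

import hashlib
import json
import time

def make_block(data, previous_hash):
    block = {'timestamp': time.time(), 'data': data, 'previous_hash': previous_hash}
    # The block's own hash covers its whole contents, chaining it to its predecessor
    block['hash'] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block('genesis', '0' * 64)
block1 = make_block('certificate record #1', genesis['hash'])
block2 = make_block('certificate record #2', block1['hash'])

# Any edit to block1 changes its hash and breaks block2's previous_hash link
assert block2['previous_hash'] == block1['hash']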


2.2 How Does Blockchain Work

Blockchain technology is a peer-to-peer, distributed ledger system spanning several combined computers. A blockchain network is essentially a network of computers or nodes connected over the Internet. A block can contain transaction data, a timestamp, a transaction root hash, and a nonce value. Whenever a new block is created, a miner (any node within the blockchain network) can verify it by solving a cryptographic mathematical puzzle and gain rewards [3]. This node then distributes the newly created block to other nodes within the network for validation. This is accomplished by means of the proof-of-work (PoW) consensus algorithm. Only if the transactions within the newly created block are valid is it added to the blockchain network. The blockchain is a linked list-like data structure that publicly maintains details of data and its transactions via a peer-to-peer network. Each movement of data is secured with the SHA-256 hashing algorithm, and the transaction summaries are grouped and kept as blocks of data. The blocks are then joined via the hash value of the previous block, and so on, and thereby secured against tampering. This entire functionality of the blockchain produces a secure and non-modifiable record of the transactions that happen across the P2P network.
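A minimal sketch of the PoW puzzle (illustrative only): the miner searches for a nonce whose block hash starts with a required number of zeros; raising the difficulty exponentially increases the hashing effort, which is also why mining consumes so much power.

import hashlib

def proof_of_work(block_data, difficulty=4):
    prefix, nonce = '0' * difficulty, 0
    while True:
        digest = hashlib.sha256(('%s%d' % (block_data, nonce)).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest   # the nonce proves the work was done
        nonce += 1

nonce, digest = proof_of_work('certificate record #1')
print(nonce, digest)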

2.3 What are the Main Advantages of Blockchain?

The main advantages of blockchain technology, as shown in Fig. 2, are as follows.

Process reliability: Immutability is the key feature, ensuring that transactions and block contents cannot be modified, which assures a high level of data security.

Traceability: The design of blockchain provides a permanent history of data that can easily be traced at any time. Once a transaction is added to the blockchain, it cannot be rewritten, but it can be located with the help of its hash key.

Fig. 2 Main advantages of blockchain


Security: The blockchain network provides a unique identity for each peer, which ensures that the owner of the account is the one processing the transactions. Since the data is recorded within the blockchain in encrypted form, it becomes difficult to hack the data.

Faster processing: Verifying the originality of a certificate with the help of blockchain technology is faster than the traditional method of verification; the processing time is reduced to minutes or seconds.

2.4 Key Weaknesses Related to Blockchain Technology

Blockchain technology is an emerging solution that still suffers from some inherent challenges and issues, as summarized in Fig. 1.

Consumption of power: The main disadvantage of blockchain is its power consumption. Whenever a miner needs to approve a new transaction into the blockchain, more power is consumed to validate that transaction; adding a transaction to a real-time ledger requires considerable energy.

Cost: The initial capital cost of blockchain is very high. The results of studies show that costs are a very big issue: the typical cost of a transaction is $75 to $160, and most of this cost is spent on power consumption [4].

Lack of awareness and understanding: The chief challenge associated with blockchain is a lack of awareness of the technology, especially in sectors other than banking, and a widespread lack of understanding of how it works. As the blockchain ecosystem evolves and different use cases emerge, organizations in all industry sectors will face a complex and potentially contentious array of issues, as well as new dependencies.

2.5 Blockchain Applications in Academic Certificate Authenticity

Blockchain is a recently emerging concept, and its usage in academic fields is quite new, but this study identified a large number of applications and patterns that implement the requirements of academic systems. The comparative analysis is framed as Table 1, which shows the feature, the blockchain framework, and the mode of accessibility of each academic system. Most of the applications use a coin-based framework and a public blockchain to verify academic certificates. The EduCTX application supports only higher education certificates and is implemented on a permissioned blockchain [5]. Table 1 also illustrates the educational institutes, such as schools and universities, that are creating (and in some cases have already created) blockchain-based authentication frameworks to issue certificates using blockchain. The University of Nicosia was the first academic university to use this technology to authenticate certificates [6, 7].


Table 1 Blockchain-based applications in education

Application | Feature | Blockchain platform/implementation/accessibility | Coin/gas based | Year
Edgecoin | Fraud protected, ensured great arrangements | DApps/public | Yes | 2016
Tutellus | Solves the current educational costs for college-level students | DApps and smart contracts, TUT token as the gas/public | Yes | 2016
Blockcerts | Produce, secure, validate and issue e-certificates | MIT Media Lab, wallet; Bitcoin and Ethereum/public | Yes | 2016
GradBase | E-educational record verification system | Bitcoin, QR code/public | Yes | 2016
EduCTX | Supports only higher-education certificates | Ark private/consortium network/private | No | 2017
TeachMe Please (TMP) | Database of learning institutions for both online and offline schools | Permissionless blockchain such as Ethereum or EOS/public | Yes | 2017
SuccessLife | The world-leading seminar and workshop organizer | Ethereum blockchain, SXL token/public | Yes | 2018
Sony Global Education (SGE) | Secure and share the record of a student | Partnership with IBM/public | Yes | 2018
OriginStamp | Secure time-stamping for ensuring security | Bitcoin blockchain technology/public | Yes | 2018
Echolink | Organizes relationships between educational entities | EKO token, permissionless blockchain/public | Yes | 2019


This literature study shows that the projects in Table 1 are not publicly available. Coin-based projects related to academic certificate verification were also searched, using any available white papers, in order to determine their implementation strategy. The study found Ethereum (Ether)-based applications for academic degree certificate systems designed to protect against fraudulent activity. This review identifies four projects related to educational certificate verification via Google Scholar and other online resources. Among these, only the "Blockcerts" application, developed by MIT, is open source [8, 9]. Three other projects, including "EduCTX" and "Gradbase", record entries on a blockchain to verify certificates based on coins [10]. Most of the papers describe prototypes of "Blockchain for Education" or blockchain-based certificate systems and publish their findings as research papers.

3 Related Works This study notes that storing a student's degree certificate and other records of performance in a blockchain provides high security and real-time access to those data at minimal cost. The literature is analyzed against authentication factors such as operational security, data security, network security, and privacy, and answers to the following research questions are evaluated in this paper.

3.1 RQ1: What are the Challenges Today in Tackling the Fake Degree Certificate Problem? It is clear that current methods of verifying the validity of education certificates are flawed. These flaws reveal the ways in which universities, education providers, and fellow students may be harmed, as well as the businesses that hire deceitful students, their staff, and their clients. • The growing size and complexity of the market • Highly inefficient, confusing, and expensive checks • Differing interpretations of testimonials, which confuse everyone. The most important job-screening challenges are minimizing hiring time, increasing overall efficiency in the screening process, verifying information, the high price of complete checks, and the lack of capacity to conduct or support international screening. As centralized businesses and approaches expose leaks and cracks in the current system, decentralized solutions built on blockchain technology come up more and more often in strategic conversations.


3.2 RQ2: What Solutions and Applications Have Been Proposed with Blockchain Technology for the Educational System? A variety of blockchain applications, from Bitcoin onwards, could have a huge impact on the way researchers build their reputation and become recognized. These applications have been developed for different purposes; most of the articles focus on supply chains and digital certificate verification for keeping data secure over the long term. This paper concentrates on applications developed for educational purposes and analyzes each application's aim, implementation mechanism, and the challenges that remain to be resolved, as summarized in Table 2.

The applications reviewed can be classified into several categories: decentralized, publicly distributed systems for student details; student performance reports and rewards; tools for tracking and verifying authorized degrees; enterprise accreditation and degree verification systems; academic certificate authentication systems; issuing and verifying digital certificates; efficient blockchain education programs that universities can incorporate; secured university results systems; learning outcomes and meta-diplomas; student data privacy and consent; lifelong learning; protecting learning objects; examination review; and enhancing student interaction in online learning [11, 12].

Most of the applications studied in this review centre on the verification and management of students' degree certificates. One system provides a digital certificate platform by making use of a secret session key, which it generates and uses to authenticate the user [13]. Another system makes use of a private-key-optimized digital identity for digital certificate generation and authentication [14]. A further system can be used by an institution on its official Web site; its purpose was to design an online, verification-based certificate system for the institution [15]. In another approach, a unique ID is generated from a particular facial region and used for one-to-one verification of documents [16]. Yet another system makes use of QR codes for the authentication of digital certificates, with a server database keeping a record of all generated QR codes; built on the client-server model, it generates certificates in batches rather than individually [17].

However, a few of the reviewed papers focus on the e-learning process, enhancing students' interaction by rewarding them with virtual currencies, and on auditing and sharing exam papers to validate students' academic achievements [18, 19]. Most of the papers covered in this literature review present applications used to verify the authority for issuing, keeping, and distributing students' educational certificates. These works focus on blockchain applications that maintain student details and share the degree certificates and mark statements (learning outcomes) earned by students; the applications concentrate on students' evaluation and academic abilities [20]. Some articles present blockchain-based applications used by companies to verify students' academic achievements and professional skills [21, 22].


Table 2 Sum-up of the comparison of various applications, implementation mechanisms, and challenges

Title | Authors | Implementation of the application benefits | Challenges in future process
A Distributed System for Educational Record, Reputation and Reward | Sharples, M., Domingue, J. | Micropayments; Kudos currency to access records | No separate verification service; vulnerable to proofing attacks
Learning outcome and meta-diploma | Bin Duan, Ying Zhong, Dayu Liu | POA consensus mechanism (Proof of Accreditation) | Lack of consensus mechanism; no clear picture of authenticity
Using blockchain as a tool for tracking and verification of official degrees | Miquel Oliver, Joan Moreno, Gerson Prieto, David | Back-end service through the key | Loss of key requires a new wallet; student is responsible for saving it in a secure vault
How Universities Can Incorporate an Efficient Blockchain education program | Rajarshi Mitra | Blockchain Education Network (BEN) to Kerala Blockchain Academy (KBA) | University curriculum only via partnerships; no clear picture about authenticity
Blockchain for Student Data Privacy and Consent | Gilda, S., and Mehrotra, M. | Hyperledger Fabric and Hyperledger Composer; nested authorization | Lack of consensus mechanism; the certificate is vulnerable to manipulation
Certificate Verification System using Blockchain | Nitin Kumavat, Swapnil Mengade, Dishant Desai | IPFS; SHA-256 algorithm; Ethereum platform | A university can be added only by the owner of the smart contract; no clear method of authority of experts
EUniCert: Ethereum based digital certificate verification system | Trong Thua Huynh, Dang-Khoa | EUniCert based on UniCoin digital currency | Requirements for an employer to verify the certificate; a student cannot authorize; no clear method of authenticity
A Blockchain-Based Accreditation and Degree Verification | Aamna Tariq, Hina Binte Haq, Syed Taha Ali | Permissioned blockchain | Not for students or employers; needs a partnership with multiple universities; no clear method of authenticity (continued)


Table 2 (continued)

Title | Authors | Implementation of the application benefits | Challenges in future process
Secured University Results System using Block Chain Features | Prashik Thul, Tushar Raut, Kunal Yadav | Administrator and Student Administrator | Not for the employer; hides the privacy of the student; lack of consensus mechanism
Degree Verification over Blockchain | BlockchainTech Private Limited, Karachi, Pakistan | Proof of Existence (PoE) consensus mechanism | Vulnerable to proofing attacks; need for basic information security measures; no clear method of authenticity of parties

This article also covers the category of protecting learning resources from damage and unauthorized access. The projects studied in this review relate to academic certificate verification and learning-object protection, offering high data security, credit transfer, and low cost [23]. One paper gives clarity on academic competitions and on protecting the ownership rights of learning resources [24, 25]. Out of the thirty-five papers reviewed, this study found education-based applications that have already been built and are now in use. The review confirms that storing students' degree certificates and other records of performance in a blockchain provides high security and real-time access to those data at minimal cost [26, 27].

3.3 RQ3: How to Inspect and Find Innovative Ways to Propose a Replacement Technology? Through this survey, an academic certificate verification system based on blockchain technology is proposed. Figure 4 depicts the proposed model for a blockchain-based certificate verification system that prevents fake certificates. Universities and education institutes upload degree data into the blockchain network, and students are given a link to their degree data via a QR code. Whenever a student applies for a job, the employer can use this application to verify whether the degree is valid. This reduces processing cost and saves time, since an employer can verify certificates within a few minutes. Because the data recorded in blocks on the blockchain is immutable and transparent, the network overcomes fraudulent activity in the digital certificate verification process [28]. Blockchain innovation is thus ideal as a new framework to secure, share, and verify learning achievements [20]. In the case of accreditation, a blockchain


Fig. 3 Weakness of blockchain technology

Fig. 4 Model of blockchain-based certificate verification system

can keep a record of the issuer and receiver of each certificate, together with the document signature (hash), in a public database (the blockchain) that is identically stored on a large number of computers around the globe. Digital certificates that are secured on a blockchain in this way hold significant advantages over "traditional" digital certificates. It means that the system in Fig. 4 allows the signature of a document to be published without needing to publish the document itself, thereby preserving the privacy of the documents. In various situations, such as students' academic certificates, learning resources, and job applications, the proof and background details of a certificate need to be verified. In this model, the issued certificate is stored as encrypted


Table 3 Sample ledger entry

Transaction ID | Timestamp | Sender | Asset (digital certificate) | Receiver
# | dd-mm-yy hh:mm | Student, universities, education institute, online platform, issuer | Certificate details as encrypted code | Employer and students
# | dd-mm-yy hh:mm | Student, universities, education institute, online platform, issuer | Certificate details as encrypted code | Employer and students

Note that while this allows a certificate to be fully matched to an issuer or recipient, it does not protect against either the issuer or the recipient impersonating another person or institution. Preventing identity fraud will likely require public key registries that serve as verified lists of which individuals own which public keys, most likely maintained by vendors and public institutions as a service [14, 15]

code in a block created on the Hyperledger Fabric blockchain platform for permissioned institutes and organizations. Each block is created when a certificate is issued and contains the hash value, timestamp, owner details of that certificate, and so on; a sample ledger entry is shown in Table 3 [13]. Despite the advantages of using blockchain in academics, this research topic is still at a preliminary stage, and the adoption of norms and guidelines is essential to extend its use. Consequently, an educational institute deciding to adopt this technology must be careful to check the availability of blockchain experts, technical support, and developers to implement such an application.
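As an illustration of the ledger entry in Table 3 (a minimal sketch, not the paper's Hyperledger Fabric implementation), the following Python code issues a certificate as a hashed, timestamped entry and verifies a presented certificate against the ledger; the helper names issue_certificate and verify_certificate are hypothetical.

```python
import hashlib
import json
from datetime import datetime

ledger = []  # stand-in for the distributed ledger

def fingerprint(cert: dict) -> str:
    # Publish only the SHA-256 hash, not the certificate itself,
    # which preserves the privacy of the document.
    return hashlib.sha256(json.dumps(cert, sort_keys=True).encode()).hexdigest()

def issue_certificate(cert: dict, sender: str, receiver: str) -> str:
    tx_hash = fingerprint(cert)
    ledger.append({
        "transaction_id": tx_hash,
        "timestamp": datetime.utcnow().strftime("%d-%m-%y %H:%M"),
        "sender": sender,          # e.g., university or issuer
        "asset": tx_hash,          # certificate details as encrypted/hashed code
        "receiver": receiver,      # e.g., employer and student
    })
    return tx_hash

def verify_certificate(cert: dict) -> bool:
    # An employer recomputes the hash and checks it against the ledger.
    return any(entry["asset"] == fingerprint(cert) for entry in ledger)

cert = {"student": "Jane Doe", "degree": "B.Tech", "year": 2020}
issue_certificate(cert, sender="University registrar", receiver="Employer and student")
print(verify_certificate(cert))                          # True
print(verify_certificate({**cert, "degree": "M.Tech"}))  # False: a tampered copy fails
```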

4 Technical Challenges Today, most industries and application developers are looking to increase their productivity through blockchain technology because of its transparency, immutability, and accountability. Bringing this emerging technology to maturity requires technical experts, and there is still uncertainty about how blockchain fits business regulatory strategy. There are several technical issues in implementing blockchain technology. The first is security and reliability: because the code is open source, code changes, bugs, and newly exposed vulnerabilities may cause huge losses to users. Another issue is storage space: Bitcoin supports about seven and Ethereum about twenty transactions per second, so as block sizes grow, existing blockchains are not feasible hosts for huge volumes of data. Because of its transparency, users must maintain their hash keys in order to access their data directly, which is difficult to remember. Finally, the technology still needs to close the communication gap between blockchain application developers and users.


5 Conclusion This review analyzed practical applications based on blockchain technology and presented a comparison of the features of various educational applications. For researchers, blockchain offers challenging topics on which to focus more closely, such as identity management, document management, certificate verification, health care, insurance, e-voting, supply chain management, and property management. Even where physical anti-counterfeiting features reduce the chance of tampering, a blockchain solution helps in detecting and identifying culprits in the system, as well as facilitating fast and convenient authentication. Most institutes use QR codes to check the authenticity of academic achievements and to provide a digital trail for verifying certificates. The problems of the traditional centralized databases used in educational institutes are solved with the help of the new decentralized blockchain database. The application of blockchain technology in the education domain is highly beneficial, but this topic of research is still in its early stages, and rules are needed to enlarge its use. Even though blockchain technology faces many implementation challenges in academics, its advantages largely predominate over its disadvantages.

References
1. Bruce Dorris, J.D.: Report to the Nations: Global Study on Occupational Fraud and Abuse. Assoc. Certified Fraud Exam. (2018). https://doi.org/10.5121/ijnsa.2019.11502
2. https://www.indiatoday.in/education-today/featurephilia/story/how-students-and-employers-can-spot-and-eliminate-fake-degrees-1725931-2020-09-27
3. Huynh, T.T., Pham, D.-K.: EUniCert: Ethereum based digital certificate verification system. IJCNC J. (2019)
4. Gilda, S., Mehrotra, M.: Blockchain for student data privacy and consent. In: International Conference. ieeexplore.ieee.org (2018)
5. Mesropyan, E.: 21 Companies Leveraging Blockchain for Identity Management and Authentication, vol. 8, no. 4-2 (2018). ISSN 2088-5334
6. Perry, R.E.: Blockchain technology: from hype to reality (2017). https://doi.org/10.1016/j.jii.2020.100125
7. Piscini, E., Guastella, J., Rozman, A., Nassim, T.: Blockchain: Democratized Trust. Distributed Ledgers and the Future of Value. Deloitte University Press (2016)
8. Watanabe, H., Fujimura, S., Nakadaira, A., Miyazaki, Y., Akutsu, A., Kishigami, J.: Blockchain contract: securing a blockchain applied to smart contracts. In: Proceedings of IEEE International Conference on Consumer Electronics (ICCE), Jan 2016, pp. 467–468
9. MIT Media Lab: What we learned from designing an academic certificates system on the blockchain (2016)
10. Turkanović, M.: EduCTX: a blockchain-based higher education credit platform. IEEE Access 6, 5112–5127 (2018)
11. Durant, E., Trachy, A.: Digital Diploma debuts at MIT (2017). https://news.mit.edu/2017/mit-debuts-secure-digital-diploma-using-bitcoin-blockchain-technology1017
12. Donald, C.: 10 ways blockchain could be used in education. OEB Insights (2016). https://oeb-insights.com/10-ways-blockchain-could-be-used-in-education


13. Zheng, Z., Xie, S., Chen, X., Wang, H.: Blockchain challenges and opportunities: a survey. Int. J. Web Grid Serv. 14, 352–375 (2018)
14. Wang, S., Yuan, Y., Wang, X., Li, J., Qin, R., Wang, F.-Y.: An overview of smart contract: architecture, applications, and future trends. In: Proceedings of IEEE Intelligent Vehicles Symposium (IV), pp. 108–113 (2018)
15. Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., Christidis, K., De Caro, A., et al.: https://doi.org/10.1007/978-3-319-67729-3_17 (2018)
16. Konstantinidis, G., Siaminos, C., Timplalexis: Blockchain for business applications: a systematic literature review (2018). https://doi.org/10.1007/978-3-319-93931-5_28
17. Blockcerts: The Open Standard for Blockchain Credentials (2018)
18. David, M.: Will Blockchains Revolutionize Education? (2018)
19. Xu, B.: Exploring blockchain technology and its potential applications for education. J. Softw. Eng. Intell. Syst. (2018). ISSN 2518-8739
20. Fenwick, M.: Legal education in the blockchain revolution (2017)
21. Elizabeth, D., Alison, T.: Digital Diploma debuts at MIT (2017)
22. A distributed system for educational record, reputation and reward. In: Proceedings of the European Conference on Technology Enhanced Learning, Lyon, France, Sept 2016, pp. 490–496
23. Yang, X.: The application model and challenges of blockchain technology in education (2018)
24. Gräther, W., Kolvenbach, S., Ruland, R., Schütte, J., Torres, C., Wendland, F.: Blockchain for education: lifelong learning passport. In: Proceedings of 1st ERCIM (2018)
25. Ocheja, P., Flanagan, B., Ogata, H.: Connecting decentralized learning records: a blockchain-based learning analytics platform. In: Proceedings of ACM International Conference Series, pp. 265–269 (2018)
26. Sharples, M.: The blockchain and kudos: a distributed system for educational record, reputation and reward (2016). https://doi.org/10.1007/978-3-319-45153-4_48
27. Dinesh Kumar, K., Senthil, P., Manoj Kumar, D.S.: Educational certificate verification system using blockchain. Int. J. Sci. Technol. Res. 9(03) (2020). ISSN 2277-8616
28. Inamorato dos Santos, A.: Blockchain in education: European Commission's JRC report preview. In: Blockchain in Education Conference (2017). https://doi.org/10.2760/60649

Word Significance Analysis in Documents for Information Retrieval by LSA and TF-IDF using Kubeflow Aseem Patil

Abstract The capital investment in automated analysis of electronic documents has grown rapidly with the growth of text categorization and classification. In recent times, various works have addressed text mining and information retrieval. This paper proposes a new method for information retrieval using latent semantic analysis (LSA) and tf-idf, also known as term frequency-inverse document frequency. The proposed method identifies the most and least significant words in the sentences of a paragraph obtained from a document by splitting the sentences and making the most significant words a query for each sentence. If a word is of the highest priority, the document is saved to the drive as a star file. The star file consists of the highlighted less-significant words in the document and, above each such word's sentence, an array of higher-probability replacement words. The proposed system has been implemented using Kubeflow, a machine learning pipeline platform. Based on the analysis of the estimated parameters and the slope of the output graph, the model gave a solid probability of 0.9427, which corresponds to the accuracy of the system. Keywords Latent semantic analysis · TF-IDF · Bag of words · Word significance probability · Term frequency · Information retrieval

1 Introduction A number of internet-based and cloud-based research works have been developed with the advancement of computer and information technology, and this makes it difficult to search for and categorize the available research works on a specific issue. Therefore, it is important to regularly organize such large numbers of research papers by related topic, so users can quickly and conveniently locate the research papers that interest them. Usually, it takes time to locate academic papers on a particular subject. For instance, researchers spend a long time on the Internet searching for articles of interest and are exhausted because they cannot easily locate what they are looking for, because the


papers are not organized by their respective subjects. It is very difficult to quickly survey every research paper and group papers with similar subjects by content, considering that the relationships among the papers to be analyzed and classified are complex. For such a large number of research papers, an automated processing system is needed to identify them quickly and accurately.

In the information retrieval process, tf-idf is a numerical statistic that determines the importance of a word in a series or a corpus of text. TF-IDF is short for term frequency-inverse document frequency [1]. It is typically used to support information extraction, text mining, and user modeling. The tf-idf value grows proportionally with the number of times a word appears in a document and is offset by the number of documents in the corpus that contain the word. In general, tf-idf is one of today's most common schemes for weighting words. Search engines commonly use variants of the tf-idf weighting method as a key tool for calculating the importance of a document given a user query. Tf-idf can also be used effectively for filtering stop words in various subject fields, including text summarization and classification. Summing the tf-idf of each query term is one of the simplest ranking functions; much more sophisticated ranking functions are variants of this simple model.

The problem is solved by classifying the texts with a strong algorithm using machine learning and natural language processing. In text retrieval, the documents and the user query are written in natural language. Even though most of the attention in the information retrieval community has been paid to text retrieval, progress in natural language processing techniques has been minimal. When a person has to buy movie tickets, for example, a sentiment classifier can predict the general feeling about a movie rather than requiring the person to go through all the long reviews manually. With tf-idf, each term is weighted by dividing its frequency by the number of documents in the corpus containing the term, instead of representing a term by its raw frequency (number of occurrences) or its relative frequency (term count divided by document length). To prevent errors when evaluating texts, this weighting scheme deliberately down-weights the terms used most commonly across all documents. A term-document matrix can be used by LSA to describe the occurrences of terms in documents; this sparse matrix has rows corresponding to terms and columns corresponding to documents. A typical example of the weighting of the matrix elements is tf-idf: the weight of an element is proportional to the number of times the term appears in each document, with rare terms up-weighted to reflect their relative importance.

Machine learning on previously classified datasets is considered to be almost accurate among the supervised learning techniques [2]. These pre-classified datasets are often domain-specific, so a model may work for only a specific domain. Such datasets are first converted to macro models in which documents are represented as vectors, and the intermediate representations are then fed into the machine learning algorithm. LSA mitigates homonymy, one of the major problems in information retrieval, by assuming the data is partly distorted by the randomness of word choice. In this paper, I have also introduced latent


semantic analysis, which acts as a guide for determining the relevance and significance of words in the document. If a word turns out to be weak, a suggestion is provided as a replacement for that specific word in that sentence. Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, for examining relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms.

2 Literature Review Kubeflow is a free, open-source machine learning platform designed to orchestrate complicated workflows (e.g., data analysis and model training using TensorFlow or PyTorch) as machine learning pipelines on Kubernetes. Kubeflow provides a toolkit built on Kubernetes to deploy, scale, and manage complex end-to-end ML stacks. Features like running JupyterHub servers that enable multiple users to contribute to a project concurrently have become an indispensable asset of Kubeflow. Kubeflow's prime features are comprehensive project management and in-depth monitoring and analysis. Kubeflow Pipelines is a platform for building, deploying, and maintaining multi-step ML workflows based on Docker containers. Kubeflow provides a number of components for building ML training, hyperparameter tuning, and multiplatform serving workloads. This is of utmost significance while preprocessing the data at hand and training the model. Kubeflow is harder to use than TensorFlow in some respects; TensorFlow has many advantages, such as very high performance, deep flexibility, and power, but it is very hard to debug compared with Kubeflow. Debugging plays a major role when handling models based on natural language processing and machine learning algorithms. Here, we shall use Kubeflow rather than TensorFlow for training, testing, and deploying the model.
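To make the pipeline idea concrete, here is a minimal sketch of a Kubeflow pipeline, assuming the kfp v1 SDK; the component and pipeline names are illustrative and not taken from the paper.

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def preprocess(text_path: str) -> str:
    # Placeholder step: tokenize sentences / build the bag of words
    return text_path

def score_words(features: str) -> str:
    # Placeholder step: compute tf-idf and LSA-based significance scores
    return features

preprocess_op = create_component_from_func(preprocess, base_image="python:3.9")
score_op = create_component_from_func(score_words, base_image="python:3.9")

@dsl.pipeline(name="word-significance", description="TF-IDF + LSA scoring pipeline")
def word_significance_pipeline(text_path: str = "doc.txt"):
    pre = preprocess_op(text_path)
    score_op(pre.output)

if __name__ == "__main__":
    # Compile to a YAML spec that can be uploaded to a Kubeflow cluster
    kfp.compiler.Compiler().compile(word_significance_pipeline, "pipeline.yaml")
```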

3 Implementation of the System The implementation of the system can be illustrated using a block diagram, as shown in Fig. 1. In the proposed system, a document is given as input to the model in either .txt or .pdf form. It is first preprocessed and then sent to the sentence tokenization phase, where each sentence is written separately on its own line. The sentences are then split using a function doc_sent_tokenize(), which separates the words of each sentence by commas and displays them as an array of words called the bag of words. After the bag is made, the term frequency of every word in the sentence is calculated, along with the inverse document frequency of the entire


Fig. 1 Detailed block diagram of the implementation of the proposed system

document and the probabilistic frequency, used as the mutual information of the document for further information retrieval. The mutual information is then sent to the tf_idf() function for further term-frequency calculations using the probability value received from the previous phase, and the result is passed to the LSA for further analysis. I have introduced latent semantic analysis (LSA), which acts as a combination of reader, parser, and computational tool; the LSA is the central root of the system. It finds relationships between terms: some words may sound the same yet have different meanings (polysemy). It finds the possible similarity between the term at hand and the terms in the dataset by means of its singular value decomposition (SVD) matrix value, and it acts as a tool for comparing documents in a low-dimensional space using that SVD value. Based on the comparison, it chooses the appropriate word for the highlighted weak/strong word in the sentence. The latent semantic analysis tool has three major phases, namely the semantic space, the dimensionality reduction phase, and the information retrieval phase. In the semantic space, words similar to the highlighted weak/strong word of the sentence are found in the dataset on which the model was trained. If the word significance probability is less than 0.85, the word is declared weak, and an array of stronger words with values greater than 0.85 is displayed above the highlighted word in the sentence. In the dimensionality reduction phase, the reduced dimensionality preserves the most significant aspects of the data while analyzing the information in order to locate latent meaning in the term data. The latent semantic space is defined by singular value decomposition (SVD), which makes it possible for


Fig. 2 Reconstructed diagram of latent semantic analysis (LSA) for the system

any rectangular matrix to be factorized into three distinct matrices. Finally, in the information retrieval phase, the words are read from the document after each sentence has been split. The suggested word in the array then replaces the weak word [3]. After all the weak words are replaced with the suggested words from the array displayed above each highlighted word, the overall word significance probability is calculated, so as to send the actual and earlier results as information to the concept extraction phase. The block diagram in Fig. 2 shows the latent semantic analysis process for this system. A concept describing the meaning of a keyword or sentence in a text or a web page can be given by the tf-idf function. The tf-idf is the product of two measures, term frequency and inverse document frequency, and the precise values of the two numbers can be calculated in various ways. A few numerical statistics are used to calculate word significance in the document and for information retrieval. They are as follows:
1. Term frequency (TF)
2. Inverse document frequency (IDF)
3. Probabilistic frequency (P(·)).

3.1 Term Frequency (TF) In text mining, NLP, and information retrieval, a term frequency (TF) value shows how often a word is used in a collection [4]. Terms refer to words or phrases in the natural-language scope. Since documents differ in length, a term may


appear more often in longer documents than in shorter ones. Term frequency is therefore sometimes divided by the total number of words as a normalization. Term frequency indicates the importance of a specific word within the overall document, and this value is usually stated together with the inverse document frequency. For example, the term frequency value is consulted when analyzing keyword density: the frequency of a keyword is expressed relative to the document length. At the same time, logarithms ensure that words that occur very often are not weighted too heavily, as are terms that are very frequent in the language generally (e.g., conjunctions, adjectives, adverbs). The ranking of subject terms can be strengthened by deleting all stop words in the document's text (such as 'the', 'is', 'are') before calculating the frequencies. The raw count of a term in a document gives the term frequency tf(x, doc). A few variants are used when finding the value of tf(x, doc):
1. Binary 'frequency': tf(x, doc) = 1 if x occurs in doc, else 0.
2. Term frequency adjusted for document length: tf(x, doc) divided by the number of words in the document.
3. Log-scaled frequency: tf(x, doc) = log(1 + f(x, doc)).
4. Augmented frequency: the raw frequency divided by the raw frequency of the most common term in the document, to avoid a bias toward longer documents.

Let the corpus of documents be defined as n; the total number of documents in corpus n is |d|, and |{doc ∈ d : x ∈ doc}| denotes the number of documents that contain the term x. The term frequency can then be defined as shown in Eq. 1:

tf(x, doc) = (number of occurrences of the term x in the document doc) / (total number of words in document doc)   (1)
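A minimal sketch of Eq. 1 in Python (an illustration, not the paper's tf_idf() implementation):

```python
from collections import Counter

def term_frequency(term, doc):
    # Eq. 1: occurrences of the term divided by the total word count
    counts = Counter(doc)
    return counts[term] / len(doc)

doc = "the bomb explosion was major the damage was major".split()
print(term_frequency("major", doc))  # 2/9 ≈ 0.222
```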

3.2 Inverse Document Frequency (IDF) The IDF is a weight that shows how commonly a word is used across documents: the more documents it appears in, the lower its score, and the lower the value, the less important the word is. For example, the word 'the' will have a very low IDF score in almost any English text because it carries very little 'subject' information; the term 'coffee' therefore has a higher IDF value than the term 'the'. The inverse document frequency measures how much information the word provides, i.e., whether it is common or rare across the documents at hand. In general, IDF is used to raise the weight of words used uniquely in a document, in the hope that high-information words that characterize the document are boosted while words that are not very significant are suppressed [3, 5, 6]. The DF (document frequency) specifies how often a certain word appears


in the document set: it measures the frequency of the term across multiple documents, not just one text. Words with a high DF value matter less, as they are commonly used in all documents. The IDF, the reverse of the DF, is used across all documents to determine the value of terms; it calculates the information retrieval value of term x. When we determine the IDF, it is very small for most stop words (because stop words like 'is', 'the', and 'and' are found in almost all documents, and N/df gives such a term a very weak value), where df is the document frequency factor. It is basically the logarithm of the inverse fraction of the documents that contain the word (dividing the total number of documents by the number of documents containing the term and then taking the log of that quotient), as shown in Eq. 2:

idf(x, d) = log( N / |{doc ∈ d : x ∈ doc}| )   (2)

3.3 Term Frequency–Inverse Document Frequency The tf-idf value is basically the product of the term frequency value and the inverse document frequency. It can be calculated as shown in Eq. 3:

tf-idf(x, doc, d) = tf(x, doc) · idf(x, d)   (3)
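Putting Eqs. 1–3 together, a small self-contained sketch (again illustrative, not the paper's code):

```python
import math
from collections import Counter

corpus = [
    "the sky is blue".split(),
    "the coffee is hot".split(),
    "the coffee and the cake".split(),
]

def tf(term, doc):
    return Counter(doc)[term] / len(doc)            # Eq. 1

def idf(term, docs):
    df = sum(1 for d in docs if term in d)          # documents containing the term
    return math.log(len(docs) / df) if df else 0.0  # Eq. 2

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)          # Eq. 3

print(tf_idf("coffee", corpus[1], corpus))  # ≈ 0.101: informative term
print(tf_idf("the", corpus[1], corpus))     # 0.0: appears in every document
```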

A term gets a high tf-idf weight from a high term frequency (in the given text) and a low document frequency across the whole document set; the weights thus tend to filter out common terms. Because the ratio inside the idf's log function is always greater than or equal to 1, idf (and tf-idf) are greater than or equal to 0. As a word appears in more documents, the ratio inside the logarithm approaches 1, bringing idf and tf-idf closer to 0. The idea behind tf-idf also extends beyond terms. The notion of idf was originally applied to citations: it was argued that 'if two documents share a very uncommon citation, this should be weighted more strongly than a citation made by a wide range of documents.' It was also extended to 'visual words' for matching objects in videos and to whole sentences. In those cases, however, the idea of tf-idf did not prove more powerful than a basic tf scheme (without idf); applying tf-idf to citations showed no improvement over a simple citation-count weight with no idf component.

3.4 Probabilistic Frequency Sparse data is one of the major concerns in statistical analysis and corpus linguistics. If the sample used to estimate a language model’s parameters is small, several potential


language events that occur in real data may never appear in the sample. To make sure that sparse data is handled accurately, we find and use the probabilistic frequency value. Maximum likelihood estimation maximizes the likelihood of observed events and assigns zero probability to unseen events [7, 8], which makes it inadequate for direct estimation of P(x|d). One way of removing the zero probabilities is to combine the maximum likelihood model P(x|d) with a less sparse model, such as the marginal P(x); a linear combination of the two probabilities (linear interpolation) yields a different probability function. Determining how likely the term x is as the relative document frequency, i.e., the fraction of documents 'doc' that contain it, gives the equation shown in Eq. 4:

P(x|d) = |{doc ∈ d : x ∈ doc}| / N   (4)

where idf = −log P(x|d). In other words, the inverse document frequency is the logarithm of the inverse relative document frequency. This probabilistic interpretation takes the same form as idf itself. Applying these information-theoretic notions to information retrieval problems, however, leads to difficulties in identifying the correct event spaces for the necessary probability distributions: not only documents, but queries and terms must be taken into account. The term frequency and the inverse document frequency can be described using information theory, which helps explain why their product measures the collective information content of a document. The distribution assumption can be formulated as shown in Eq. 5:

P(doc|x) = 1 / |{doc ∈ d : x ∈ doc}|   (5)

This equation is basically the heuristic version underlying the tf-idf formula. Applying conditional entropy to Eq. 5 and taking logarithms, we get (reconstructed consistently with Eq. 7):

H(C|D = x) = log( |{doc ∈ d : x ∈ doc}| / |d| ) + log |d| = −idf(x) + log |d|   (6)

Here, H(·) denotes conditional entropy, and C and D are random variables of a similar nature: D represents the terms drawn from the document, whereas C represents the terms from the dataset used to pre-train the model [2, 8]. Consider the mutual information M between the documents and the terms, which is the entropy of the documents minus the conditional entropy found in Eq. 6, as seen in Eq. 7.


M(D; C) = H(C) − H(C|D) = Σ_x p_x · idf(x)   (7)

Here, p_x represents the unconditional probability of the term before conditioning on the document. The final phase of finding the required probability is to expand the probability from Eq. 7, as shown in Eq. 8:

M(D; C) = Σ_{x,doc} p_{x|doc} · p_doc · idf(x) = Σ_{x,doc} tf(x, doc) · (1/|d|) · idf(x) = (1/|d|) Σ_{x,doc} tf(x, doc) · idf(x)   (8)

The summation of tf-idf over terms and documents recovers the mutual information between documents and terms, taking into account the basic characteristics of their joint distribution; each tf-idf value is therefore attached to a term-document pair (x, doc). This equation helps us find the relevance and significance probability of each word in every sentence of the input document. For example, for the word 'major', there may be several words with a higher significance probability. The system calculates the probability of the replacement terms for 'major' in the sentence 'The bomb explosion was major.', along with a match score for each of the close words found, as seen in Fig. 3; a small numerical sketch of Eq. 8 follows the figure.

Fig. 3 Training the model on the words in the dataset by using the word ‘major’
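As a numerical check of the identity in Eq. 8 (a sketch under the assumption of uniform document probability p_doc = 1/|d|, not the paper's implementation):

```python
import math
from collections import Counter

docs = [
    "the bomb explosion was major".split(),
    "the storm caused major damage".split(),
    "a quiet day in the town".split(),
]

def idf(term):
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df)

def tf(term, doc):
    return Counter(doc)[term] / len(doc)

# Right-hand side of Eq. 8: (1/|d|) * sum over all term-document pairs of tf*idf
vocab = set(w for d in docs for w in d)
mi = sum(tf(x, doc) * idf(x) for doc in docs for x in vocab) / len(docs)
print(f"M(D;C) ≈ {mi:.4f}")  # mutual information between documents and terms
```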


3.5 LSA with Dimensionality Reduction LSA is one of NLP's most common techniques for the mathematical analysis and interpretation of text. LSA is an unsupervised learning method based on two key elements:
1. The distributional paradigm, which says that terms with similar meanings often occur together.
2. Singular value decomposition (SVD).

The dimensionality reduction matrix is a sparse matrix whose rows are the words and whose columns correspond to the documents [6]. LSA may use a term-document matrix that records the occurrences of terms in documents. The term frequency-inverse document frequency (tf-idf) is a typical weighting of the matrix elements: the weight of an element is directly proportional to the number of times the term appears in each document, with uncommon terms up-weighted to reflect their relative importance. Such a matrix is also implicit in conventional syntactic and semantic models, though it is not usually represented explicitly as a matrix, since those models do not always exploit its statistical relationships. After constructing the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There can be several reasons for this approximation:
1. The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix serves as an estimate.
2. The original term-document matrix is believed to be noisy, so the approximation acts as a de-noised version.
3. The original term-document matrix is considered overly sparse relative to the 'true' matrix: the original matrix lists only the words actually present in each document, whereas all terms related to each document (generally a much larger set, because of synonymy) are of interest.

Let P and Q be two orthogonal matrices and β the diagonal matrix of singular values; the singular value decomposition of the matrix X can then be written as shown in Eq. 9 (the transpose on Q follows from Eqs. 10 and 11):

X = PβQ^T   (9)

The matrix products give us correlations between words and between documents, which help find the equivalent synonyms for the term to be located in a sentence of the document, as seen in Eqs. 10 and 11.

XX^T = (PβQ^T)(PβQ^T)^T = PβQ^T Qβ^T P^T = Pββ^T P^T   (10)


X^T X = (PβQ^T)^T (PβQ^T) = Qβ^T P^T PβQ^T = Qβ^T βQ^T   (11)

By minimizing the approximation error and truncating the decomposition to the k largest singular values, we get an approximation X_k of the decomposition matrix that helps us find the most similar words, with the highest matching scores, from the dataset for rank 'k', as can be seen in Eq. 12:

X_k = P_k β_k Q_k^T   (12)
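A compact sketch of the truncated SVD in Eq. 12 using NumPy (illustrative; the paper's actual matrix construction is not shown, and the toy values below are made up):

```python
import numpy as np

# Toy term-document matrix X (rows = terms, columns = documents),
# e.g., tf-idf weights.
X = np.array([
    [0.9, 0.0, 0.1],   # "major"
    [0.8, 0.1, 0.0],   # "severe"
    [0.0, 0.7, 0.9],   # "coffee"
])

P, s, Qt = np.linalg.svd(X, full_matrices=False)  # X = P @ diag(s) @ Qt

k = 2                                             # keep the k largest singular values
Xk = P[:, :k] @ np.diag(s[:k]) @ Qt[:k, :]        # Eq. 12: X_k = P_k β_k Q_k^T

# Term similarity in the latent space: rows of P_k β_k, compared by cosine
term_vecs = P[:, :k] * s[:k]
norms = np.linalg.norm(term_vecs, axis=1)
sim = term_vecs @ term_vecs.T / (norms[:, None] * norms[None, :])
print(np.round(sim, 2))  # "major" and "severe" come out highly similar
```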

4 Results The system was tested using a random documentary text from the Internet as input; the chosen document contains documentation-style wording, as shown in Fig. 4. The system uses the dataset provided before training to examine each word's significance probability and to display an array of words wherever a word is less significant. In Fig. 5, we can see that the system has examined the words in the file and has added an array of words (bag of words) above each word highlighted in yellow (the words of low significance). We can also see in Fig. 6 that the probability rate is highest in the cluster at the very top right of the graph. By

Fig. 4 Random document file used to test the model using the pre-trained dataset


Fig. 5 Output of the system for the random documentary that was added as the input (Fig. 4.)

Fig. 6 Based on the output of the random documentary, a graph is plotted based on the training of 850 words with the document’s words given as the input


measuring the slope of the graph, the accuracy of the system turned out to be 0.9427, higher than expected (the expectation was > 0.90).

5 Conclusions For matching words within a given list of documents, tf-idf is an easy and simple algorithm. From the data obtained, we can see that tf-idf returns words that are highly relevant to a specific query. If a user enters a paragraph or supplies a document as input (.pdf or .txt), tf-idf with LSA will find terms that have a higher significance probability than each of the words in each sentence of that paragraph. In addition, tf-idf encoding is straightforward, making it a suitable basis for more complex algorithms and query-based information retrieval. In future work, I shall implement a more sophisticated approach, a genetic algorithm based on genetic experimentation, replication, duplication, and reproduction, which has shown better results than the basic tf-idf weighting scheme. I will combine this genetic algorithm with the present system as a back pipeline, so as to obtain more accurate and precise readings than the developed system currently produces.

References
1. Wang, N., Wang, P., Zhang, B.: An improved TF-IDF weights function based on information theory. In: 2010 International Conference on Computer and Communication Technologies in Agriculture Engineering, Chengdu, pp. 439–441 (2010). https://doi.org/10.1109/CCTAE.2010.5544382
2. Huang, X., Wu, Q.: Micro-blog commercial word extraction based on improved TF-IDF algorithm. In: 2013 IEEE International Conference of IEEE Region 10 (TENCON 2013), Xi'an, pp. 1–5 (2013). https://doi.org/10.1109/TENCON.2013.6718884
3. Shian, Q., Fayun, L.: Improved TF-IDF method in text classification. New Technol. Library Inf. Serv. 29(10), 27–30 (2013)
4. Zhu, D., Xiao, J.: R-tfidf, a variety of tf-idf term weighting strategy in document categorization. In: 2011 Seventh International Conference on Semantics, Knowledge and Grids, Beijing, pp. 83–90 (2011). https://doi.org/10.1109/SKG.2011.44
5. Patil, L.H., Atique, M.: A novel approach for feature selection method TF-IDF in document clustering. In: 2013 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, pp. 858–862 (2013). https://doi.org/10.1109/IAdCC.2013.6514339
6. Kuncoro, B.A., Iswanto, B.H.: TF-IDF method in ranking keywords of Instagram users' image captions. In: 2015 International Conference on Information Technology Systems and Innovation (ICITSI), Bandung, pp. 1–5 (2015). https://doi.org/10.1109/ICITSI.2015.7437705


7. Martineau, J., Finin, T.: Delta TFIDF: an improved feature space for sentiment analysis. In: Proceedings of the Third AAAI International Conference on Weblogs and Social Media (2009). https://www.aaai.org/ocs/index.php/ICWSM/09/paper/view/187/504
8. Behl, D., Handa, S., Arora, A.: A bug mining tool to identify and analyze security bugs using Naive Bayes and TF-IDF. In: 2014 International Conference on Reliability Optimization and Information Technology (ICROIT), Faridabad, pp. 294–299 (2014). https://doi.org/10.1109/ICROIT.2014.6798341

A Detailed Survey on Deep Learning Techniques for Real-Time Image Classification, Recognition and Analysis K. Kishore Kumar and H. Venkateswerareddy

Abstract Deep learning (DL) is a branch of machine learning (ML), and machine learning is the study of computer algorithms that construct a model from training data, often referred to as sample data, for prediction and decision-making. Artificial intelligence (AI) is a sub-branch of computer science (CS). Ordinarily, getting a computer to do something requires programming, but a model generated by machine learning algorithms can do what a program would do without explicit programming. Machine learning algorithms are used widely in various real-world applications such as e-mail filtering, computer networks, natural language processing, search engines, telecommunications, Internet fraud detection, and DNA sequence classification. Three types of learning algorithms exist: supervised, unsupervised, and reinforcement. ML is a widely used multidisciplinary field that applies various training models and algorithms to predict, classify, and analyse statistical data. This paper addresses deep learning techniques such as the single-shot detector (SSD), scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG), and many more. The main aim is to detect cybercrimes with the assistance of the above-mentioned techniques. Keywords Classification · Deep learning · Machine learning · Techniques · Feature selection · Crime identification

K. Kishore Kumar · H. Venkateswerareddy, Department of CSE, Vardhaman College of Engineering, Hyderabad, India. e-mail: [email protected]; [email protected]


1 Introduction Deep learning offers a new, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been applied successfully in various areas, it has lately entered the domain of crime detection as well [1]. Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning; the learning can be supervised, semi-supervised, or unsupervised [2], and can be categorized into three types as shown in Fig. 1. Deep learning uses multiple layers to extract hidden features from the given input data. For example, consider an image processing network with two kinds of layers, low-level and high-level: the low-level layers identify edges, while the high-level layers identify human faces and digits. As a ground-breaking form of AI that enables computers to tackle perceptual problems such as image and speech recognition, deep learning is increasingly making its way into the biological sciences as well. Deep learning methods, such as deep artificial neural networks, use numerous processing layers to discover patterns and structure in large data sets. Each layer learns a concept from the data that subsequent layers build on; the higher the level, the more abstract the concepts that are learned. Deep learning does not rely on prior data preprocessing and extracts features automatically. To use a simple example, a deep neural network tasked with interpreting shapes would learn to recognize simple edges in the first layer and then add recognition of the more complex shapes composed of those edges in subsequent layers. There is no rigid rule for the number of layers needed to constitute deep learning, but most specialists agree that more than two are required; the word deep refers to the number of layers [3]. Crime is one of the greatest and most overwhelming problems in our society, and its prevention is an important task. Enormous numbers of crimes are committed frequently, which requires keeping track of all of them in a database that can be used for future reference. The current challenge is maintaining a suitable crime data set and analyzing this knowledge to aid in predicting

Fig. 1 Deep learning classification: supervised, semi-supervised, and unsupervised


and mitigating wrongdoings. The purpose of this undertaking is to analyse a data set containing various crimes and to forecast the type of crime that may occur in the future based on different conditions. In this task, machine learning and data science techniques are used for crime prediction on a city crime data set. The crime data is extracted from the official portal of a city police force and comprises crime information such as location description, type of crime, date, time, latitude, and longitude. Before the model is trained, the data will be pre-processed, after which feature selection and scaling will be done so that accuracy will be high. K-nearest neighbour (KNN) classification and other algorithms will be tested for crime prediction, and the most accurate will be used for training; a minimal KNN sketch is given after Fig. 2. Visualization of the data set will be done in terms of graphical representations of many cases, for instance, the times of day at which crime rates are high or the months in which criminal activity is high. The purpose of this task is to show how machine learning can be used by law enforcement agencies to detect, predict, and solve crimes at a much faster rate, thereby reducing the crime rate. This is not confined to any particular country; it can be used in other states or countries depending on the availability of the data set. This article presents the single-shot detector (SSD), scale-invariant feature transform (SIFT), convolutional neural networks (CNN), histogram of oriented gradients (HOG), recurrent convolutional neural networks (R-CNN), and Fast R-CNN (Fig. 2; Table 1).

Fig. 2 Deep learning architecture applications in various fields


Table 1 Feature extraction techniques and object identification classification

S. no. | Method | Used in | Approach | Remarks
1 | Viola–Jones object detection framework based on Haar features | Computer vision and image processing | Machine learning approach | Feature defining first
2 | Scale-invariant feature transform (SIFT) | Computer vision | Machine learning approach | Feature defining first
3 | Histogram of oriented gradients (HOG) features | Computer vision and image processing | Machine learning approach | Feature defining first
4 | Region proposals (R-CNN, Fast R-CNN, Faster R-CNN, Cascade R-CNN) | Computer vision | Deep learning approach | End-to-end object detection
5 | Single-shot multibox detector (SSD) | Computer vision | Deep learning approach | End-to-end object detection
6 | You only look once (YOLO) | Computer vision and image processing | Deep learning approach | End-to-end object detection
7 | Single-shot refinement neural network for object detection (RefineDet) | Computer vision | Deep learning approach | End-to-end object detection
8 | RetinaNet | Computer vision and image processing | Deep learning approach | End-to-end object detection
9 | Deformable convolutional networks | Computer vision and image processing | Deep learning approach | End-to-end object detection

2 Literature Survey

2.1 Recognition of Face with the Help of CNN

A CNN-based approach has been used by the authors to recognize human faces. The proposed system uses a hybrid neural network and, compared with other face recognition approaches, has given the best results among its competitors [4].


2.2 Single-Shot Detector for Object Detection

For object identification, the single-shot detector (SSD) has given better results than two-stage object detectors [5]. Vehicle orientation can be tracked with the histogram of oriented gradients (HOG) [6]. Object detection may play a vital role in crime detection [7]. Deep learning algorithms can be used to detect city crimes [8], and crime can be detected and predicted with the help of deep learning methods [9]. Cybercrime is a major issue in the present situation [10]. Crime intentions can be identified with the help of deep learning approaches [11], and phishing websites can be identified with deep learning methods [12].

3 Related Work

3.1 Deep Learning Techniques

1. Convolutional Neural Network

In mathematics (specifically, functional analysis), convolution is an operation on two functions that produces a third function, written a∗b, expressing how the shape of one is modified by the other. The term convolution refers both to the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reversed and shifted; the integral is evaluated for all values of the shift, producing the convolution function [13]. Convolutional neural networks (CNNs) extract more features than other feature extraction techniques [4]. Figure 3 shows the recognition of a peacock with a convolutional neural network composed of many deep layers.

Procedure:
Step 1: Input, a peacock image
Step 2: Feature extraction with a learning method
Step 3: Output recognition
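Written out, the verbal definition above corresponds to the standard formula

$$ (a * b)(t) = \int_{-\infty}^{\infty} a(\tau)\, b(t - \tau)\, \mathrm{d}\tau $$

and, in the discrete two-dimensional form applied by CNN layers to an image I with kernel K,

$$ (I * K)(i, j) = \sum_{m} \sum_{n} I(m, n)\, K(i - m, j - n). $$

A minimal NumPy sketch of this operation follows; it is illustrative only, and note that most deep learning libraries actually compute cross-correlation and skip the kernel flip:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid'-mode 2-D convolution, the core operation of a CNN layer."""
    k = np.flipud(np.fliplr(kernel))      # flip, per the definition above
    kh, kw = k.shape
    oh = image.shape[0] - kh + 1          # output height
    ow = image.shape[1] - kw + 1          # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# A vertical edge kernel responds to edges, the kind of low-level
# feature a first CNN layer learns.
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)
image = np.random.rand(8, 8)
print(conv2d(image, edge_kernel).shape)   # (6, 6)
```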

2. Single-Shot Detector

It is helpful to understand object detection in general before learning about the single-shot detector. Object detection is a computer technology related to computer vision and image processing that deals with detecting, in digital images and videos, instances of semantic objects of a specific class (e.g. people, buildings or vehicles). Well-studied domains of object detection include face


Fig. 3 Deep learning layered object detection

recognition and pedestrian detection. Object detection has applications in many areas of computer vision, including image retrieval and video surveillance. It can be used widely in many emerging areas such as computer vision and image processing; in computer vision in particular, it is used to track object motion, for example identifying a moving ball in football or cricket. Object identification is done with the help of an object class: the class of objects contains the features that help in detecting the object. Single-shot detection is also frequently referred to as one-shot learning, matching the acronym SSD; the name means that it attempts to learn every characteristic of an object in one pass, with few features compared to other learning algorithms [5].

Bayesian framework for single-shot detection:

$$ R = \frac{p(O_{fg} \mid I, I_t)}{p(O_{bg} \mid I, I_t)} = \frac{p(I \mid I_t, O_{fg})\, p(O_{fg})}{p(I \mid I_t, O_{bg})\, p(O_{bg})} $$

The particulars shown in Table 2 contrast the various object detection approaches: in the machine learning approach, the object features must be defined first before the model can be trained, whereas the deep learning approach is trained end to end.
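The ratio above is simply Bayes' rule applied to the two hypotheses "foreground object O_fg" and "background O_bg" given the image I and template I_t. A rough numerical illustration, as a minimal sketch assuming Gaussian pixel likelihoods; the distributions and priors here are invented purely for demonstration:

```python
from scipy.stats import norm

# Assumed class-conditional models: foreground pixels tend to be brighter
# than background ones for this toy template I_t.
p_fg, p_bg = 0.3, 0.7                # assumed priors p(O_fg), p(O_bg)
fg_model = norm(loc=0.8, scale=0.1)  # stands in for p(I | I_t, O_fg)
bg_model = norm(loc=0.3, scale=0.2)  # stands in for p(I | I_t, O_bg)

def likelihood_ratio(intensity):
    """R = p(O_fg | I, I_t) / p(O_bg | I, I_t) via Bayes' rule."""
    return (fg_model.pdf(intensity) * p_fg) / (bg_model.pdf(intensity) * p_bg)

for i in (0.2, 0.5, 0.9):
    verdict = "foreground" if likelihood_ratio(i) > 1 else "background"
    print(f"I={i:.1f}  R={likelihood_ratio(i):6.2f}  -> {verdict}")
```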

3. Histogram of Oriented Gradients

It is one of the object detection methods used in computer vision and image processing to detect an object in a local image patch; as its name suggests, it considers the gradients of intensity. The technique is similar to edge orientation histograms, scale-invariant feature transform descriptors and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy.

Table 2 Deep learning architectural approaches

S. no. | Deep learning architecture | Field applied
1 | Deep neural networks—DNN | Computer vision, audio recognition
2 | Deep belief networks—DBN | Machine vision
3 | Recurrent neural networks—RNN | Speech recognition
4 | Convolutional neural networks—CNN | Natural language processing
5 | Artificial neural networks—ANN | Machine translation

Algorithm:
Step-1 Computation of gradient
Step-2 Binning of orientation
Step-3 Identification of descriptor blocks
Step-4 Normalization of blocks
Step-5 Object orientation

The main idea behind the histogram of oriented gradients descriptor is that local object appearance and shape within an image can be described by the distribution of intensity gradients or edge directions. The image is divided into small connected regions called cells, and for the pixels within each cell a histogram of gradient directions is compiled; the descriptor is the concatenation of these histograms. For improved accuracy, the local histograms can be contrast-normalized by computing a measure of the intensity across a larger region of the image, called a block, and then using this value to normalize all cells within the block. This normalization achieves better invariance to changes in illumination and shadowing. The HOG descriptor has several key advantages over other descriptors. Since it operates on local cells, it is invariant to geometric and photometric transformations, except for object orientation; such changes would only appear in larger spatial regions. Moreover, as Dalal and Triggs found, coarse spatial sampling, fine orientation binning and strong local photometric normalization permit the individual body movements of pedestrians to be ignored as long as they maintain a roughly upright position. The HOG descriptor is therefore particularly suitable for human detection in images.
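The cell-and-block pipeline above can be seen end to end with scikit-image's hog helper; this is a hedged sketch whose parameter values are common defaults, not ones taken from the surveyed papers:

```python
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # sample grayscale image

# Gradient computation, orientation binning, block grouping and block
# normalization (the Algorithm steps listed earlier) happen inside hog().
features, hog_image = hog(
    image,
    orientations=9,             # orientation bins per cell histogram
    pixels_per_cell=(8, 8),     # the "cells"
    cells_per_block=(2, 2),     # the "blocks" used for normalization
    block_norm="L2-Hys",        # Dalal-Triggs style normalization
    visualize=True,
)
print(features.shape)           # flattened HOG descriptor vector
```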

4. Scale-Invariant Feature Transform (SIFT)

It is one of the feature detection algorithms used in computer vision that can identify local features in images; in short form it is written as SIFT. It has wide applications in object motion detection. Keypoints of objects are first extracted from a set of reference images and stored in a database; an object is recognized in a new image by individually comparing each feature from the new image to this database and finding candidate matching features based on the Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale and orientation in the new image are identified to filter out good matches. The determination of consistent clusters is performed rapidly by using an efficient hash-table implementation of the generalized Hough transform. Each cluster of three or more features that agree on an object and its pose is then subjected to further detailed model verification, and subsequently outliers are discarded. Finally, given the accuracy of fit and the number of probable false matches, the probability that a particular set of features indicates the presence of an object is computed. Object matches that pass all these tests can be identified as correct with high confidence (Fig. 4).

Fig. 4 SIFT stages: scale-invariant feature detection; feature matching and indexing; cluster identification; model verification; outlier detection


Algorithm:
Step-1 Scaled face can be detected
Step-2 Localization of key points
Step-3 Apply interpolation on detected points
Step-4 Avoid low contrast points and eliminate edge responses
Step-5 Orient estimation and apply key descriptor
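To make the keypoint-matching pipeline concrete, here is a hedged sketch using OpenCV's SIFT implementation with Lowe's ratio test; the file names and the 0.75 threshold are illustrative assumptions, not values from this survey:

```python
import cv2

# Hypothetical inputs; replace with real reference and scene images.
img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Candidate matches by Euclidean distance between descriptor vectors
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)

# Lowe's ratio test discards ambiguous matches before the cluster and
# model verification stages would run.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate matches survive the ratio test")
```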

5. R-CNN

R-CNN [14] was introduced to avoid the issue of evaluating a very large number of regions: for object detection, only selected regions are considered, and a search algorithm is used to propose them (Fig. 5).

Algorithm:
Step-1 Identify selective regions out of huge regions
Step-2 Apply an algorithm to combine similar regions
Step-3 Produce the desired object with the help of the selected regions
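For illustration, a hedged sketch of Steps 1 and 2 using the selective search implementation shipped with opencv-contrib (the input file name is a placeholder); in R-CNN, each surviving region would then be cropped, resized and classified by a CNN:

```python
import cv2  # requires the opencv-contrib-python package

img = cv2.imread("scene.png")   # hypothetical input image

# Selective search merges similar neighbouring regions into proposals
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()
rects = ss.process()            # (x, y, w, h) region proposals

proposals = rects[:500]         # keep only the first few hundred
print(f"{len(rects)} raw proposals, keeping {len(proposals)}")
```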

Fig. 5 R-CNN representation


4 Crime Detection with Deep Learning Techniques

As deep learning is a renowned innovation, it is used in various applications; one such application is crime detection and prevention. The idea behind many of these practices is that crimes are usually not random: it only takes the ability to sift through an immense amount of data to discover patterns that are useful to law enforcement. A few years ago, this kind of data analysis was technically difficult, but recent advances in AI have made it possible. Crime detection effectively uses existing data from crime scenes and criminals to evaluate patterns; by reviewing this data on past offences, one can predict when and where future misbehaviour is most likely. The increasing use of advanced frameworks to track crimes will accelerate the way offences are detected and foreseen. Crime analysis is an important application area for deep learning, since it involves big data that must be handled efficiently. A proposed solution for this is data mining techniques. Computerized data collection has fostered the use of data mining for intrusion and crime detection; indeed, banks, large enterprises, insurance agencies, casinos and so forth increasingly mine data related to their clients or employees with a view to identifying possible intrusion, fraud or even crime. In this review, techniques that can be used for detecting and predicting crime are investigated in order to limit the crime rate.

The increase of Internet access and the use of smart technologies create new attack methods for cybercrime against ordinary users and organizations. Malware is one of the riskiest security threats on the Internet today. To deal with malware attacks, new techniques have been developed to detect malware on our computer systems. This article gives an overview of the various kinds of malware programs used by hackers to break into computer systems and mobile phones for destroying, modifying or stealing personal information from ordinary users, followed by an analysis and comparison of the main proposed malware detection techniques based on deep learning that aim to ensure the safety, integrity and availability of users' data.

4.1 Various Malwares

1. Virus
2. Trojan horse
3. Logic bomb
4. Micro- and macrovirus
5. Worm and rootkit (Table 3)


Table 3 Malware identification deep learning models

S. no. | Model | Type | Target
1 | CNN—Convolutional neural networks | Malicious | PC operating system
2 | RNN—Recurrent neural networks | Malicious | PC operating system
3 | DNN—Distributed neural networks | Malicious | Mobile operating system
4 | DBN—Deep bidirectional neural networks | Malicious | PC operating system
5 | MDNN—Multimedia deep neural networks | Malicious | PC operating system

5 Conclusion and Future Scope

In this article, the deep learning methods desirable for crime detection were surveyed. The article focused on machine learning and deep learning algorithms and also differentiated ML algorithms from DL algorithms, covering deep learning techniques such as the single-shot detector (SSD), scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG) and many more. The main aim is to detect cybercrimes with the help of the above-mentioned deep learning techniques across the various types of computer attacks encountered. In future work, a specific DL method will be taken up from this survey to identify cybercrimes.

References

1. Kamilaris, A., Prenafeta-Boldú, F.X.: Deep learning in agriculture: a survey. Comput. Electron. Agric. 147, 70–90 (2018)
2. Kamilaris, A., Prenafeta-Boldú, F.X.: Deep learning in agriculture: a survey. Comput. Electron. Agric. 147, 70–90 (2018)
3. Schmidhuber, J.: Deep learning. Scholarpedia 10(11), 32832 (2015)
4. Lawrence, S., Giles, C.L., Tsoi, A.C., Back, A.D.: Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997)
5. Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S.Z.: Single-shot refinement neural network for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4203–4212 (2018)
6. Rybski, P.E., Huber, D., Morris, D.D., Hoffman, R.: Visual classification of coarse vehicle orientation using histogram of oriented gradients features. In: 2010 IEEE Intelligent Vehicles Symposium, pp. 921–928. IEEE (2010)
7. Saikia, S., Fidalgo, E., Alegre, E., Fernández-Robles, L.: Object detection for crime scene evidence analysis using deep learning. In: International Conference on Image Analysis and Processing, 11 Sept 2017, pp. 14–24. Springer, Cham (2017)
8. Chackravarthy, S., Schmitt, S., Yang, L.: Intelligent crime anomaly detection in smart cities using deep learning. In: 2018 IEEE 4th International Conference on Collaboration and Internet Computing (CIC), 18 Oct 2018, pp. 399–404. IEEE (2018)
9. Azeez, J., Aravindhar, D.J.: Hybrid approach to crime prediction using deep learning. In: 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 10 Aug 2015, pp. 1701–1710. IEEE (2015)
10. Nikhila, M.S., Bhalla, A., Singh, P.: Text imbalance handling and classification for cross-platform cyber-crime detection using deep learning. In: 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 1 Jul 2020, pp. 1–7. IEEE (2020)
11. Navalgund, U.V., Priyadharshini, K.: Crime intention detection system using deep learning. In: 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), 21 Dec 2018, pp. 1–6. IEEE (2018)
12. Yang, P., Zhao, G., Zeng, P.: Phishing website detection based on multidimensional features driven by deep learning. IEEE Access 7, 15196–15209 (2019)
13. Kim, P.: Convolutional neural network. In: MATLAB Deep Learning, pp. 121–147. Apress, Berkeley, CA (2017)
14. Wang, H., Li, Z., Ji, X., Wang, Y.: Face R-CNN. arXiv:1706.01061 (2017)

Pole Line Fault Detector with Sophisticated Mobile Application

K. N. Thirukkuralkani, K. Abarna, M. Monisha, and A. Niveda

Abstract Electric pole lines serve as a major means of power distribution, and it is important to ensure their proper working. The faults occurring in these electric pole lines are encountered by the lineman manually, without the use of any proper safety device. Hence, this project, a pole line fault detector with a sophisticated mobile application, uses digital disruptive technology to detect, identify and locate the faults occurring in these electric pole lines without human intervention. The system safeguards the lineman from industrial reverse current while working in industrial areas. It consists of a transmitter, a receiver and a mobile application for the detection, identification, intimation and rectification of faults, using compact and user-friendly devices fitted across multiple electric pole lines. The mobile application is used to monitor the rectification of the fault at the earliest.

Keywords Electric pole line · Reverse current · Mobile application · Lineman · Voltage · Current · Transmitter · Receiver

1 Introduction

Electricity is the most essential energy source in this contemporary civilization and has made people's lives a lot easier. For supplying electricity from the power station, transformers and electric pole lines are necessary. Electric pole lines run through several natural conditions, and various electrical faults are caused by lightning, birds, trees and so on. These faults should be found and rectified appropriately, as soon as possible, to preserve power supply quality. Fault detection is essential for the safe transmission of electric power through electric pole lines from transformers. To set up a new connection, reconnect any slackened wire or fix any other problem arising in the electric pole lines, the lineman has to put his own life in danger.

K. N. Thirukkuralkani (B) · K. Abarna · M. Monisha · A. Niveda Electronics and Instrumentation Engineering, Sri Ramakrishna Engineering College, Coimbatore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_31


Even in today's world, whenever there is a fault in the electric pole lines that causes us to lose power supply at home, we consumers have to contact the nearest electricity board to register a complaint regarding the fault. After verification, the officials allocate an electrician to rectify the fault in the particular area. The field electrician, after coming to the concerned area, finds it difficult to locate the exact pole line where the fault has occurred. While working in an industrial area, linemen are also unaware of the industrial reverse current, which is a great threat to those who work on these electric pole lines. This project comes as a one-step solution that addresses all these problems.

2 Literature Survey

Priya [1]: in this paper, the author explains the monitoring of electric pole lines using a set of current and voltage transformers. The values obtained are displayed on a cloud platform, continuously logged and saved for data retrieval at any time by users. Nandakumar [2] discusses the different power losses that occur in transmission lines and the methods by which these power losses are monitored; the paper also discusses potential strategies for minimizing the power loss. Wasiq Raza [3] deals with a protection device that operates using a password. It eliminates miscommunication between the maintenance staff and electric substation staff by providing them predefined passwords, which ensures the safety of the maintenance staff by avoiding unauthorized access. Aikaterini [4]: in this paper, the authors explain the ways in which unexpected re-energization of power lines occurs and the undesirable earthing that takes place due to improper monitoring of these power lines; the system uses an ultrasonic proximity detector board to sense the presence of voltage or current in the power lines. D. A. Doshi [5]: the authors explain ways of detecting the location of a fault using power line communication; only a specified and authorized person is able to access and rectify the faults. Aikaterini D. Baka [6] uses an ultrasonic motion detector to monitor noticeable shifts by electricians working on the electric lines; the device detects an unusual shift in sound frequency by comparing it with a predefined frequency. Prashant Kumar Arun [7] focuses on different methods for live-wire maintenance of transmission lines and discusses new precautionary measures to ensure workers' safety. Mawle [8] discusses the line methods being used for maintenance to avoid outages of the power lines. Cong Wei, Pan Zhencun and Zheng Gang [9]: in this paper, fault selection and location in a distribution system have two aspects when identifying a single-phase-to-ground fault. This fault can be solved by injected-current signal theory, which is done manually. To avoid high cost and manpower, the proposed injection signal


detectors are fixed along the line; a signal with a special frequency detects the fault and sends the result to the server by GSM SMS. IuHua [10] briefs the concepts of the smart grid and the Internet of things and points out smart grid applications; typical application links, including wind power prediction, condition monitoring of overhead transmission lines, power monitoring, smart home and asset management, are elaborated. Benazir Fateh and Manimaran Govindarasu [11]: here the wireless network is meant to deliver real-time physical measurement capability for ideal preventive maintenance. The hybrid hierarchical network architecture is composed of a mixture of wired, wireless and cellular technologies that guarantees low-cost real-time data monitoring. A line monitoring framework using WSN is indeed feasible with available technologies, despite asymmetric sensor data generation, unreliable wireless link behavior, non-uniform cellular coverage, etc.; the results show that wireless link bandwidth is the limiting factor for cost optimization. Pituk Bunnoon [12] proposes the state of the art of fault detection approaches in a power system, classified into two types of interest in fault detection; fault theory is explained and identified for the electricity power system, and the fault types are explained in detail. The detection of faults is classified by type of approach and case of fault. Bashier M. Tayeb [13]: when unpredicted faults occur, protective systems are required to forestall the propagation of the faults and safeguard the system against abnormal operation; the paper employs a backpropagation (BP) neural network as an alternative method for fault detection, classification and isolation in a cable system. Manohar Singh [14] discusses a technique to detect and classify the different shunt faults on transmission lines for quick and reliable operation of protection schemes, achieved by the application of evolutionary programming tools; PSCAD/EMTDC software is used to simulate different operating and fault conditions on a high-voltage transmission line. Yadaiah [15] explains methodologies for incipient fault detection in power transformers, both off-line and online. An artificial neural network is used to detect off-line faults, and wavelet transforms are used for online fault detection. Dissolved gas analysis to detect incipient faults has been improved using an artificial neural network and is compared with Rogers's ratio method on available samples of field information.

In all the above-mentioned papers, there was no proper technology or effective method used to detect the reverse current from industries. The proposed model has a handheld device that is provided to the lineman in order to locate the exact fault in the electric pole line. It also has a mobile app to intimate the exact location and the reason for the failure to the lineman.


3 Methodology

See Fig. 1.

Master Unit

The master unit consists of current and voltage transformers that monitor the current and voltage parameters. A bridge circuit is used to convert AC to DC. The MSP430 acts as the main controller board. LoRa modules continuously receive and transmit the data [1, 2]. GPS is used to track the location where the fault has occurred. The LCD module displays the voltage, current and battery power parameters. The buzzer is used for indication in case of abnormal voltage or current fluctuations. The ESP module is used as the communication medium between the controller and the mobile application (Fig. 2).

Fig. 1 Block diagram of the master unit

Fig. 2 Block diagram of the line unit


Receiver Unit

The line unit is the hand-held device that receives the data from the transmitter module and intimates the fault conditions to the lineman. This unit also consists of a controller and the LoRa module for long-range communication covering a distance of about 10 km. A mobile app is integrated by means of an ESP module, through which the lineman receives a message regarding the fault condition and the location where the fault has occurred. A buzzer is used to intimate the faulty condition.

3.1 Current and Potential Transformers

The current transformers used here are of industrial standard and can measure up to 50 A. These current sensors are connected to the R, Y and B phases and continuously sense the current values; they are connected to the transmitting device fitted to the electric pole line. The voltage transformer, also known as a potential transformer [4], is used for voltage sensing. The input voltage is 240 V and the output voltage is 12 V, calibrated at a transformer ratio of 115:6. These voltage transformers continuously sense the voltage values, which are displayed on the LCD.

3.2 Bridge Circuit

A bridge circuit is established for the conversion of the AC values from the voltage and current sensors to DC values, in order to feed them to the controller, which operates on a 5 V DC supply. The bridge circuit consists of a set of resistors and capacitors.

3.3 Controller

An MSP430 controller board is used as the main controller for the transmitting and receiving modules. This board is known for its powerful 16-bit RISC CPU, 16-bit registers and constant generators that contribute to maximum code efficiency. The controller is programmed to obtain the values from the input current and voltage sensors, compare them with the threshold values and perform the necessary fault intimation actions using the alerting devices [3, 4].
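As an illustration of this compare-and-alert logic, here is a minimal Python sketch using the voltage and current thresholds that appear later in Table 1 (the real firmware runs on the MSP430; this function is hypothetical):

```python
NOMINAL_VOLTAGE = 240.0  # volts, per Table 1
NOMINAL_CURRENT = 15.0   # amperes, per Table 1

def classify(voltage, current):
    """Map sensed values to the fault statuses listed in Table 1."""
    faults = []
    if voltage > NOMINAL_VOLTAGE:
        faults.append("High voltage")
    elif voltage < NOMINAL_VOLTAGE:
        faults.append("Low voltage")
    if current > NOMINAL_CURRENT:
        faults.append("High current")
    elif current < NOMINAL_CURRENT:
        faults.append("Low current")
    return faults or ["Normal"]

# A reading like this would trigger the buzzer and an app notification.
print(classify(212.0, 10.0))   # ['Low voltage', 'Low current']
```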


3.4 Liquid Crystal Display and Buzzer

The alerting devices used here are a buzzer and a 16×2 LCD. The different voltage and current values are displayed on the 16×2 LCD [1] and are continuously monitored. If the values go beyond the threshold values, an alerting sound is produced by the buzzer. These alerting devices are fitted in both the transmitter device and the handheld device.

3.5 LoRa Module

LoRa is an abbreviation of Long Range. It is a communication module used for ultra-long-distance spread-spectrum communication and can cover an average distance of about 10 km. This project uses LoRa as the main communication module; it acts as the communication bridge between the transmitter device and the hand-held device. LoRa is an emerging technology used for long-range communication without delay or hindrance.

3.6 ESP

The ESP module used here is a serial Wi-Fi wireless transceiver module used for the development of mobile applications. It serves as the communication medium between the controller and the mobile application. Located in the master unit, it receives the data from the line unit and transmits the fault with its exact location to the respective officer; how the fault occurring in the transmission line is displayed can be predefined. This module allows controllers to connect to a Wi-Fi network and make simple TCP/IP connections using Hayes-style commands. The active Wi-Fi connection of the ESP module makes it easy to display the respective current and voltage values in the mobile application.
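As a small illustration of those Hayes-style commands, here is a hedged sketch driving the ESP module from a PC over a serial port with pyserial; the port, SSID, password and server address are placeholders (AT+CWJAP and AT+CIPSTART are standard ESP8266 AT commands):

```python
import serial  # pyserial

esp = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2)

def at(cmd):
    """Send one AT command and return the module's raw reply."""
    esp.write((cmd + "\r\n").encode())
    return esp.read(256).decode(errors="ignore")

print(at("AT"))                                   # sanity check -> OK
print(at('AT+CWJAP="mySSID","myPassword"'))       # join the Wi-Fi network
print(at('AT+CIPSTART="TCP","192.168.1.10",80'))  # open a TCP connection
```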

3.7 GPS

The Global Positioning System is used to track the location. In this project, the location of the fault is displayed in the mobile application, which helps the lineman reach the particular area where the fault has occurred without any residential consumer having to inform the lineman about the fault.


4 Working

The pole line fault detector with sophisticated mobile application comes in three parts: the transmitter device, the handheld device and the mobile application. The transmitter device is connected to the electric pole lines with a set of current and voltage sensors monitoring the current and voltage values in all three phases R, Y and B, and displays the respective values on the transmitter device fitted to the electric pole lines. When a fault occurs in these electric pole lines, a message is sent to the nearest EB office using the mobile application (Fig. 3).

The lineman comes to the particular area to check on the fault, where he uses the handheld device to find the exact pole at which the fault has occurred. The transmitter device continuously monitors the electric pole lines, detects faults, intimates them through the mobile app on an Android platform and identifies the exact pole line using the handheld device [6–8]. If the lineman is working in an industrial area, he is notified about the industrial reverse current before he climbs the electric pole line to rectify the fault. A circuit breaker is also used to automatically trip the circuit. This is the overall working of the system.

Fig. 3 Flowchart of overall circuit


5 Results and Discussion

In Fig. 4, the transmitter device displays the street number as 1, the pole number as 1, the voltage across the R phase as 212 V, the Y phase as 206 V and the B phase as 215 V, and the current readings across the R phase as 10 A, the Y phase as 2.55 A and the B phase as 0 A. It also displays the fault condition in case of any fault occurrence.

Fig. 4 Transmitter device fitted to electric pole line

In Fig. 5, the device fitted to the transformer displays the voltage across the R phase as 224 V, the Y phase as 226 V and the B phase as 216 V. This device monitors any abnormal change in the incoming voltage from the substations; in case of any abnormal voltage, the hand-held device receives a message regarding the transformer fault.

Fig. 5 The device fitted to the transformer

This device is also connected to the circuit breaker, so if there is any flow of reverse current from the industries during generator on/off, it initiates a tripping operation so that the lineman is safeguarded from any kind of electric shock or mishap.

In Fig. 6, the mobile application displays the location of the fault as 100 ft road, Gandhipuram, Cbe; the electric pole where the fault has occurred is indicated using pole identification number 17, the voltage value is 440 V, which is normal, the current value is 20 A, and hence the fault condition is Nil. In Fig. 7, the location of the fault is displayed as Sanganoor road, Ganapathy, Cbe; the pole identification number is 2 and the type of fault detected is a low-voltage condition (Fig. 8; Tables 1 and 2).

6 Graphical Representation

• The prototype was tested for different values of voltage and current.
• The voltage values varied from 230 V to 240 V.
• The current values varied from 1.0 A to 2.6 A according to the load.
• The prototype was tested for different loads such as motor and bulb loads.
• The graph is plotted for voltage versus current, which exhibits a non-linear rise of values in the R, Y and B phases (see Fig. 9).

Fig. 6 Mobile app with fault intimation


Fig. 7 Location of the fault and fault condition

Fig. 8 Working model of the total project

7 Merits and Demerits

• It is an effective system in providing safety to the working staff.
• It can be easily installed.
• The initial cost of setting up the system is high.
• It requires frequent maintenance.


Table 1 Tabulation for fault condition and fault status

S. No. | Fault condition | Fault status
1 | 240 V | Normal voltage
2 | > 240 V | High voltage
3 | < 240 V | Low voltage
4 | 15 A | Normal current
5 | > 15 A | High current
6 | < 15 A | Low current

Table 2 Tabulation of current and voltage values of R, Y and B phases

S. No. | R phase voltage (V) | R phase current (A) | Y phase voltage (V) | Y phase current (A) | B phase voltage (V) | B phase current (A)
1 | 230 | 1.2 | 231 | 1.3 | 230 | 1.0
2 | 230 | 1.3 | 232 | 1.4 | 232 | 1.0
3 | 232 | 2.0 | 232 | 1.6 | 234 | 1.2
4 | 234 | 2.4 | 235 | 1.8 | 234 | 1.3
5 | 238 | 2.6 | 238 | 2.0 | 236 | 1.6
6 | 240 | 2.6 | 240 | 2.2 | 240 | 2.0

Fig. 9 Graphical representation
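The plot in Fig. 9 can be reproduced directly from the Table 2 readings; here is a minimal matplotlib sketch (the styling is illustrative, not the authors' original figure):

```python
import matplotlib.pyplot as plt

# (voltage V, current A) readings per phase, from Table 2
r = [(230, 1.2), (230, 1.3), (232, 2.0), (234, 2.4), (238, 2.6), (240, 2.6)]
y = [(231, 1.3), (232, 1.4), (232, 1.6), (235, 1.8), (238, 2.0), (240, 2.2)]
b = [(230, 1.0), (232, 1.0), (234, 1.2), (234, 1.3), (236, 1.6), (240, 2.0)]

for phase, label in ((r, "R phase"), (y, "Y phase"), (b, "B phase")):
    volts, amps = zip(*phase)
    plt.plot(volts, amps, marker="o", label=label)

plt.xlabel("Voltage (V)")
plt.ylabel("Current (A)")
plt.title("Non-linear rise of current with voltage (Table 2)")
plt.legend()
plt.show()
```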


8 Conclusion and Future Scope

The motivation behind this project was the difficulty faced by linemen in finding and locating faults in the electric pole lines. Transmission lines in our country are subject to faults due to various factors, and it is very difficult to identify and maintain them within a short interval of time. This may cause many electrically induced accidents and hence must be prevented. This project finds a solution for this problem by installing a set of units at various points on the electric pole lines and measuring the instantaneous values continuously. The fault can easily be detected, identified and located using this arrangement. The system can help the authorities maintain the transmission lines easily and can prevent accidents involving linemen by giving them accurate knowledge about the type of fault and the reverse current from industries. The mobile application serves as a bridge to attain the goodwill of the people by rectifying the fault within a short span of time with the intervention of the lineman and higher authorities. As future scope, the circuit breakers can be replaced with smart Wi-Fi-based circuit breakers that automatically switch on/off using values logged on a cloud platform. The system can also be interfaced with a voltage regulator that monitors the voltage drop across the power lines.

Acknowledgements We would like to extend our heartfelt gratitude to God almighty and the management of Sri Ramakrishna Engineering College, Coimbatore for giving us the resources to carry forward with our project. We thank our department of Electronics and Instrumentation Engineering for validating our project, and we would also like to thank the India Innovation Challenge Design Contest 2018 for recognizing our project.

References

1. Priya, P., Karupusamy, R., Ragava Raja, R., Pradeep, B.: Design of wireless electricity pole line multi-fault monitoring system. Int. J. Recent Technol. Eng. (IJRTE), 2277–3878 (Dec 2019)
2. Nandakumar, G.S., Sudha, V., Aswini, D.: Fault detection in overhead power transmission. Int. J. Pure Appl. Math. 118, 377–381 (2018)
3. Wasiq Raza, M.D., Naitam, A.: Electric lineman protection using user changeable password based circuit breaker. Int. J. Res. Sci. Eng. 3(2) (Mar–Apr 2017)
4. Bhanuprakash, M.E., Arun, C., Satheesh, A.: Automatic power line fault detector. Int. J. Adv. Res. Comput. Commun. Eng. (IJARCCE) 6(Special Issue 4) (March 2017)
5. Doshi, D.A., Khedkar, K., Raut, N., Kharde, S.: Real time fault detection in power distribution line using power line communication. Int. J. Eng. Sci. Comput. 6(5), 4834–4837 (2016)
6. Baka, A.D., Uzunoglu, N.K.: Prevention of injuries among electricians due to unexpected re-energization of power lines. Institute of Electrical and Electronics Engineers (IEEE), pp. 34–39 (Mar–Apr 2016)
7. Arun, P.K., Sharma, A.: New techniques and approaches for live wire maintenance of transmission lines. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET) 3(VI) (June 2015)
8. Mawle, P.P., Parate, K.B., Bulverde, P.G.: Overhead transmission lines live line maintenance technique based on condition monitoring in India. Int. J. Sci. Spiritual. Bus. Tech. 3(2), 2277–7261 (June 2015)
9. Wei, C., Zhencun, P., Gang, Z., Jing, H.: Study on single phase to ground fault site location method based on injection of signals and GSM short message. School of Engineering, Shandong University, P. R. China (November 2015)
10. Hua, I., Zhang, J., Fantao, L.: Internet of things technology and its applications in smart grid. TELKOMNIKA Indonesian J. Electric. Eng. 12(2) (2014)
11. Fateh, B., Govindarasu, M.: Wireless network design for transmission line monitoring in smart grid. IEEE Transactions on Smart Grid 4(2) (2013)
12. Bunnoon, P.: Fault detection approaches to power system: state-of-the-art article review for searching a new approach in the future. Int. J. Electric. Comput. Eng. 3(4), 553–560 (2013)
13. Tayeb, E.B.M., Rhim, A.A.A.O.: Transmission line faults detection, classification and location using artificial neural network. In: International Conference & Utility Exhibition on Power and Energy Systems: Issues and Prospects for Asia (ICUE), pp. 1–5 (2011)
14. Singh, M., Panigrahi, B.K., Maheswari, R.P.: Transmission line fault detection and classification. In: International Conference on Emerging Trends in Electrical and Computer Technology, Proceedings of ICETECT, pp. 15–22 (2011)
15. Yadaiah, N., Ravi, N.: Fault detection techniques for power transformers. In: IEEE/IAS Industrial & Commercial Power Systems Technical Conference, pp. 1–9 (2007)

Learning of Advanced Telecommunication Computing Architecture (ATCA)-Based Femto Gateway Framework

P. Sudarsanam, G. V. Dwarakanatha, R. Anand, Hecate Shah, and C. S. Jayashree

Abstract This paper presents a case study of designing an advanced telecommunication computing architecture (ATCA) framework using femtocells. A small cell is smaller than a conventional base station and is explicitly intended to extend the data capacity, speed and efficiency of a cellular network. These low-power radio access nodes can be deployed indoors or outdoors, and use licensed, shared or unlicensed spectrum. The femtocell gateway architecture is designed for a small range, from 10 m to a few kilometers. Small cells can be used to provide in-building and outdoor wireless service; mobile operators use them to expand their service coverage and additionally increase network capacity. Small cells are downsized, low-power, lightweight wireless access base stations that are typically found inside homes, workplaces and shopping centers. The small cell solution is made up of clusters of small cells, the access points intended for in-building home or enterprise use, and a small set of core network elements for interconnection between the small cell cluster and the legacy core network. The small cell cluster relies on a flat IP architecture that collapses several elements of the conventional UMTS network into one system; it provides both the NodeB and RNC functionalities. The small cell arrangement is made up of one or several clusters of small cells, in addition to a set of components shared between the groups. A "cluster" is defined as a group of small cells (up to 64,000) associated with a "small cell gateway" providing the interworking with the mobile packet core. ATCA describes a high-bandwidth, high-connectivity, chassis-based architecture designed principally to appeal to the

P. Sudarsanam (B) · G. V. Dwarakanatha · C. S. Jayashree BMS Institute of Technology and Management, Bangalore, India e-mail: [email protected] G. V. Dwarakanatha e-mail: [email protected] R. Anand CMR Institute of Technology Bengaluru, Bangalore, India e-mail: [email protected] H. Shah Nokia Solutions and Network, Bangalore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_32


telecommunication industry. AdvancedTCA, or ATCA, is an open standard from the PCI Industrial Computer Manufacturers Group (PICMG) that defines specifications for high-performance communication frameworks.

Keywords Small cell · Universal mobile telecommunication system (UMTS) · Cluster · Sun Netra X4250 · Advanced telecommunication computing architecture (ATCA)

1 Introduction

In telecommunications, a femtocell is a small cell; indeed, the most well-known type of small cell is the femtocell. Small cells are downsized, low-power, lightweight wireless access base stations that are typically found inside homes, workplaces and shopping centers. Small cells can be used to provide in-building and outdoor wireless service; mobile operators use them to expand their service coverage as well as to increase network capacity. Small cells are wireless transmitters and receivers designed to provide network coverage to smaller regions. The use of femtocells permits network coverage in places where the signal to the main network cells might be excessively weak. They can be integrated into a home residential gateway or deployed in a standalone format, and these small units are installed by consumers. The small cell solution is made up of a cluster of small cells, the access points designed for in-building home or enterprise use, and a small set of core network elements for interconnection between the small cell cluster and the legacy core network [1]. The small cell cluster is based on a flat IP architecture that collapses many functions of the traditional UMTS network into a device; it provides both the NodeB and RNC functionalities. The multi-standard small cell access points (home cell and enterprise cell) and the metro cell outdoor use 3GPP interfaces to connect to neighboring eNodeBs, other small cell access points and existing core network (CN) elements [2]. The home NodeB supports the radio service, while the gateway maintains the core network functionality. The femtocell adds security functionalities, and the design can be improved by including capabilities such as authentication, authorization and accounting in the femto gateway. The small cell gateway is deployed on the Sun Netra X4250 and advanced telecommunication computing architecture (ATCA) hardware platforms. ATCA is a small cell gateway hardware platform; the ATCAv2 hardware platform is available for both 3G and 4G support. It offers high bandwidth, high availability, efficiency and versatility, and is built for cost-effectiveness [3].

2 Literature Survey

LTE femtocells build on the 3GPP specifications to improve performance. They are designed with different modulation techniques, OFDMA being used in the downlink and SC-FDMA in the uplink, and both FDD and TDD topologies are available for designing TD-LTE and FD-LTE femtocell architectures. Current networks, such as 2G and 3G, are used by central devices, and their base stations are operated by RNCs. Within a single macrocell coverage area, there can be thousands of femtocells; thus, a single RNC would need to control on the order of hundreds to thousands or tens of thousands of femtocells. It is not possible to handle or control so many FAPs using the current network control entities. In order to boost the performance of the overall system, FAP connectivity for femtocell deployment should therefore also differ from current 3GPP network connectivity. Different femtocell architectures, such as the 3G (UMTS) femtocell architecture with the Iu-h interface, the LTE femtocell architecture and the WiMAX femtocell architecture, are used in current-generation networks [4].

3 Methodology

Femtocells are small, low-powered base stations. They are a cost-effective answer for filling any holes in coverage and rapidly expanding capacity in high-density, high-traffic regions. Femtocells cover a radius of 50–100 m (for example, a large house). The small cell solution is built around a large number of access points (metro indoor and outdoor, home, enterprise). The small cell access points are: the home cell, supporting 4 users with an approximate cell radius of 30–60 m; the enterprise cell, supporting up to 32 users with an approximate cell radius of 100–150 m; the metro cell indoor, supporting up to 32 users with an approximate cell radius of 100–150 m (based on the construction of the building in which it is located); and the metro cell outdoor, supporting up to 32 users with the extended cell range feature (cell radius up to 2 km). Femto deployments target capacity gains that significantly improve performance, bringing an enhanced quality of experience (QoE) to end users by enabling faster, more reliable data connections and higher data throughput on both W-CDMA and LTE networks. Femto solutions can be deployed almost everywhere with enhanced visual integration compared to traditional towers [5]. They can be mounted easily on walls, lamp posts, poles or even the side of a building, extending coverage and capacity to indoor locations. A femtocell base station uses a broadband Internet Protocol connection as "backhaul", namely for connection to the CN. Here, the operator should perform authentication using the unique ID between the femto gateway and the femto access point for security. To provide quality of service to the user, the operator must also guarantee that no packet loss or delay occurs. The femtocell reduces traffic and load on the network by using IP as backhaul; it also relieves the operator of high maintenance costs and the tower deployment burden. With these benefits, the operator can provide better service at a lower rate. One kind of broadband Internet Protocol connection is a digital subscriber line (DSL); the DSL connects a DSL transmitter-receiver ("transceiver") of the femtocell base station into the network. Since the LTE network has a flat architecture,


it is extended to the femtocell network. The multi-standard femtocell access points use 3GPP interfaces to connect to neighboring eNodeBs, other femto solution access points and existing mobile core network (CN) elements while offering a 3GPP-compliant air interface. The architecture integrates seamlessly into the operator's existing LTE and UMTS network, including interfaces with other elements as well as existing management systems, thereby incorporating standard interfaces with existing applications, portals and services. Femto solution access points interconnect with the core network elements through the addition of a SeGW, a security gateway between the femto solution access points and the W-CDMA/LTE core network elements [6], and through core network (CN) elements for connection to a standard core network via the PS/CS interfaces (applicable for 3G support) and the S1-MME interface.

3.1 3G Femtocell Network Architecture

Figure 1 explains how the 3G femtocell network architecture works [4]. The connection from the mobile or user equipment to the home NodeB happens over the Uu interface, the standard W-CDMA radio interface between the UE and the HNB. A femtocell is a mini base station that can be installed on the customer premises, whether residential or enterprise, as a standalone entity. The primary purpose of the femtocell is to give better cellular coverage within the building, which is a challenge without

Fig. 1 3G femtocell network architecture


femtocells. A femtocell network system consists of a femto access point (FAP), as part of a universal terrestrial radio access network (UTRAN), and a femtocell gateway to support and control user plane access for the femto user equipment. To provide communication between the femtocells and the main core network, the mobile backhaul system uses the user's broadband connection, whether digital subscriber line (DSL), cable, fiber or another such link, to provide a cost-effective and available connection for the femtocell that can be used as standard for all applications. The Iuh interface consists of RANAP, HNBAP and SCTP on the control plane, plus the user plane (UP) [7]. It provides the interface toward the FGW, i.e., the link between the femto and the femto gateway, and includes the HNB application protocol (HNBAP), whose deployment provides a high level of scalability. The Iuh interface provides both control plane and user plane functionalities; for the user plane, radio access bearer management, security and mobility management are the main functions. The HNB gateway connects a large number of HNBs with the Iu interface, which is divided into two interfaces: the IuPS interface toward the SGSNs, whose signaling is handled by the IuPS signaling gateway and whose user plane is handled by the packet gateway, and the IuCS interface toward the 3G MSCs, whose radio network control plane and user plane are handled by the voice gateway.

The core network is further divided into two domains, the circuit-switched (CS) domain and the packet-switched (PS) domain. Circuit-switched networks are connection-oriented: to transfer a message from the source to the destination, a path is first established over which the entire message is carried, and the message is sent and received in order. Circuit switching is implemented at the physical layer; it is more reliable, and data is processed and transmitted only at the source. Packet-switched networks are connectionless: to route data from source to destination, the message is divided into several units called packets, and these packets are routed individually from the source to the destination. Unlike circuit switching, no path needs to be established from source to destination; each packet is routed separately. Packet switching is flexible, packets for different data may follow different paths, it is designed for data transfer, and it is implemented at the network layer. Data is processed and transmitted not only at the source but at each switching station, and the packets of a message may be received out of order.

The operator's core network is used for the exchange of information between LANs; it provides a path and interconnects the network. The elements present in the circuit-switched part of the core network are the MSC, the VLR and the media gateway; in the packet-switched part, the elements are the gateway GPRS support node (GGSN) [8] and the serving GPRS support node (SGSN). The main functionality of the MSC is to handle the signaling messages, to set up, manage and tear down phone calls, to route voice calls, SMS and conference calls, and to release the end-to-end connection; during the call, it handles mobility management and handover.
The main functionality of the VLR is that it contains the information of the mobile subscriber


and also the location, i.e., the location of the subscriber within the service area; it is a database of mobile stations (MSs). The information is either received from the home location register (HLR) or collected from the MSC. Each subscriber who wants to connect is authorized to use the CN, and a main purpose of the VLR is to handle incoming calls. It contains the subscriber identity number and the subscriber's phone number; it also informs the HLR that a subscriber has arrived in a particular area and tracks the area in which the subscriber is located. The functionality of the media gateway is to redirect telephone calls from one part of the network to another. The radio network controller (RNC) delivers CS traffic, that is, voice and SMS, to the core network (the MSC) over the IuCS interface. The main responsibility of the RNC is to control all the NodeBs connected to it; radio resource management is carried out at the RNC, and it performs mobility management and encryption of data from and to the mobile before the data is sent. A security gateway is a compact solution that provides encryption for untrusted portions of communications networks, providing complete network protection through encryption. The security gateway protects user privacy and network security [9]. It is installed in the operator's network and establishes IPsec tunnels with the HNBs; these IPsec tunnels deliver all voice, messaging and packet data services between the HNB and the CN, and the gateway forwards the traffic onwards. The main features it provides are high availability, switchover completion within seconds in case of a fault, pluggable replacement components and in-service platform upgrades. When the user wants to communicate, the call goes from the femto access point through the security gateway and the femto gateway to the core network.

3.2 Small Cell Gateway on ATCA Platform

The small cell gateway system uses a carrier-grade ATCA architecture, with redundancy used in the design. Advanced telecommunication computing architecture (ATCA) hardware is a chassis-type system based on the standards specified by the PCI Industrial Computer Manufacturers Group (PICMG); it is the hardware platform for the small cell gateway product. The ATCAv2 platform is now available for both 3G and 4G support. The ATCAv2 platform is mainly composed of a rackable 11U-height shelf equipped with one pair of switching boards for internal communication and for external physical connectivity support using the RTM. ATCAv2 supports up to six clusters, each one managed independently of the others.


Fig. 2 ATCAv2 shelf diagram

3.3 ATCAv2

Figure 2 shows the ATCAv2 shelf. The shelf is fully PICMG 3.0 (ATCA) standard compliant, NEBS certified and ETSI compliant. It is a 14-slot ATCA housing with slots used for up to 12 single-board computers (e.g., Bono or Tome) supporting the HeNBGW [10] application: one pair of switching boards providing internal communication and external physical connectivity support using the RTM, called small cell gateway switching (BGS) boards, and one or several (up to six) application board pairs supporting the small cell gateway application, called small cell gateway application (BGA) [11] boards. There are slots for two switching boards (e.g., Malban10 or MalbanX) supporting I/O, with a dual-star 1G and dual-star 10G fabric backplane implementation. Two shelf management boards work in active/standby mode, with management based on standard IPMI and a radial IPMB bus implementation for high availability, plus an integrated alarm collecting blade. Top and bottom fan trays for push/pull cooling management provide 330 W/slot capacity and up to 70–80 CFM per slot.

3.4 ATCAv2 Shelf (Front)

Figure 3 is the front view of the ATCAv2 shelf [11]. It has 14 slots, and each cluster consists of two Bonos (in active and standby states).


Fig. 3 ATCAv2 (front) shelf diagram

M01 to M06: Application boards. M07 to M08: Switching boards. M09 to M14: Application boards. M15: Fan tray top. M16: Fan tray bottom. M17: Shelf manager top board. M18: Shelf manager bottom board. M19: Alarm board. M20: Front air filter.

3.5 ATCAv2 Shelf (Rear) Figure 4 shows the rear view of the ATCAv2 shelf: M61 to M66: unused. M67 to M68: RTM of BGS boards for external connectivity. M69 to M74: unused. M75: Alarm access. M80 to M81: Power entry module (capacitor).


Fig. 4 ATCAv2 (rear) shelf diagram

P01: rear safety ground. M90: rear air filter. The backplanes use a dual star topology.

3.6 Fabric and Base Dual Star Topology The fabric backplane runs at 10 Gbps and handles telecom traffic; the base interface runs at 1 Gbps and manages O&M traffic. Hardware management is performed through the shelf management board over an IPMI bus, and ATCA offers very good capabilities for shelf management. When an event such as overvoltage, overcurrent, or a temperature increase occurs, the shelf manager receives events from the field replaceable units (FRUs) and reacts, for example by increasing the fan speed or shutting down the board.
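The shelf manager's reaction logic can be pictured as a small rule table. The following Python sketch is illustrative only; the event names, severities, and actions are assumptions made for illustration, not the actual IPMI implementation.

```python
# Illustrative sketch of shelf-manager behavior on FRU events (hypothetical
# event names and actions; the real system reacts via standard IPMI).

def handle_fru_event(event_type: str, severity: str) -> str:
    """Return the action the shelf manager would take for an FRU event."""
    if event_type == "temperature_high":
        # The first response to a thermal event is to increase fan speed.
        return "increase_fan_speed" if severity == "minor" else "shutdown_board"
    if event_type in ("overvoltage", "overcurrent"):
        # Electrical faults shut the affected board down to protect the shelf.
        return "shutdown_board"
    return "log_only"

print(handle_fru_event("temperature_high", "minor"))  # increase_fan_speed
print(handle_fru_event("overvoltage", "major"))       # shutdown_board
```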


Fig. 5 Dual star topology diagram

3.7 Dual Star Topology Figure 5 shows the dual star topology: each hub board supports a channel connection to all node boards and to the other hub board, and each node board supports two channels (one toward each hub board) [12].

3.8 Bono Blades (Application Blades) Figure 6 shows a Bono blade on the ATCA hardware platform. The features of the Bono blade are: a dual 6-core x86-64 single-board computer with dual Intel Westmere CPUs @ 2 GHz; 32 GB of volatile memory (DDR3 ECC 1066 MHz RDIMM); performance of ~200 SPECint_rate base 2006 (with turbo and multithreading); Red Hat 6 Linux OS; a 300 GB hard drive (AMC-format daughterboard); and Ethernet I/O backplane, fabric (1 GEth or 10 GEth per plane) and base (1 GEth per plane). Each cluster consists of two Bono blades, one in an active state and the other in a standby state [11].

3.9 Malban Blades (Switching Blades) Figure 7 shows a Malban blade in the ATCA platform; the following are the features of the Malban blade [11]:


Fig. 6 Bono blades

Fig. 7 Malban blades

Core features are one Intel Merom Core 2 Duo processor (19 SPECint_rate 2006), two RDIMM slots with a maximum of 8 GB DDR2-400 ECC, Red Hat RHEL6 Linux OS, hardware diagnostics, and an Ethernet switch. The base switch (O&M) handles O&M traffic, with 13 GEth 1000Base-T ports to the base backplane and one 10 GEth uplink to the RTM (the RTM supporting 1 × 1 GEth for O&M I/O). The fabric switch handles telecom traffic with 200 Gbps switching and 12 × 10 GEth ports to the fabric backplane; here, two 10 GEth uplinks go to the RTM supporting 12 × 1 GEth for telecom I/O [11].

3.9.1 Femto Gateway Test Framework

This framework demonstrates the end-to-end communication that occurs between the simulators and the femto gateway. The femto gateway test framework consists of a femtocell (mini base station) or femto access point, a femto gateway, and CN elements. The femtocell access point combines all UMTS functions, i.e., NodeB and RNC, into a single physical network element; it connects to the Internet via the gateway, and the IP network is connected with the core network via the gateway. The femto gateway acts as an intermediary between the simulator femto access point and the core network [13]. The FGW is integrated on the ATCA hardware platform and provides both 3G femtocell and 4G femtocell functionalities. The HeNBGW is implemented with the addition of a SeGW: the HeNBGW can be used with the integrated security gateway (SeGW) to secure the communication, using IPsec tunnels to secure the communication between the HeNB and the femto gateway. The communication between the simulators and the FGW happens through the IP network and the femto gateway. The installation on the gateway can be done manually or automatically by executing the installation scripts; after the build is successfully installed on the gateway, call verification is done by executing call flow scripts on the gateway for both 3G and 4G.

Figure 8 shows an example of the basic script execution process: how the test coordinator performs operations on the gateway, and the operations to be done on the gateway from the test host server. From the test host server, the test coordinator logs on to the gateway to perform the testing operations on the femto gateway. The test coordinator performs the 3G/4G operations on a cluster; there are seven clusters, and each cluster consists of two Bono blades (active and standby). From the server, the test coordinator logs in to a Bono, using the ssh command to log in to the gateway. Builds are often copied from the server to the test host server and from the test host server to the gateway.

Fig. 8 Basic script execution diagram

The installation on the gateway can be done manually or using scripts, such as the 3G and 4G installation scripts. After the successful execution of the installation scripts on the gateway, the basic call script needs to be tested on the gateway; if an installation script fails, the error must be debugged and fixed. The build installation process is used by the test coordinator to perform the installation on both blades, bono1 and bono2 (active and standby), and the installation must be performed by the superuser or a user with superuser privileges. The procedure for the build installation:

1. Copy the build file <build>.tar.gz from the server to both Bonos (bono1 and bono2).
2. Enter the command tar -zmxvf FGW-<version>.tar.gz in the download location to extract the files. After extraction of the package, all files are placed in their respective folders with default files.
3. Enter the command stopbsg if any existing build or processes are running.
4. After extraction of the package, enter the following commands in the directory: cd DELIVERY/FGW/FGW-<version> followed by ./BSGInstall. The ./BSGInstall script creates the required directories for installation and the various directories used post-installation.
5. Configure the attributes in OAM_PreConfig.xml using the FGW_Config tool: populate the FGWConfig_All.ini file with correct values by entering them manually, copying existing ones from a backup, or reading them from the testbed configuration file using scripts. FGWConfig_All.ini is available in /opt/conf and FGWConfig.exe in /opt/bin. The testbed configuration file contains all the configured interface IP addresses (switch, Bonos, Malban, and SeGW); the scripts read the values from this file.
6. Execute the steps below to clean up the OAM_PersistentDB.xml file: cd /opt/conf and mv OAM_PersistentDB.xml OAM_PersistentDB.xml_bckp.
7. Run the config tool by executing ./FGWConfig.exe. This command loads the configuration present in FGWConfig_All.ini and generates OAM_PreConfig.xml.
8. Execute the command startbsg.
9. Check the status and processes on bono1 and bono2 with the commands status and showproc. The status command shows the version, IP configuration, and state (active/standby); the showproc command shows the processes running on the Bonos.

10. Patch installation process on the gateway:
- Copy the build from the server to the test host server.
- SCP the file from the test host to both Bonos (bono1 and bono2).
- Copy install_patch.sh into the /opt/scripts directory.
- Change the permissions: $ chmod 755 /opt/scripts/install_patch.sh and $ dos2unix /opt/scripts/install_patch.sh.
- Execute the following command to install the patch: $ /opt/scripts/install_patch.sh /home/admin/FGW-<version>.tar.gz
- After the successful patch installation, execute the basic call test.

After the successful build installation, patch installation, and basic call execution, sanity testing is done; if there is an issue with the patch install, build install, or basic call, it must be debugged and fixed before running sanity testing.
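The manual steps above lend themselves to scripting. The sketch below automates the copy-and-install sequence with Python's standard subprocess module; the host names, user, and paths are placeholders, key-based ssh login is assumed, and the installation scripts (BSGInstall, startbsg, etc.) are assumed to exist on the blades exactly as documented above.

```python
# Hypothetical automation of the documented build installation steps.
# Host names and file paths are placeholders; ssh key login is assumed.
import subprocess

BONOS = ["bono1", "bono2"]       # active and standby blades (placeholder names)
BUILD = "FGW-<version>.tar.gz"   # <version> left elided, as in the text

def run(host: str, command: str) -> None:
    """Run a shell command on a blade over ssh and fail loudly on error."""
    subprocess.run(["ssh", f"admin@{host}", command], check=True)

for bono in BONOS:
    # Step 1: copy the build archive to the blade.
    subprocess.run(["scp", BUILD, f"admin@{bono}:/home/admin/"], check=True)
    # Steps 2-4: extract, stop any running build, run the installer.
    run(bono, f"tar -zmxvf /home/admin/{BUILD}")
    run(bono, "stopbsg || true")
    run(bono, "cd DELIVERY/FGW/FGW-<version> && ./BSGInstall")
    # Steps 8-9: start the BSG and check state on the blade.
    run(bono, "startbsg && status && showproc")
```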

3.9.2 Femtocell Interfaces IP Connections

Figure 9 explains the femtocell end-to-end system connection: the core network elements and the femto IP network are connected through the femto gateway. The gateway uses a carrier-grade ATCA architecture, with redundancy used in the design. The FGW sits between the femto and the core network, and end-to-end communication happens through the gateway and the switch. On the switch, IP addresses are configured for the communication, to send and receive information from the femto to the femto gateway and from the femto gateway to the core network.

Fig. 9 Femto gateway interfaces IP network connection diagram


The control plane (CP) and user plane (UP) are both interfaces for communication, used to send and receive data [14]. The control plane (CP) is used for signaling or control messages between the network and the UE; it is also used for radio access bearer control and for the connection between the UE and the network. For control plane aggregation, the gateway provides a set of control plane aggregation capacities. The user plane (UP) is used for traffic on the user path; it is responsible for transferring user data such as voice or application data. The GW offers a user plane focus function alongside the control plane capacity. There are two domains: circuit-switched and packet-switched. In the packet-switched domain, to route data from source to destination the message is divided into units called packets, and these packets are routed individually from the source to the destination. In the circuit-switched domain, a dedicated path is established from source to destination before data is transferred; in the packet-switched domain there is no need to establish such a path.
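To make the packet-switching idea concrete, the short sketch below splits a message into fixed-size packets that could then be routed individually and reassembled in order; the packet size and field names are arbitrary illustration values, not the gateway's actual framing.

```python
# Toy illustration of packet switching: the message is divided into units
# (packets) that are routed individually and reassembled at the destination.
def packetize(message: bytes, size: int = 8) -> list:
    return [
        {"seq": i, "payload": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

packets = packetize(b"user data sent over the packet-switched domain")
# Packets may arrive out of order; the sequence number restores the order.
reassembled = b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
assert reassembled == b"user data sent over the packet-switched domain"
```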

4 4G Gateway The ATCAv2-based small cell gateway supports the following functions for 4G [15]: 4G GW software architecture; MST—MME SCTP terminator call processing; HeNBGrp—HeNB group instance process monitor; HST—HeNB SCTP terminator; DS—directory services; STATSD—PM collector. The LTE 4G gateway works through the following functionalities. The MME SCTP terminator is primarily responsible for establishing and monitoring the SCTP association with one specific MME. SCTP messages received from the MME are classified into one of two groups: UE-associated signaling messages, which are forwarded to the relevant HeNB group process, and non-UE-associated signaling, including S1AP Handover Request messages, which is forwarded to the directory services process. The HeNB SCTP terminator is primarily responsible for accepting the SCTP associations from HeNBs and, more importantly, for load-balancing the HeNBs among the HeNB group process instances; this process terminates the SCTP associations with HeNBs, and all messages toward the HeNB go through it. The HeNB group process is responsible for managing a set of HeNBs and all the UEs connected via these HeNBs. This call-processing-related process within the 4G small cell gateway acts as a virtual eNB to the MME and as a virtual MME to the HeNB.

Its responsibilities include storage of HeNB configuration data, HeNB management, UE signaling context management and S1 interface handling, allocation and maintenance of virtual UE identifiers (virtual eNB UE S1AP IDs and virtual MME UE S1AP IDs), maintenance of the MME context, GUMMEI list, and MME relative capacity, storage of virtual eNB S1AP IDs, and handling of OAM configuration and management [16] (Table 1).

Table 1 HeNB group information table

HENBGRPI > show supported commands   Description
show mme                             Display MME data in detail
show mme henbgrp                     Display MME data in HeNBGrp
show mmer                            Display MME region
show henb                            Display base HeNB context
show henb henbgrp                    Display HeNBGRP HeNB context
show gwdata                          Display HeNBGW data
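The classification rule of the MME SCTP terminator described above can be sketched in a few lines; the message fields and process names are simplified assumptions for illustration, not the actual gateway code.

```python
# Simplified sketch of the documented routing rule: UE-associated signaling
# goes to the owning HeNB group process; everything else (including S1AP
# Handover Request) goes to the directory services process.
def route_mme_message(msg: dict) -> str:
    if msg.get("ue_associated") and "henb_group" in msg:
        return f"henb_group_{msg['henb_group']}"
    return "directory_services"

print(route_mme_message({"type": "DownlinkNASTransport",
                         "ue_associated": True, "henb_group": 3}))
print(route_mme_message({"type": "HandoverRequest", "ue_associated": False}))
```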

4.1 Directory Services This process is primarily responsible for handling MME associations and non-UE-associated signaling. The directory services process maintains the mapping of HeNBs to HeNB group process instances [10]. The eNB UE S1AP ID range of each HeNB group instance in the HeNB gateway is configured in the directory services process. Because directory services holds the HeNB group instance [17] for each HeNB and knows the mapping of eNB UE S1AP IDs to HeNB group instances, it is responsible for handling all non-UE-associated signaling (e.g., paging and reset) and for routing handover requests to the appropriate HeNB group. Directory services is also responsible for establishing and maintaining the S1 connection to all MMEs [16] (Table 2).
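Because each HeNB group instance is configured with a range of eNB UE S1AP IDs, the directory services lookup described above amounts to a range search. The sketch below uses invented instance names and ID ranges purely for illustration.

```python
# Hypothetical illustration of the directory services mapping: each HeNB
# group instance owns a configured range of eNB UE S1AP IDs.
RANGES = {  # instance name -> (first ID, last ID); values are invented
    "henb_group_1": (0, 9999),
    "henb_group_2": (10000, 19999),
}

def group_for_s1ap_id(enb_ue_s1ap_id):
    """Return the HeNB group instance owning the ID, or None if unknown."""
    for instance, (lo, hi) in RANGES.items():
        if lo <= enb_ue_s1ap_id <= hi:
            return instance
    return None

print(group_for_s1ap_id(12345))  # henb_group_2
```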

Table 2 Directory service information table

DS > ds show supported command   Description
alarm                            Display alarm status
mme conf                         Display MME configured data in detail
mme                              Display MME summary data
mme det                          Display MME learn data in detail
mmer                             Display MME region
gwdata                           Display HeNBGW data

5 Conclusion This article has described femtocell network end-to-end system connectivity and testing. The femtocell network is implemented using the femto, security gateway, femto gateway, and core network. Femtocells can provide high-quality network access to indoor users at a very low cost. The gateway is a high-performance solution, offering dedicated security processors along with high throughput performance. It is available in a variety of standards-based ATCA chassis, and the main use

of the FGW is to provide better mobility and operation, administration, and maintenance (OAM) support for UEs. A typical ATCA-based framework utilizes redundancy, which means the chassis contains numerous switching and packet processing blades that provide backup in case of a component failure. The FGW is integrated on the ATCA platform and acts as an intermediary between the simulator UE and the simulator core network. Build installation and patch installation are done on the femto gateway manually or by executing scripts, with debugging if a script fails or aborts. The call flow occurs between the UE, femto gateway, and core network components, and after the successful installation phase, the test cases that verify whether a call is successful are run by executing the call flow scripts on the gateway. For the study of advanced telecommunications computing architecture (ATCA) in LTE communication, the latest frameworks establish various aspects of HetNet communication. Moreover, this framework supports different LTE communication using femtocells in future cellular networks, and it can be revised with different mobility conditions of the network in the future.

References

1. Gu, F.: Network service selection in the femtocell communication systems. In: IEEE International Conference on Computer Network, Electronic and Automation, November 28, 2019
2. Neruda, M., Vrana, J.: Femtocells in 3G mobile networks. In: 2009 16th International Conference on Systems, Signals and Image Processing
3. Knisley, D.N. (Airvana, Inc.), Favichia, F. (Alcatel-Lucent): Standardization of femtocells in 3GPP2. IEEE Communication Magazine, September 2009
4. Singh, K., Thakur, S.: A review on femtocell and its 3G and 4G architecture. Int. J. Comput. Sci. Mobile Comput. 4(6), 1166–1176 (2015)
5. Chandrasekhar, V., Andrews, J.G.: Femtocell networks: a survey. IEEE Communication Magazine, September 2008
6. Ahmed, A.U., Islam, M.T., Ismail, M.: A review on femtocell and its diverse interference mitigation techniques in heterogeneous network. Wireless Pers. Commun. 78, 85–106 (2014)
7. Wang, H.L., Kao, S.J.: An efficient deployment scheme for mitigating interference in two-tier networks. In: 2012 IEEE International Conference on Communication, Networks and Satellite
8. https://www.cisco.com/c/en/us/products/wireless/ggsn-gateway-gprs-support-node/index.html
9. Chiba, T.: Efficient route optimization methods for femtocell-based all IP networks. In: 2009 IEEE International Conference on Wireless and Mobile Computing, Networking and Communications
10. https://www.cisco.com/c/en/us/td/docs/wireless/asr_5000/21-2/HeNB-GW/21-2-HeNBGW-Admin/21-2-HeNBGW-Admin_chapter_01.html
11. Kawasaki, H., Matsuoka, S., Makino, A., Ono, Y.: ATCA server systems for telecommunications services. FUJITSU Sci. Tech. J. 47(2), 215–221 (2011)
12. https://www.interfacebus.com/Glossary-of-Terms-Network-Topologies.html
13. Garg, K.: Features of femto convergence server using SIP. Int. J. Eng. Res. Technol. (IJERT) 2(4) (Apr 2013)
14. Chen, J., Rauber, P., Singh, D., Sundarraman, C., Tinnakornsrisuphap, P., Yavuz, M.: Femtocells—architecture & network aspects. Qualcomm (28 Jan 2010)
15. Vitthal, J.P., Jayarekha: Femtocell—energy efficient cellular networks. Int. J. Recent Innov. Trends Comput. Commun. 2(4), 867–872. ISSN: 2321-8169
16. Monitor the MME Service, MME Administration Guide, StarOS Release 21.3. https://www.cisco.com/c/en/us/td/docs/wireless/asr_5000/21-3_N5-5/MME/21-3-MME-Admin/21-3-MME-Admin_chapter_0111011.pdf
17. Roychoudhury, P., Roychoudhury, B., Saikia, D.K.: Hierarchical group based mutual authentication and key agreement for machine type communication in LTE and future 5G networks. Sec. Commun. Netw. 2017, Article ID 1701243, p. 21 (2017)
18. Srinivasa Rao, V. (Senior Architect), Gajula, R.: Protocol signaling procedures in LTE. Radisys White Paper

Infected Inflation and Symptoms Without the Impact of Covid 19 with Ahp Calculation Method Nizirwan Anwar, Ahmad Holidin, Galang Andika, and Harco Leslie Hendric Spits Warnars

Abstract Indonesia is one of the coronavirus-positive countries (Covid-19). The first cases in the country occurred in two residents of Depok, West Java, as announced by President Joko Widodo directly at the Presidential Palace. Since it broke out in December 2019, the new type of coronavirus has made people worry about coughing or sneezing, because the virus that causes Covid-19 is transmitted through droplets, i.e., particles from coughing, sneezing, or talking. This paper designs an information system that facilitates decisions with the Analytical Hierarchy Process (AHP) in order to see the development of infected patients and OTG (people without symptoms). The Analytical Hierarchy Process (AHP) is a decision-making method based on pairwise comparisons between the selection criteria and pairwise comparisons between the available choices, so that the displayed data gives medical staff information according to their needs, interactively and more efficiently. Keywords Analytical hierarchy process · Decision support systems · Web-based application · Covid-19 · Information systems · Modeling systems

N. Anwar Faculty of Computer Science, Esa Unggul University, Jakarta, Indonesia 11510 e-mail: [email protected] A. Holidin Master of Information Technology, University of Raharja, Tangerang, Indonesia 15117 e-mail: [email protected] G. Andika Functional Motor Vehicle Inspector Transportation Department of South Tangerang City, Tangerang, Indonesia H. L. H. S. Warnars (B) Computer Science Department, BINUS Graduate Program—Doctor of Computer Science, Bina Nusantara University Jakarta, West Jakarta, Indonesia 11480 e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_33


1 Introduction In early 2020, the world was shocked by the coronavirus outbreak (Covid-19), which infected almost all countries in the world. Since January 2020, WHO has declared a global emergency related to this virus [1, 2]. This is an extraordinary phenomenon of the twenty-first century, whose scale might be likened to World War II, because large-scale events (international sporting events, for example) were almost entirely postponed or even canceled [3]; such a situation had previously happened only during a world war [4]. As of March 19, 2020, 214,894 people were infected with the coronavirus, 8,732 people had died, and 83,313 people had recovered [5]. Older populations put health and social systems in crisis because they are unable to overcome problems that the younger population can overcome; only an early assessment of this "fragility" in the community and in long-term care can prevent this systemic crisis, and, as a consequence, a new and modern health care system is mandatory [6]. It is also important to plan the organization of new hospitals under the assumption that a large proportion of capacity and expertise is needed to treat and manage seriously ill patients [7]. Specifically in Indonesia, the Government issued a disaster emergency from 29 February 2020 to 29 May 2020 related to this pandemic, a period of 91 days [8]. Steps have been taken by the government to address this extraordinary case, one of which is to promote the social distancing movement: to reduce or even break the Covid-19 infection chain, one must maintain a safe distance of at least 2 m from other people, avoid direct contact with others, and avoid mass gatherings [9]. Covid-19 data management, specifically data on infected patients and people without symptoms, provides information that is important in health care, and information delivery will be more efficient with a visual model that is interesting to look at [10].

2 Materials and Methods AHP (Analytic Hierarchy Process) is a general theory of measurement used to derive ratio scales from both discrete and continuous pairwise comparisons. AHP decomposes a complex multi-factor or multi-criteria problem into a hierarchy. The hierarchy is a representation of the complex problem in a multi-level structure, where the first level is the goal, followed by the levels of factors, criteria, sub-criteria, and so on, down to the last level of alternatives. With a hierarchy, a complex problem can be broken down into groups that are then arranged so that the problem appears more structured and systematic [11]. As seen in Fig. 1, the AHP process starts with arranging the hierarchy, which defines the goal of the AHP, the criteria, and, if applicable, the sub-criteria and

Fig. 1 AHP flow activities

alternatives [12]. The second step assesses the criteria selected in the first step, including the sub-criteria, and also assesses the alternatives [13]. The third step determines the AHP decision according to the goal for which the AHP method is applied. Lastly, the fourth step checks logical consistency, so that the resulting logic is easy for the user to understand when making a decision [14]. The issues being discussed are broken down into their elements, namely criteria and alternatives, and then arranged into a hierarchical structure like Fig. 2 [15]. As mentioned before, the hierarchy starts with the goal, which states the purpose of applying AHP as a decision support system for the decision-making process [16]. The criteria are the conditions that apply as part of the decision-making process, the sub-criteria deepen the criteria to make them more detailed and comprehensive, and, finally, the alternatives are the options selected against the sub-criteria as the detail of the decision-making result [17].

Fig. 2 AHP hierarchy structure

Table 1 shows the four criteria used to confirm whether someone has been infected with Covid-19: fever, dry cough, difficulty breathing or shortness of breath, and rapid test results. Each criterion has a weight: C1 has 10%, C2 has 20%, C3 has 30%, and C4 has 40%, so the weights total 10% + 20% + 30% + 40% = 100%. This composition of 10, 20, 30, and 40% is based on a user requirement process carried out with experts through interviews, a focus group discussion, and observation. Table 2 shows the nine weighting scores as the normalized range applied to the criteria, from weight 1 ("as important as") up to weight 9 ("absolute importance of").

Table 1 Criteria information

Criteria   Information                                   Criteria weight (%)
C1         Fever                                         10
C2         Dry cough                                     20
C3         Difficulty breathing or shortness of breath   30
C4         Rapid test results                            40

Table 2 Weighting score

Type of weighting                          Weight
Absolute importance of                     9
Approaching absolute from                  8
Very important of                          7
Approaching is very important from         6
More important than                        5
Approaching is more important than         4
A little more important than               3
Approaching a little more important than   2
As important as                            1

3 Discussion Table 3 shows the pairwise comparison matrix (PCM) for criteria C1 to C4 from Table 1. This PCM was created from the criteria weight column in Table 1, with criteria weights of 10% for C1, 20% for C2, 30% for C3, and 40% for C4.


Table 3 Pairwise comparison matrix (PCM) for criteria

Criteria   C1      C2     C3     C4
C1         1.00    0.50   0.33   0.25
C2         2.00    1.00   0.67   0.50
C3         3.00    1.50   1.00   0.75
C4         4.00    2.00   1.33   1.00
Total      10.00   5.00   3.33   2.50

The PCM was computed using Eq. (1), where the weight of the row criterion is divided by the weight of the column criterion. For example, the first row of Table 3 has the scores C1/C1 = 10%/10% = 1.00, C1/C2 = 10%/20% = 0.50, C1/C3 = 10%/30% = 0.33, and C1/C4 = 10%/40% = 0.25, as seen in the first row of Table 3 (1.00, 0.50, 0.33, 0.25). The next rows were computed with Eq. (1) as well, and the result is seen in Table 3. The last row in Table 3 is the total for each column, computed using Eq. (2); for example, the first column has a total of 10.00 from adding 1.00 + 2.00 + 3.00 + 4.00.

PCM = row_criteria / col_criteria (1)

Total_row = sum(column) (2)
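To make Eqs. (1) and (2) concrete, here is a minimal Python sketch (an illustration, not part of the paper's PHP implementation) that rebuilds Table 3 from the Table 1 weights.

```python
# Build the pairwise comparison matrix (Eq. 1) and the column totals (Eq. 2)
# from the criteria weights of Table 1.
weights = {"C1": 0.10, "C2": 0.20, "C3": 0.30, "C4": 0.40}
criteria = list(weights)

pcm = [[weights[r] / weights[c] for c in criteria] for r in criteria]
totals = [sum(col) for col in zip(*pcm)]

print([round(v, 2) for v in pcm[0]])   # [1.0, 0.5, 0.33, 0.25]
print([round(t, 2) for t in totals])   # [10.0, 5.0, 3.33, 2.5]
```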

After the PCM process, the PCM must be normalized, producing what is called the normalized pairwise comparison matrix (NPCM); using Eq. (3), each score in Table 3 is divided by its total column score. The remaining quantities are computed with Eqs. (4)-(8):

NPCM = score / Total_column (3)

Criteria weight = Avg(sum_row_score) (4)

EV = (X1 × … × Xn)^(1/n) (5)

where X1…Xn are all the scores in the same row (multiplied together) and n is the number of criteria.

Criteria weight = EV / sum(EV) (6)

Weighted sum value = sum(row score) (7)

Ratio = Weighted sum value / Criteria weight (8)

Note that Eq. (6) normalizes each eigenvalue by the sum of all eigenvalues, which reproduces the same criteria weights as Eq. (4).


Table 4 Normalized pairwise comparison matrix (NPCM) for criteria

Criteria   C1     C2     C3     C4     Criteria weight (Eq. 4)   EV (Eigen Value)   Criteria weight (Eq. 6)   Ratio
C1         0.10   0.10   0.10   0.10   0.10                      0.452              0.10                      4.00
C2         0.20   0.20   0.20   0.20   0.20                      0.904              0.20                      4.00
C3         0.30   0.30   0.30   0.30   0.30                      1.355              0.30                      4.00
C4         0.40   0.40   0.40   0.40   0.40                      1.807              0.40                      4.00
Total      1.00   1.00   1.00   1.00   1.00                      4.518              1.00                      4.00


Table 4 shows the implementation of Eq. (3) upon each score in Table 3; the composition of each column is the same: 0.10, 0.20, 0.30, and 0.40. The last row of each column was computed using Eq. (2): all column scores are summed, and each column gives the same score of 1 (0.10 + 0.20 + 0.30 + 0.40 = 1), which confirms a correct result. The criteria weights, which confirm the ranking of each criterion, were computed using either Eq. (4) or Eq. (6). The column "Criteria weight (Eq. 4)" in Table 4 shows the result of applying Eq. (4) as the average per row, and the column "Criteria weight (Eq. 6)" shows the result of applying Eq. (6), where EV is the eigenvalue of the row. The results of the two criteria weight columns are the same, and both have a total of 1 (0.10 + 0.20 + 0.30 + 0.40 = 1). Meanwhile, the EV (eigenvalue) column in Table 4 was computed using Eq. (5), which multiplies all scores of a row in Table 3 and raises the product to the power 1/n, where n is the number of criteria (here n = 4, for the criteria C1, C2, C3, and C4). Applying Eq. (5) to Table 3 yields the EV column in Table 4 with the scores 0.452, 0.904, 1.355, and 1.807 and a total of 4.518, where this total is the sum of the EV column using Eq. (2). For example, the first row of Table 4 has an EV score of 0.452 from Eq. (5): (1.00 × 0.50 × 0.33 × 0.25)^(1/4) ≈ 0.452, where the factors come from the first row of Table 3. Moreover, the Ratio column in Table 4 was computed using Eq. (8), Ratio = Weighted sum value / Criteria weight, where the weighted sum value uses Eq. (7) and the criteria weight uses Eq. (4) or Eq. (6). All entries in the Ratio column have the same score, since the columns C1 to C4 of Table 4 have identical values within each row. For example, the first row of the Ratio column has a score of 4.00: the weighted sum value from Eq. (7) is the sum of the first row (0.10 + 0.10 + 0.10 + 0.10) = 0.40, divided by the criteria weight of 0.10, giving 0.40/0.10 = 4.00. The total row of the Ratio column shows 4.00, matching the common ratio of all rows (equal to n, the number of criteria).
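The whole computation of Table 4 fits in a few lines. The sketch below is a Python illustration of Eqs. (3)-(8) (the paper's actual implementation is in PHP); it reproduces the eigenvalues, criteria weights, and ratios shown in Table 4.

```python
# Normalize the PCM (Eq. 3), compute eigenvalues as the geometric mean of
# each row (Eq. 5), derive criteria weights (Eq. 6), and check the ratio
# of weighted sum to weight (Eqs. 7-8), reproducing Table 4.
import math

weights = {"C1": 0.10, "C2": 0.20, "C3": 0.30, "C4": 0.40}
w = list(weights.values())
pcm = [[wr / wc for wc in w] for wr in w]                # Eq. (1)
totals = [sum(col) for col in zip(*pcm)]                 # Eq. (2)

npcm = [[pcm[r][c] / totals[c] for c in range(4)] for r in range(4)]   # Eq. (3)
ev = [math.prod(row) ** (1 / 4) for row in pcm]          # Eq. (5): geometric mean
crit_weight = [e / sum(ev) for e in ev]                  # Eq. (6)
weighted_sum = [sum(row) for row in npcm]                # Eq. (7)
ratio = [ws / cw for ws, cw in zip(weighted_sum, crit_weight)]         # Eq. (8)

print([round(e, 3) for e in ev])           # [0.452, 0.904, 1.355, 1.807]
print([round(x, 2) for x in crit_weight])  # [0.1, 0.2, 0.3, 0.4]
print([round(x, 2) for x in ratio])        # [4.0, 4.0, 4.0, 4.0]
```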


4 Database and User Interface Print Screen For the implementation, this AHP application was built web-based using PHP (Personal Home Page) as the server programming language and a MySQL database. Figure 3 shows the logical record structure of the implementation, which uses four tables: Nilai, kriteria, alternatif, and rank. Table Nilai has 2 attributes, Nilai_id and ket_Nilai, defined for scoring purposes. Table kriteria has the attributes kriteria_id and nam_krit, where kriteria_id refers to the criteria used in this paper (C1, C2, C3, and C4, as seen in Table 1) and nam_krit holds the information for each criterion. Table Alternativ has 2 attributes, Alt_id and Nam_alt, where Alt_id is the patient identification and Nam_alt is the name of the patient. The arrows show that each primary key moves to the next table as a foreign key, and these foreign keys relate the tables; a minimal schema sketch is given after the figure captions below.

Fig. 3 Logical record structure

Figure 4 shows the user interface (UI) for the login menu. At first, each user should register their data, such as username, password, name, gender, address, phone number, and email; however, for prototyping purposes, the same login and password are used here. The system checks the table that records the username and password, and if they are approved, the user can enter the main menu of the application.

Fig. 4 User interface login for the AHP implementation

Figure 5 shows the criteria screen, where the user can enter, update, and delete the criteria, including the weight of each criterion. The content of Table 1 was entered in the application, so criteria C1, C2, C3, and C4 have weights 0.10, 0.20, 0.30, and 0.40, respectively.

Fig. 5 User interface data criteria for the AHP implementation

Figure 6 shows the alternative data, which records each person who becomes an alternative, with data such as NIK (Nomor Induk Kependudukan, the Indonesian national citizen number), name, place and date of birth, and gender. The attribute ID_alternative is created automatically, in order, when new alternative data are added. As with the criteria data in Fig. 5, the user can enter, update, and delete the alternative data. Furthermore, Fig. 7 shows the result of running the AHP, which shows the ranking of each input alternative computed with the criteria and the equations above.

Fig. 6 User interface data alternative for the AHP implementation

Fig. 7 User interface menu ranking result for the AHP implementation
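As a minimal sketch of the logical record structure in Fig. 3, the snippet below creates the four tables with SQLite standing in for the paper's MySQL; the column types, the weight column, and the rank-table layout are assumptions, since the paper does not fully specify them.

```python
# Hypothetical DDL for the tables of Fig. 3 (Nilai, kriteria, alternatif,
# rank); column types are assumed, and foreign keys follow the arrows
# described in the text. SQLite stands in for the paper's MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nilai      (nilai_id INTEGER PRIMARY KEY, ket_nilai TEXT);
CREATE TABLE kriteria   (kriteria_id TEXT PRIMARY KEY, nam_krit TEXT,
                         bobot REAL);  -- assumed column for criteria weight
CREATE TABLE alternatif (alt_id INTEGER PRIMARY KEY, nam_alt TEXT);
CREATE TABLE rank       (alt_id INTEGER REFERENCES alternatif(alt_id),
                         kriteria_id TEXT REFERENCES kriteria(kriteria_id),
                         nilai_id INTEGER REFERENCES nilai(nilai_id));
""")
conn.execute("INSERT INTO kriteria VALUES ('C1', 'Fever', 0.10)")
print(conn.execute("SELECT * FROM kriteria").fetchall())
```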

5 Conclusion Using AHP to help people and decision-makers understand the decision-making process will help the community understand the ranking of Covid-19 patients in terms of four criteria: fever, dry cough, difficulty breathing or shortness of breath, and rapid test results. This AHP was implemented web-based using PHP server programming and a MySQL database.


References

1. Yang, G., Tan, Z., Zhou, L., Yang, M., Peng, L., Liu, J., He, S., et al.: Effects of ARBs and ACEIs on virus infection, inflammatory status and clinical outcomes in COVID-19 patients with hypertension: a single center retrospective study. Hypertension 76(1), 51–58 (2020)
2. D'Amico, F., Baumgart, D.C., Danese, S., Peyrin-Biroulet, L.: Diarrhea during COVID-19 infection: pathogenesis, epidemiology, prevention, and management. Clin. Gastroenterol. Hepatol. 18(8), 1663–1672 (2020)
3. Lloyd-Sherlock, P.G., Kalache, A., McKee, M., Derbyshire, J., Geffen, L., Casas, F.G., Gutierrez, L.M.: WHO must prioritise the needs of older people in its response to the covid-19 pandemic. BMJ 368, m1164 (23 March 2020). https://doi.org/10.1136/bmj.m1164
4. Tay, M.Z., Poh, C.M., Rénia, L., MacAry, P.A., Ng, L.F.: The trinity of COVID-19: immunity, inflammation, and intervention. Nat. Rev. Immunol. 20(6), 363–374 (2020)
5. Wu, Y., Xu, X., Chen, Z., Duan, J., Hashimoto, K., Yang, L., Yang, C.: Nervous system involvement after infection with COVID-19 and other coronaviruses. Brain Behav. Immun. 87, 18–22 (2020)
6. Matricardi, P.M., Dal Negro, R.W., Nisini, R.: The first, holistic immunological model of COVID-19: implications for prevention, diagnosis, and public health measures. Pediatr. Allergy Immunol. 31(5), 454–470 (2020)
7. Capuano, A., Scavone, C., Racagni, G., Scaglione, F.: NSAIDs in patients with viral infections, including Covid-19: victims or perpetrators? Pharmacol. Res. 104849 (2020)
8. Pascarella, G., Strumia, A., Piliego, C., Bruno, F., Del Buono, R., Costa, F., Agrò, F.E.: COVID-19 diagnosis and management: a comprehensive review. J. Internal Med. 288(2), 192–206 (2020)
9. Brufsky, A.: Hyperglycemia, hydroxychloroquine, and the COVID-19 pandemic. J. Med. Virol. 92(7), 770–775 (2020)
10. Huang, G., Pan, Q., Zhao, S., Gao, Y., Gao, X.: Prediction of COVID-19 outbreak in China and optimal return date for university students based on propagation dynamics. J. Shanghai Jiaotong Univ. (Sci.) 25, 140–146 (2020)
11. Albahri, A.S., Al-Obaidi, J.R., Zaidan, A.A., Albahri, O.S., Hamid, R.A., Zaidan, B.B., Hashim, M., et al.: Multi-biological laboratory examination framework for the prioritization of patients with COVID-19 based on integrated AHP and group VIKOR methods. Int. J. Inf. Technol. Decis. Making 19(05), 1247–1269 (2020)
12. Mohammed, T.J., Albahri, A.S., Zaidan, A.A., Albahri, O.S., Al-Obaidi, J.R., Zaidan, B.B., Hadi, S.M., et al.: Convalescent-plasma-transfusion intelligent framework for rescuing COVID-19 patients across centralised/decentralised telemedicine hospitals based on AHP-group TOPSIS and matching component. Appl. Intell. 1–32 (2021)
13. Alqahtani, A.Y., Rajkhan, A.A.: E-learning critical success factors during the covid-19 pandemic: a comprehensive analysis of e-learning managerial perspectives. Educ. Sci. 10(9), 216 (2020)
14. Garg, A., Ganesh, T.: An analytical hierarchy process approach for COVID-19 risk assessment study amid the latest re-open and unlock phase in India. Int. J. Analytic Hierar. Proc. 12(3) (2020)
15. Badillo-Rivera, E., Fow-Esteves, A., Alata-López, F., Virú-Vásquez, P., Medina-Acuña, M.: Environmental and social analysis as risk factors for the spread of the novel coronavirus (SARS-CoV-2) using remote sensing, GIS and analytical hierarchy process (AHP): case of Peru. medRxiv (2020)
16. Cao, C., Xie, Y., Zhou, Y., Gong, Y., Gao, M.: Assessment of WeChat work online teaching modes under COVID-19: based on AHP and fuzzy comprehensive evaluation method. Open J. Soc. Sci. 8(7), 349–358 (2020)
17. Pamungkas, T.S., Nugroho, A.S., Wasiso, I., Anggoro, T., Kusrini, K.: Decision support system for direct target cash recipients using the AHP and K-means method. RESEARCH: J. Comput. Inf. Syst. Technol. Manage. 3(2), 45–54 (2020)

Smartphone Application Using Fintech in Jakarta Transportation for Shopping in the Marketplace Diana Teresia Spits Warnars, Ersa Andhini Mardika, Adrian Randy Pratama, M. Naufal Mua’azi, Erick, and Harco Leslie Hendric Spits Warnars Abstract Technology-based mobile applications are merging with the transportation market, as public passenger transportation stands to benefit from marketplace coupons. Users are also somewhat disappointed by the nature of many online application-based strategies. The survey technique is often used to collect consumer data by stratified random sampling between two different classes (users and non-users), and the results show the significance of ease of use and usability in improving attitudes. In this paper, the business process is modeled using a use case diagram and the database using a class diagram. Further, the user interface of the mobile application is demonstrated for convenience in understanding our proposed mobile application. Keywords Mobile application · Smart fintech · Intelligent transportation · The smart marketplace · Mobile payment

1 Introduction Almost everyone knows the conditions in Jakarta, namely congestion and vehicle density. One reason is that the number of vehicles keeps increasing, and the presence of online transportation does not reduce congestion, although it does facilitate community travel [1]. To understand the conditions, it is necessary to interact with private, public, and online vehicle users; from their experiences, it can be sorted out how they feel in Jakarta. It is necessary to understand whether this issue arises from government mistakes or mistakes in the community, but blaming one of the parties does not fix the problem: understand what users need and how to create an outstanding mobile experience for them [2].

403

404

D. T. S. Warnars et al.

a better public transportation system in Jakarta with a smart payment system to solve the problem. A QR code and a smartcard bring up the payment system. Users easily make transactions through an application with a QR code for payment of public transportation. Before making a payment, the user must prepare a sufficient balance in the application or smartcard [3]. This application also provides collaboration through the points gained and shared from the use of age transportation for the marketplace so that it not only encourages but also supports the society. This application is intended to help people travel every day easily without feeling any more traffic jams. So they can travel comfortably and quietly, not even wasting too much time or too late to arrive at their destination. Currently, online transportation is like Grab, Gojek, My Blue Bird, and others. That could be an example that many are aware of this problem [4]. The design of this application that developed for users can easily be used. Even users who have never had experience in an application like this or do not understand how to use it before can learn to use it easily [5]. Because there are procedures for their use and can be implemented as soon as possible for their daily trips, mobile applications are IT software developed specifically for cellular operating systems installed on handheld devices. Sometimes, application design does not pay attention to elements of successful application design, such as the manufacturer and user’s varying views [6]. Effectiveness and efficiency are the two main aspects of making a good application design. There are applications for online transportation such as Grab, Gojek, My Blue Bird, and others. The technology created will be applied by the developer who wants to develop this application still the best choice for the community to reduce congestion in Jakarta or other regions [7].

2 Literature Review Recently digitalization has a substantial impact on the financial services industry [8]. Fintech is a new and promising innovation. Loans for P2P, e-wallet, bitcoin, T-commerce, mobile banking are all financial technologies that change our lives [9]. With this, the Fintech organization, especially startups, is reshaping the financial services industry, offering customer-focused services that can combine speed and flexibility [10]. Because it has changed various patterns related to our consumption, it popularized and progressed on cellular devices and services. It is possible to buy products that are needed anywhere and anytime for 24 h [11]. The application of new technologies that are present throughout the world is related to the revolution in wireless connectivity: mobile payments [12]. The aim of this study was to define the “QR code and smartcard” consumer factors for the reception of Fintech services. To achieve this goal, the research developed the model and acceptance of Fintech services by utilizing technology through mobile applications [13]. Also, concern for information privacy (CFIP) is a problem that is getting worse in the financial industry, and the individual’s ability as to moderate

Smartphone Application Using Fintech in Jakarta …

405

variable to test its impact on what will be used [14]. Together with the promotion and credibility conditions, social impacts have a significant impact on what will be used in Internet banking receipts [7]. An interactive framework of electronic payment processes, using the number of credit card accounts used and visual indicators to evaluate consumer data in order to assess the type of financial instrument [15]. In our previous study, a mobile application has been built to deal with system-based traffic density. Therefore, traffic jam prevention schemes have been introduced in this idea [16].

3 Proposed Systems The following characteristics must be present in the suggested method. Transactions must be performed between different users on the network in a protected format. This allows the user the flexibility to very easily move data over the network by compressing several files. This also analyzes the applications’ functionality and flow that already reside in the same domain as the software in this analysis. It aims to sort out the functionality and flow that will be implemented on the basis of the benefits that have been produced from the analyzed application in the framework under review. Smartphone applications using Fintech in Jakarta transportation are a mobile application for the community; the goal is to use public transportation or online every day. This smartphone system application is made simple, so users can use it easily and be designed to be exciting to use. Figure 1 explains how users register in the system, where they must enter their data to connect personal data to the system such as name, date of birth, gender, telephone number, email, and password. After registering, by entering the account identity, the user is allowed to join the login process. It does not have to be precise between email and password, since both are interrelated and cannot be isolated. Typically, since it is a unique identity, user email never changes, but passwords can be changed to preserve a security account as required. A number of smartphones, in particular Android-based mobile phones, typically enable users to activate location information by requesting the activation of the GPS feature before continuing to select transport. The presence of a GPS or Global Positioning System for this digital age makes the navigation process much more convenient. Users of gadgets will immediately determine their location. Users can understand how the transport functions and the location of the vehicle, which is there according to the destination, in the transportation selection method. After completion, the system automatically calculates the points obtained from ordering the transportation, and the user can see them directly. It is possible to swap the points earned from transport bookings by coupons that operate together on the market; the system would run automatically so that consumers do not have to wait long to get the coupon. The aim of designing a user interface design is to make user interaction as effective and simple as possible in terms of achieving user objectives in Fig. 2 user interface design. In order to build a system that cannot only run but can also be used and

406

D. T. S. Warnars et al.

Fig. 1 Use case diagram of the proposed application

adapted to user needs, the design process must be balanced between technological functions and visual elements (e.g., mental models). Figure 2a displays are the user interface registration form; after downloading this application, user will be asked to enter personal data such as name, date of birth, gender, telephone number, email, and password. Re-check all data and documents that the upload user is correct and correct. After that, click the send button to send data to the registration system. Users are required to register because to maintain the security of the account, and there is no fake account to make a fictitious order that can harm the driver. After registration, the user will see the overall application display. From Fig. 2b in the user interface, the home menu displays several menus, namely category transportation, history, points, and settings. Menu category transportation is to place an order and chooses the vehicle which wants it. History menu is to view the history of data after traveling. As for the menu point, this is the favorite part for users because here users can see the total points and coupons that will be obtained. Then the menu section in the settings is useful for changing the language or user data. From Fig. 3a user interface category transportation, if the user selects this menu, the user will be asked to choose the vehicle to be used to reach the destination. Users must fill in the address to pick up and see the available vehicle and the address where to want to go. After determining the address, the rate and distance will appear on the screen. Then the user places an order on the screen and automatically, and the application will detect the available vehicle. If the user wants to cancel the order,

Smartphone Application Using Fintech in Jakarta …

407

Fig. 2 a Home menu user interface, b Menu user registration user interface

the user presses the red cross button on the screen. Then if the order continues, the user will be given a view such as making a booking. Figure 3b feature for making payments is the user interface scan the QR code. This QR code payment system makes it easy for users to make transactions here, users can view balances, total payments, and the remaining balance after making payments. So users do not need to be confused to calculate how much balance is left in the application. How it works scan, the QR code is also straightforward to open the camera view and then navigate to the existing barcode. This feature is beneficial for simple living users without cash or debit cards but only carries existing gadgets and for those who do not understand using gadgets can also be through existing smartcards, and the system automatically works through the application. This history list can be seen from Fig. 4a user interface order history when the user has completed the journey with all data entered into the system. Not only is this history menu for displaying travel details, but it can also see travel points, so users know which vehicles get more points. When a user receives an issue from a system background from a driver or vehicle, it will automatically track user feedback sent through email in detail. For applications like this, this historical feature is important because the system can easily monitor it faster.

408

D. T. S. Warnars et al.

Fig. 3 a Menu transportation category user interface, b Menu scan QR code user interface

Furthermore, Fig. 4b in this application is the reward point feature. This feature is very favorite for teenagers to parents. Every user who travels through this application can get points and can be exchanged for promo coupons. This application is filled with promos that can be exchanged through a point; then users can spend on the marketplace. This feature is arguably the showcase owned by the application. This application offers all the promos that they have in a single section. The appearance of the user interface created on this application is effortless and can make it easier for users to use it. The criticism and suggestions are accepted to make the design of this application even better in the future. Figure 5 describes the database diagram. In this transportation service, the system also requires a database diagram to understand in detail what is needed and developed. The proposed smartphone system application builds a database model design such as the relationship between user applications and data settings to be entered, how to process relationships with others, thus making this application run well and structured. This system is an essential part of running an application because this is a detailed design of a system. Figure 5 shows that the relationship between classes in the system can be intertwined and understand the process required in each class. Unique data in the information system obtained from previous data collection becomes a reference for application managers as material for decisions to centralize

Smartphone Application Using Fintech in Jakarta …

Fig. 4 a Menu order history user interface, b Menu point reward user interface

Fig. 5 Class diagram of proposed database model design

409

410

D. T. S. Warnars et al.

or expand the business, determine, and recruit drivers or successor new services, and service development. The aim is to provide secure and fast access, selective information about key factors in carrying out the company’s strategic objectives for executive management and then provides company policies in general or policies that are intended at the level below, which will then be translated more specifically by the level below it in the information system. Using information systems is to evaluate strategies for the company’s overall improvement, such as cooperating with other firms, assessing rivals, and making strategic policies.

4 Conclusion In this analysis, the presence of futuristic communication technology amid society is very challenging, and even in terms of elections transportation to do mobility needs. Service progress-based transportation communication technology can make it easier for people to travel. Different related groups, such as drivers and other businesses, began to compete in the realization of convenient and safe transport. This is important because it will directly attract the public to use transportation services to establish comfortable and reliable transportation. User protection would be assured based on the requirement of drivers to participate in different training programs held by similar parties. This application works in conjunction with one of the digital insurance platforms. This collaboration will provide significant benefits for users because it can protect the form of guaranteed safety from accidents, criminal acts, and so on to the maximum for users ranging from pickup to arriving at the destination. In the future, human-based transportation is expected to reduce congestion, air pollution, and parking by mobilizing more people in fewer vehicles. Then, this online application-based transportation provides an alternative fulfillment of citizens’ mobility rights. Finally, a system like this can survive in the future and make Jakarta better than now.

References 1. Puschman, T.: Digitization of the financial services industry. J. Bus. Inf. Syst. 59(1), 69–76 (2017) 2. Hoon, L.S., Dong-, L.: Fintech-conversation of finance industry. J. Korea Convergence Soc. 6(3), 97–102 (2015) 3. Lee, I., Shin, Y.J.: Fintech: ecosystem, business models, investments decision and challenge. J. Bus. Horizon. 61(1), 35–46 (2018) 4. Young, E.: Mobile easy payment services in the fintech era. J. Inf. Policy 22(4), 22–24 (2015) 5. Joo, Y.J., Chung, A.K., Jung, Y.J.: An analysis of the impact of Cyber University students ‘mobile self-efficacy, mobility on intention to use in mobile learning service linked to e-learning. J. Korean Assoc. Comput. Educ. 18(1), 55–68 (2015)

Smartphone Application Using Fintech in Jakarta …

411

6. Abdillah, L.: An overview of ındonesian fintech application. In: The First International Conference on Communication, Information Technology and Youth Study (I-CITYS2019), Bayview Hotel Melaka, Melaka (Malacca), Malaysia (2019) 7. Kesumastuti, T.M.: The process of adoption interest in using digital wallet in central Jakarta (case study on go-pay users). Int. J. Multicult. Multireligious Underst. 7(2), 277–286 (2020) 8. Riveong, D.J., Rachmad, S.H.: Internet users, market target and digital trading of MSMEs in Indonesia. In: 35th IARIW General Conference, Copenhagen, Denmark (2018) 9. Stefanus, A., Hartono, M.: A framework for evaluating the performance of supply chain risk in e-commerce. In: Proceedings of the International Conference on Industrial Engineering and Operations Management Bandung, Indonesia, March 6–8, 2018 (pp. 1887–1894). IEOM Society (2018) 10. Kimura, F., Chen, L.: Value chain connectivity in Indonesia: the evolution of unbundlings. Bull. Indonesian Econ. Stud. 54(2), 165–192 (2018) 11. Tampubolon, L.P.: The increasing role of the information technology in sharing economic activities. Int. J. Progressive Sci. Technol. 17(1), 321–329 (2020) 12. Hermawan, D.: The importance of digital trust in e-commerce: between brand image and customer loyalty. Int. J. Appl. Res. Manage. Econ. 2(3), 18–30 (2019) 13. Widjojo, R.: The development of digital payment systems in Indonesia: a review of go-pay and ovo e-wallets. Econ. Altern. 3, 384–395 (2020) 14. Fernando, E., Condrobimo, A.R., Murad, D.F., Tirtamulia, L.M., Savina, G., Listyo, P.: User behavior adopt utilizing fin tech services on online transportation in Indonesia (Scale Validation and Developed Instrument). In: 2018 International Conference on Information Management and Technology (ICIMTech) (pp. 114–118). IEEE (2018) 15. Utaminingsih, K.T., Alianto, H.: The ınfluence of UTAUT model factors on the ıntension of millennials generation ın using mobile wallets ın Jakarta. In: 2020 International Conference on Information Management and Technology (ICIMTech) (pp. 488–492). IEEE (2020) 16. Salim, S., Frederica, D.: How is the ımpact of non-cash payment system on sales of micro, small and medium enterprise?. In: International Conference on Management, Accounting, and Economy (ICMAE 2020) (pp. 188–191). Atlantis Press (2020)

Secured Student Portal Using Cloud Sunanda Nalajala, Gopalam Nagasri Thanvi, Damarla Kanthi Kiran, Bhimireddy Pranitha, Tummeti Rachana, and N. Laxmi

Abstract Nowadays, a student portal is necessary for every educational institution, as insecurity is rapidly increasing. The central goal of this proposed work is to suggest a secured student data portal, i.e., to provide security for the students' and faculty's data and for the question papers of the concerned examinations. The process collects the details of the student, mainly the MAC addresses of their laptop and Wi-Fi router, together with a time-stamped image of the student while marking attendance. Attendance is marked from these devices, and the faculty can cross-check the photograph to detect any proxy attendance among the students. There are two modules: one is the attendance module, and the other is the examination module. The attendance module has sub-modules such as registration, marking attendance, faculty granting permission to a student in case of emergency, and reviewing attendance. The faculty upload the question papers for the respective examinations, students can access their relevant examinations for the day, and faculty can view the performance marked for the evaluations.

1 Introduction

Cloud infrastructure has turned into a model consisting of services provided in a way comparable to utilities like gas, water, power, and telephony. The cloud services can be hosted anywhere, and under this model users access resources according to their needs [1–3]. Most paradigms of computation have promised to deliver this vision of utility computing; cloud computing is the paradigm that currently appears closest to transforming the vision of computer utilities into reality. It starts with a riskless concept: someone else takes care of the

S. Nalajala (B) · G. N. Thanvi · D. K. Kiran · B. Pranitha · T. Rachana Department of Computer Science Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India e-mail: [email protected] N. Laxmi Department of Electronics and Communication Engineering, Guru Nanak Institute of Technology, Ibrahimpatnam, R.R, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_35


rearrangement of IT technology. The service that offers computing facilities is often referred to as infrastructure as a service, while platform as a service is where program development, maintenance, and deployment take place [4–6]. Many IT providers promise to provide storage as a service, network as a service, and applications as services. These are called subscription-oriented services and include service-level agreements with guaranteed output and uptime. Cloud providers can charge depending on the nature of the service their customers consume [7–9]. Cloud computing brings elasticity and smooth scalability to IT services delivered to end-users, and it allows companies to build and execute IT solutions more efficiently (Fig. 1).

Fig. 1 Cloud computing

Properties and Characteristics:

(a) Scalability: The property of a framework, an organization, or a process which shows its capacity either to handle growing amounts of work effortlessly or to be readily expanded.
(b) Elasticity: The capability to operate a measurable approach that provides the basis for adaptive introspection of a real-time infrastructure.
(c) Availability: The degree to which a system, subsystem, or computer is in a designated operable and committable state at the start of a mission when the mission is called for at an unknown time.
(d) Reliability: The capability of a framework or component to perform its required functions under stated conditions for a specified period of time.
(e) Manageability: The management of cloud computing services. The management of networks is strongly informed by telecommunications network management initiatives.
(f) Interoperability: The property of a product or system whose interfaces are fully understood, allowing it to work with other products or systems, current or future, without any restriction on access or deployment.

(g) Accessibility: A general concept describing the degree to which as many individuals as possible can access a product, system, service, or environment.
(h) Service Portability: Using services from any device, anywhere, continuously, with mobility support and dynamic adaptation to resource variations.
(i) Performance and Optimization: Application efficiency should be assured given the great computing power of the cloud. Cloud providers use powerful infrastructure and other underlying tools to create a highly efficient and highly optimized system and then provide cloud customers with complete services (Fig. 2).

Fig. 2 Properties and characteristics

A. Confidentiality
Data will be available only to authorized parties, ensuring that only users who have permission can access it.

B. Integrity
Data will be protected from unauthorized changes, ensuring that it remains exact, valid, and correct. Integrity governs how data is
1. stored,
2. processed, and
3. retrieved.

Fig. 3 CIA data protection

C. Availability
Only the authorized users can access the systems and resources they need (Fig. 3).

1.1 DES Algorithm

The Data Encryption Standard (DES) is a symmetric-key block cipher created by an IBM team. It takes plaintext in 64-bit blocks and transforms it into ciphertext using a 56-bit key, from which 48-bit round keys are derived [10–12]. As a symmetric-key algorithm, it uses a single key for both encryption and decryption, whereas asymmetric algorithms use separate keys for encryption and decryption. The algorithm protects data by dividing plaintext into blocks and using keys to convert them to ciphertext [13]. The drawback of this algorithm is that its small key space allows it to be broken by brute-force search; using 3DES mitigates this problem at the expense of increased execution time. DES is also vulnerable to attacks based on linear cryptanalysis [14, 15].

1.1.1 Enhanced Form of DES: Blowfish Algorithm

Blowfish is a cipher designed by Bruce Schneier. The algorithm is faster than the DES algorithm and provides a good encryption rate. It was one of the first secure block ciphers and is freely available for anyone to use.

Block size: 64 bits.
Key size: 32-bit to 448-bit variable size.
Sub-keys: 18 [P-array].


Number of rounds: 16.
Number of substitution boxes: 4 [each having 256 entries of 32 bits].

2 Literature Survey

2.1 Automatic Attendance Recording System Using Mobile Telephone

Organizations manually take attendance either by calling names or by passing around a signature sheet for students to validate their attendance. This system is designed to be paperless and simple, marking attendance using the teacher's mobile phone. An application installed on the educator's smartphone scans for the students' mobile telephones via a Bluetooth link and passes the media access control (MAC) addresses of the students' phones to the teacher's mobile, so each student's presence can be checked.

2.2 Data Encryption Performance Based on Blowfish

Encryption masks data while it is transferred or stored, i.e., by converting the information from plaintext to ciphertext using an algorithm keyed on a password. Decryption uses the same algorithm and key to convert the ciphertext back to plaintext. The blowfish algorithm consists of mathematical functions and operations suited to real-time mechanisms that require high-speed processing and small-size hardware. The original algorithm is fast and straightforward, and it offers several benefits. It uses simple operators such as XOR, addition, and table lookup. The tables consist of four (256 × 32-bit) S-boxes and a P-array (18 × 32-bit entries) used to execute the encryption process, and the key is a particular number that the algorithm uses. The blowfish algorithm is a cipher based on Feistel rounds, and the design of the round function is a simplification of the principles used in DES, providing higher machine speed for the same protection.

2.3 Security Analysis of Blowfish Algorithm

This algorithm is known as a symmetric block cipher with a block size of 64 bits and variable key lengths from 32 to 448 bits. The security analysis of blowfish presented in this article uses the avalanche criterion and the coefficient of correlation. In a previous document entitled “Randomness Analysis on


Blowfish Block Cipher using CBC and ECB Modes,” the randomness of the blowfish output was studied. The avalanche effect is a desirable property of every encryption algorithm: a one-bit change in the plaintext or the key should change at least half of the bits in the ciphertext. This makes it much harder to analyze the ciphertext when an attempt is made to launch an attack. The correlation coefficient is known as one of the most critical aspects of block cipher security; it captures the dependence of the individual output bits on the input bits and characterizes how much two quantities influence one another, for example, whether one of them depends upon the other.
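As an illustration of how the avalanche criterion can be measured, the minimal Python sketch below flips a single plaintext bit and counts the fraction of ciphertext bits that change. Here `encrypt` is a placeholder for any 64-bit block cipher under test, and the toy mixer used in the demonstration is an assumption for illustration only, not a real cipher and not taken from the cited study.

```python
# Measuring the avalanche criterion: flip one input bit, encrypt both inputs,
# and count the fraction of ciphertext bits that differ. A strong cipher
# should yield a fraction near 0.5 for any flipped bit.

def hamming_fraction(a: int, b: int, width: int = 64) -> float:
    # Fraction of differing bits between two `width`-bit values.
    return bin((a ^ b) & ((1 << width) - 1)).count("1") / width

def avalanche(encrypt, plaintext: int, bit: int) -> float:
    flipped = plaintext ^ (1 << bit)  # flip a single plaintext bit
    return hamming_fraction(encrypt(plaintext), encrypt(flipped))

# Toy demonstration with a (non-cryptographic) 64-bit mixer as a stand-in:
toy = lambda x: (x * 0x9E3779B97F4A7C15) & ((1 << 64) - 1)
print(avalanche(toy, 0x0123456789ABCDEF, 7))
```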

2.4 Transactions on Cloud Computing

Cloud infrastructure has now been turned into a model of facilities that can be commoditized and distributed in the same manner as gas, water, power, and telephony utilities. In this model, consumers access the services according to their requirements, regardless of where the services are hosted. Most computing paradigms have promised to deliver this vision of utility computing. One of the most recent paradigms is cloud computing, which aims to transform the perception of “computer utilities” into reality. Cloud computing also began with a riskless concept: someone else takes control of the IT technology setup and encourages end-users to tap into it, paying only for what is used. As demonstrated by its acceptance for the development and implementation of groundbreaking technologies in numerous realms, including science, consumer applications, social networks, health care, industry, finance, government, and big data, the computing paradigm is steadily advancing. Many trade magazines have been strongly engaged with this technological development. In collaboration with other societies, namely the IEEE Networking Society, the IEEE Networks Committee, the IEEE Power & Energy Society, and the IEEE Consumer Electronics Society, the new transactions are handled by the IEEE Computer Society.

3 Theoretical Analysis

This basic algorithm, adopted by NIST, is a symmetric-key block cipher developed by an IBM team. The algorithm takes plaintext and transforms it into ciphertext. It is a symmetric-key algorithm, using the same key for data encryption and decryption; asymmetric algorithms would use separate keys for encryption and decryption. The algorithm is based on a block cipher called LUCIFER by IBM cryptography researcher Horst Feistel. The DES algorithm uses 16 rounds of the Feistel structure and a separate round key for each round (Fig. 4).


Fig. 4 Encryption

Steps for implementing this algorithm are:

1. The process starts with the 64-bit plaintext block being submitted to an initial permutation (IP) function.
2. The initial permutation (IP) is performed on the plaintext.
3. The initial permutation produces two halves of the permuted block: the left plaintext (LPT) and the right plaintext (RPT).
4. Both the left plaintext and the right plaintext pass through 16 rounds of the encryption process.
5. Finally, the LPT and RPT are rejoined, and a final permutation (FP) is performed on the recombined block.
6. The result of this process is the intended 64-bit ciphertext.

The encryption process is further divided into five steps (Fig. 5):

Step 1: Key Transformation. The initial 64-bit key is reduced to a 56-bit key by discarding every 8th bit of the initial key. Thus, a 56-bit key is available for every round. From this 56-bit key, a 48-bit sub-key is generated during every round using a process known as "key transformation." The 56-bit key is partitioned into two 28-bit halves, and the two halves are circularly shifted left by one or two positions, depending on the round.

Step 2: Expansion Permutation. After the initial permutation, the two 32-bit plaintext halves, left plaintext and right plaintext, are used. In this step, the right plaintext is expanded from 32 to 48 bits, and bits are also transposed; hence the name expansion permutation. The 32-bit RPT is partitioned into 8 blocks of 4 bits each, and each 4-bit block is enlarged to a corresponding 6-bit block, i.e., the 4-bit block plus 2 additional bits.

Steps 3, 4, and 5. This process results in the expansion as well as the permutation of the input bits while creating the output. The 56-bit key is compressed into a 48-bit round key, and the expansion permutation expands the 32-bit RPT to


48 bits. Now the 48-bit round key is XORed with the 48-bit RPT, and the resulting output is forwarded to the next step (Fig. 6).

Fig. 5 Steps in DES
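As an illustration of the key transformation in Step 1, the following minimal Python sketch performs the per-round circular left shifts on the two 28-bit key halves. The shift schedule shown is the standard DES one; the PC-1 and PC-2 permutations that select and compress the key bits are omitted for brevity, so this is a sketch of the rotation step only, not a full DES key schedule.

```python
# DES key transformation (rotation step only): the 56-bit key is split into two
# 28-bit halves, each circularly left-shifted by 1 or 2 positions per round.

SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]  # standard per-round shifts

def rotate_left_28(half: int, n: int) -> int:
    # Circular left shift within a 28-bit word.
    return ((half << n) | (half >> (28 - n))) & 0x0FFFFFFF

def round_halves(key56: int):
    # Yield the (left, right) 28-bit halves for each of the 16 rounds.
    c, d = (key56 >> 28) & 0x0FFFFFFF, key56 & 0x0FFFFFFF
    for s in SHIFTS:
        c, d = rotate_left_28(c, s), rotate_left_28(d, s)
        yield c, d

halves = list(round_halves(0x0123456789ABCD))  # any 56-bit key value
print(len(halves))  # 16 rounds
```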

3.1 Possible Attacks on DES

1. Brute Force Attack: The size of the key determines the number of possible keys. For DES, questions about the sufficiency of its key size were raised early on, and it was the small key size, rather than theoretical cryptanalysis, that dictated the need for a replacement algorithm even before it was adopted as a standard.

2. Cycle Attack: This attack is similar to the previous one; it counts the iterations of repeated encryption until the original text reappears. The attack is unlikely to succeed in practice, as it is very slow and does not work for a large modulus.

3.2 Enhanced Blowfish Algorithm

Blowfish is a cipher designed by Bruce Schneier. It is significantly faster than DES and provides a better encryption rate. It is one of the first secure block ciphers and is not subject to any patents.


Fig. 6 Encryption process

1. Block Size: 64 bits.
2. Key Size: 32-bit to 448-bit variable size.
3. Sub-keys: 18 [P-array].
4. Number of rounds: 16.
5. Number of substitution boxes: 4 [256 entries of 32 bits each].

3.3 Working

C.1 Encryption.

Step 1: Generation of sub-keys. Eighteen sub-keys (P[0], P[1], …, P[17]) are used in both the encryption and decryption processes, and the same sub-keys serve both. The eighteen sub-keys are stored in a P-array, with every array element being a 32-bit entry.


The P-array is initialized with the digits of π.

Step 2: Initialize Substitution Boxes. Four substitution boxes {S[0], …, S[3]} are used in both the encryption and decryption processes, each S-box having 256 entries {S[i][0], …, S[i][255], 0 ≤ i ≤ 3}, where each entry is 32 bits. They are initialized with the digits of π after the P-array has been initialized.

Step 3: Encryption. The elements of the encryption function are:

Number of Rounds: The encryption process consists of a total of 16 rounds; each round (Ri) takes inputs from the plaintext, and a corresponding sub-key is applied in each round.

C.2 Decryption. This process is the same as the encryption process, except that the sub-keys are used in reverse order {P[17]–P[0]} (Fig. 7).

3.4 Pseudocode of Blowfish Algorithm

Begin
  The algorithm has 16 rounds.
  Input: a 64-bit data element x.
  Divide x into two 32-bit halves: XL, XR.
  For i = 1 to 16:
    XL = XL XOR P[i]
    XR = F(XL) XOR XR
    Swap XL and XR
  After the sixteenth round, swap XL and XR again to undo the final exchange.
  Then XR = XR XOR P17 and XL = XL XOR P18.
  Finally, recombine XL and XR to obtain the ciphertext.
End

Decryption is the same as encryption, except that P1, P2, …, P18 are used in reverse order. A Blowfish implementation that requires speed should unroll the loop (Figs. 8 and 9).
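The following is a minimal, illustrative Python sketch of the 16-round Feistel flow described in the pseudocode above. The P-array and S-box contents here are placeholders (a real Blowfish implementation initializes them from the hexadecimal digits of π and then rekeys them with the user key), so this is a structural sketch, not a usable cipher. Note that the pseudocode's 1-indexed P1–P18 correspond to P[0]–P[17] below.

```python
# Structural sketch of Blowfish encryption: round function F and 16 Feistel rounds.

P = list(range(18))                               # placeholder 18-entry P-array
S = [list(range(256)) for _ in range(4)]          # placeholder 4 x 256 S-boxes
MASK32 = 0xFFFFFFFF

def F(x: int) -> int:
    # Blowfish round function: split the 32-bit input into four bytes and
    # combine S-box lookups with addition mod 2^32 and XOR.
    a, b, c, d = (x >> 24) & 0xFF, (x >> 16) & 0xFF, (x >> 8) & 0xFF, x & 0xFF
    return ((((S[0][a] + S[1][b]) & MASK32) ^ S[2][c]) + S[3][d]) & MASK32

def encrypt_block(xl: int, xr: int):
    # 16 Feistel rounds, then undo the final swap and whiten with P[16], P[17].
    for i in range(16):
        xl = (xl ^ P[i]) & MASK32
        xr = F(xl) ^ xr
        xl, xr = xr, xl
    xl, xr = xr, xl               # undo the last swap
    xr = (xr ^ P[16]) & MASK32
    xl = (xl ^ P[17]) & MASK32
    return xl, xr

print(encrypt_block(0x01234567, 0x89ABCDEF))
```

Decryption reuses the same structure with the P-array entries applied in reverse order, as stated in the pseudocode.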

4 Experimental Analysis See Figs. 10, 11, 12, 13, 14, 15, 16, 17, 18, and 19.

Fig. 7 Blowfish working

Fig. 8 Encryption/decryption


Fig. 9 Process flow

5 Conclusion

With the proposed system, question paper leakage and the proxy attendance rate can be reduced, the data is protected, and all question papers are handled virtually. The application thus created helps to prevent the possibility of a proxy. It provides reliability, time savings, and simple monitoring, and the suggested scheme saves manpower. The data can only be accessed by an authenticated and


Fig. 10 Homepage for student

Fig. 11 Student dashboard

registered person. The proposed system conducts its experiments in a controlled way, and the students' data remains safe. In the future, the project could be expanded, introduced on the intranet, and revised further, because it is very flexible in terms of growth. With the proposed project, the client is now able to plan and do the whole job in a much easier, more precise, and error-free way.


Fig. 12 Registration of student

Fig. 13 Encrypted file


Fig. 14 Faculty dashboard

Fig. 15 Attendance request


Fig. 16 Question paper uploading

Fig. 17 Attendance marking


Fig. 18 Request raised

Fig. 19 To raise request

References 1. Mahmod, R.: Security analysis of blowfish algorithm. IEEE (2013) 2. Nalajala, S., et al.: Light weight secure data sharing scheme for mobile cloud computing. In: 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC). IEEE (2019) 3. Jamil, T.: Automatic attendance recording system using mobile telephone. IEEE (2011)


4. Sunanda, N., Sriyuktha, N., Sankar, P.S.: Revocable identity based encryption for secure data storage in cloud. Int. J. Innov. Technol. Exploring Eng. 8(7), 678–682 5. Mousa, A.: Data encryption performance based on blowfish. IEEE 6. Bin, C.: Research on applications and security of cloud computing. IEEE (2019) 7. Buyya, R.: Transactions on cloud computing. IEEE (2013) 8. Namasudra, S.: Cloud computing: fundamentals and research issues. IEEE (2017) 9. Alhenaki, L.: A survey on the security of cloud computing. IEEE (2019) 10. Puri, G.S.: A review on cloud computing. IEEE (2019) 11. Khan, Md. A.: Attendance management system. IEEE (2015) 12. Patidar, S.: A survey paper on cloud computing. IEEE (2015) 13. Meghana, T., Nalajala, S., Kumar, M., Jagadeesh: Privacy preserving using PUP-RUP model. In: International Conference on Intelligent Sustainable Systems (ICISS 2019). IEEE (2019) 14. Nalajala, S., Pratyusha, Ch., Meghana, A., Phani Meghana, B.: Data security using multi prime RSA in cloud. Int. J. Recent Technol. Eng. 7(6S4) (2019). ISSN: 2277-3878 15. Nalajala, S., et al.: Data security in cloud computing using three-factor authentication. In: International Conference on Communication, Computing and Electronics Systems. Springer, Singapore (2020) 16. Gudapati, S.P., Gaikwad, V.: Light-weight key establishment mechanism for secure communication between IoT devices and cloud. In: Intelligent System Design (pp. 549–563). Springer, Singapore (2021) 17. Kavitha, M., Krishna, P.V.: IoT-cloud-based health care system framework to detect breast abnormality. In: Emerging Research in Data Engineering Systems and Computer Communications (pp. 615–625). Springer, Singapore (2020)

Expert System for Determining Welding Wire Specification Using Naïve Bayes Classifier Didin Silahudin, Leonel Leslie Heny Spits Warnars, and Harco Leslie Hendric Spits Warnars

Abstract Many programming languages can help work in the industrial world; one of them is an Android-based application that can help solve problems such as decision support or determination processes that benefit the company. In this case, the authors mine data from company engineers to find out how to determine the specifications of welding wires for welding materials, covering the GTAW, FACW, GMAW, SMAW, and SAW processes. Decision support systems can assist users in doing their jobs properly and accurately. This decision support system was created, and the existing data were analyzed, using the Naïve Bayes Classifier method, which classifies the data on welding materials so that the resulting pattern can be used to obtain the welding wire to be used for welding materials. The system was designed with a use case diagram and was built with the Java programming language for Android and a MySQL database. The implementation phase was carried out using black-box testing, asking engineers to determine the specifications of welding wires for welding materials in the steel industry. The conclusion of this research is that the Naïve Bayes Classifier system can be used to identify welding wires for welding materials, optimally and accurately. Keywords Decision support systems · Expert systems · Naive Bayes classifier · Information systems

1 Introduction Welding wires are thin metal rods that are ignited to produce a heated arc to combine pieces of metal (welding) for manufacturing soft cables using a hammer or packing D. Silahudin · L. L. H. S. Warnars Architecture Department, Faculty of Engineering, Bina Nusantara University, Jakarta 11480, Indonesia H. L. H. S. Warnars (B) BINUS Graduate Program, Computer Science Department, Bina Nusantara University, Jakarta 11480, Indonesia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_36


using a high heat source. Welding wire can be found and purchased in various models and types; it is commonly used for various welding operations, and each application requires welding wire that is suitable for it, with the metal material to be welded matched to the welding wire used [1]. Welding materials provide the filler needed to make weld beads. Each welding material is determined by its name, the metal used, and optional attributes such as the diameter and length that you specify. Welding material is represented in the Model Tree by the welding material parameter stored with the model [2]. The development of technology in the increasingly advanced construction field cannot be separated from welding, because welding has an important role in metal engineering and repair [3]. The process of evaluating the selection of welding wire specifications for material welding still does not use standard mathematical calculations. In the absence of mathematical calculations, an engineer will find it difficult to determine the specifications of welding wires for welding materials; with mathematical calculations, the evaluation process makes it easier for engineers to determine the specifications of welding wires according to their qualifications [4]. A simple way to determine the qualifications used by welding wire specifications for material welding is to look at the process in daily life. However, it is not easy to do manually; a method is needed to solve the problem. In this study, these processes are defined as an application with the Naïve Bayes Classifier algorithm, so that the requirements of welding wires for material welding can be established more systematically, in a coordinated manner, and reliably [5].

2 Research Method

Figure 1 shows how the research methodology is carried out, with preliminary studies using user-requirement methods such as literature review, interview, observation, and data collection [6]. By finding and reading some current articles relevant to our

Fig. 1 Research methodology


study definition, the literature review is carried out. High-level management and workers are interviewed about their current issues with welding wire specification for material welding in the steel manufacturing industry [7]. The steel manufacturing industry is then observed, with attention to the business process, in order to find the real issue. Data collection is also used to find out what kind of data, including the kind of welding wire, is used in the current steel manufacturing sector [8]. Later, an analysis of the current system in the steel fabrication industry is made using a use case business analysis that draws the current situation, where the data from the system, including the people there, is gathered using user-requirement tools such as interviews and observation [9]. Data gathering is used to recognize the current problem as knowledge acquisition. The system is designed using a use case diagram and a class diagram: the use case diagram shows the proposed model design, whilst the class diagram shows the relations among all the data used in the proposed system [10, 11]. Next, the expert system (ES), recognized as a decision support system (DSS), is implemented using a Naïve Bayes classifier (NBC), with a mobile application as the user interface (UI), as can be seen in the last part of this article [12]. Finally, the testing step is handled by distributing a questionnaire to all high-level management in the steel fabrication industry as a user acceptance test, in order to find out how high-level management receives the proposed model [13, 14].

The Naïve Bayes Classifier (NBC) is recognized as a powerful method applied to supervised learning to help people with decision making, prediction, diagnosis, interpretation, and so on. It is a probabilistic classifier based on Bayes' theorem, combined with the term "Naïve" because each attribute is assumed to be independent [15]. NBC works robustly with data with different characteristics, including outliers, and NBC deals with incorrect attributes by ignoring them during training [16, 17]. The attributes of the NBC method are listed below:

X        Data with an unknown class
H        Hypothesis that the data belongs to a specific class
P(H|X)   Probability of hypothesis H given condition X, recognized as the posterior probability
P(H)     Probability of hypothesis H, recognized as the prior probability
P(X|H)   Probability of condition X given hypothesis H, the converse of P(H|X)
P(X)     Probability of condition X

Next, the equations of NBC are [7, 8]:

P(H|X) = P(X|H) · P(H) / P(X)                (1)

Equation (2) is the NBC estimator:

P(ai|vj) = (nc + m · p) / (n + m)            (2)


where nc is the number of items in the training data for which v = vj and a = ai; p = 1/(number of distinct classes); m is the number of attributes; and n is the number of tuples in the training data. Moreover, Eq. (2) is used as follows [9]:

(1) Determine the value nc for every class.
(2) Calculate the scores P(ai|vj) and P(vj), which enter Eq. (3):

    VMAP = arg max_{vj ∈ V} P(vj) ∏ P(ai|vj)    (3)

(3) Using Eq. (4), calculate P(ai|vj) × P(vj) for each class v:

    P(ai|vj) = (nc + m · p) / (n + m)           (4)

(4) The class v with the greatest multiplication result is taken as the classification output of the NBC method.
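As an illustration, the minimal Python sketch below implements the scoring defined by Eqs. (2) and (3). It follows Eq. (2) exactly as written, (nc + m·p)/(n + m); the class names and per-attribute counts are placeholders for illustration, not the paper's dataset or its exact intermediate values.

```python
# NBC scoring per Eqs. (2)-(3): each class score is the prior P(vj) times the
# product of the smoothed conditional estimates (nc + m*p)/(n + m).

from math import prod

def nbc_score(prior, counts, n, m, p):
    # counts: the nc value for each attribute under the candidate class.
    return prior * prod((nc + m * p) / (n + m) for nc in counts)

def classify(class_stats, n, m, p):
    # class_stats: {class_name: (prior, [nc per attribute])}
    # Returns the class with the greatest score (v_MAP in Eq. (3)).
    return max(class_stats,
               key=lambda c: nbc_score(class_stats[c][0], class_stats[c][1], n, m, p))

# Placeholder example: 5 classes (p = 1/5), 21 brands (m = 21), n = 1.
stats = {"GTAW": (0.2, [0, 0, 1, 0, 1, 0]),
         "GMAW": (0.2, [1, 0, 0, 0, 0, 0])}
print(classify(stats, n=1, m=21, p=0.2))  # -> "GTAW" for these placeholder counts
```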

3 Result and Discussion

The classification using the NBC method for the specification of welding wires in the steel fabrication industry can function optimally and validly. Tables 1, 2, and 3 are the result of the preliminary and data collection steps: Table 1 identifies the 5 types of welding wire processes, namely GTAW, FACW, GMAW, SMAW, and SAW. Moreover, the brands for each process in Table 2 are adopted from welding wire specifications for material welding, so that each process facilitates the search among the brands provided; there are 21 brands in this case, with brand codes B1–B21. Moreover, Table 3 states the rules of the existing system over the 5 processes and 21 brands. There are eight rules, R1–R8, applied with a rule-based system (RBS), where the IF column is a condition on a combination of the brands in Table 2 whilst the THEN column answers each condition with a process code from Table 1, and the rules established can be used for the construction of information path welder models [12].

Table 1 The selection of welding wire process, numbering (#), and process code

#    Process code    Process
H1   P1              GTAW
H2   P2              FACW
H3   P3              GMAW
H4   P4              SMAW
H5   P5              SAW


Table 2 Brand coding classification

No.  Brand code  Brands
1    B1          DW-Z100, CHT711, Dual Shield 7100, Weld 71 T-1
2    B2          MG-50 T, CHW-50C6SM, NSSW FGC-55
3    B3          TIGROD 12.64, TG-S50, NSSW FGC-55, Esab Weld 70S-6, TIGROD 12.64
4    B4          US-49/G-80, US-36/G55
5    B5          BL-76, CHE-58-1, LB-52
6    B6          DW-81B2, DW-81B2C
7    B7          MG-1CM
8    B8          TG-S1CML, TG-S1CM
9    B9          US-511/G-80, US-511/G-55
10   B10         CM-A96
11   B11         DW-91B3A, DW-91B3C
12   B12         TG-S2CML, TG-S2CM
13   B13         US-521/G-80
14   B14         DW-309, GFW-316L, K-316LT, GFW-309, GFW-316L
15   B15         TG-S308, WEL TIG 316L, WEL TIG 308, WEL TIG 309MoL, TG-S309
16   B16         GFW-82, GFW-Hs C276
17   B17         Sanicro 72HP, WEL TIG HC-4, WEL TIG 82, TECHALLOY 276
18   B18         Thermanit MTS3, TG-S9Cb
19   B19         Thermanit MTS3 / Marathon 543, US-9Cb/PF200S
20   B20         CM-A106
21   B21         CM-95B91, CM-9Cb

Table 3 The rule-based systems

#RULE   IF (Condition)            THEN
R1      B3, B8, B12, B17, B18     P1
R2      B3, B8, B12, B17, B16     P1
R3      B1, B6, B11, B14, B16     P2
R4      B1, B6, B11, B15          P2
R5      B2, B7                    P3
R6      B5, B10, B20, B21         P4
R7      B5, B10, B19, B20         P4
R8      B4, B9, B13, B19          P5
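As an illustration, the rules of Table 3 can be encoded directly as a small lookup: each rule maps its IF set of brand codes to its THEN process code. The Python sketch below shows this rule-based matching under the simple assumption that a query fires a rule when the selected brands equal the rule's condition set; it is an illustrative sketch, not the paper's implementation.

```python
# Rule-based system from Table 3: IF (set of brand codes) -> THEN (process code).

RULES = [
    ({"B3", "B8", "B12", "B17", "B18"}, "P1"),
    ({"B3", "B8", "B12", "B17", "B16"}, "P1"),
    ({"B1", "B6", "B11", "B14", "B16"}, "P2"),
    ({"B1", "B6", "B11", "B15"}, "P2"),
    ({"B2", "B7"}, "P3"),
    ({"B5", "B10", "B20", "B21"}, "P4"),
    ({"B5", "B10", "B19", "B20"}, "P4"),
    ({"B4", "B9", "B13", "B19"}, "P5"),
]

def match_process(selected_brands):
    # Return the THEN process code of the first rule whose condition matches.
    chosen = set(selected_brands)
    for condition, process in RULES:
        if chosen == condition:
            return process
    return None  # no rule fires

print(match_process(["B2", "B7"]))  # -> "P3"
```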

The five processes P1, P2, P3, P4, and P5 in Table 1 are GTAW, FACW, GMAW, SMAW, and SAW, respectively. The three tables above were applied in order to process with the NBC algorithm, and the accuracy of the system's calculation with the NBC method was verified manually. An example calculation using the NBC classification can be applied to the brands (B2) = MG-50 T, CHW-50C6SM, NSSW FGC-55; (B4) = US-49/G-80, US-36/G55; (B8) = TG-S1CML, TG-S1CM; (B11) = DW-91B3A, DW-91B3C; (B17) = Sanicro 72HP, WEL TIG HC-4, WEL TIG 82, TECHALLOY 276; and (B20) = CM-A106. The stages of the NBC process are:

1. Determination of the nc score for every class. If a brand is part of a process class, then nc takes the value 1; if not, the value 0. Known:

(a) Process 1 (GTAW): brand value per class (n) = 1; brand score shared across the process classes (p) = 1/5 = 0.200; total brands (m) = 21; B2 = 0, B4 = 0, B8 = 1, B11 = 0, B17 = 1, B20 = 0.
(b) Process 2 (FACW): n = 1; p = 1/5 = 0.200; m = 21; B2 = 0, B4 = 0, B8 = 0, B11 = 1, B17 = 0, B20 = 0.
(c) Process 3 (GMAW): n = 1; p = 1/5 = 0.200; m = 21; B2 = 1, B4 = 0, B8 = 0, B11 = 0, B17 = 0, B20 = 0.
(d) Process 4 (SMAW): n = 1; p = 1/5 = 0.200; m = 21; B2 = 0, B4 = 0, B8 = 0, B11 = 0, B17 = 0, B20 = 1.
(e) Process 5 (SAW): n = 1; p = 1/5 = 0.200; m = 21; B2 = 0, B4 = 1, B8 = 0, B11 = 0, B17 = 0, B20 = 0.

2. Calculate the value of P(ai|vj) and the score of P(vj). In this step, the i-th brand probability score for the j-th process is calculated, starting with the process classes GTAW, FACW, GMAW, SMAW, and SAW. The variable (P1) denotes the GTAW category, for which the probabilities P(ai|vj) and P(vj) are computed:

(a) Process 1 (GTAW): (B2|p1) = (0 + 21 + 0.2)/(1 + 21) = 0.955; (B4|p1) = 0.955; (B8|p1) = (1 + 21 + 0.2)/(1 + 21) = 1.009; (B11|p1) = 0.955; (B17|p1) = 1.009; (B20|p1) = 0.955; P(p1) = 1/5 = 0.200.
(b) Process 2 (FACW): (B2|p2) = 0.955; (B4|p2) = 0.955; (B8|p2) = 0.955; (B11|p2) = 1.009; (B17|p2) = 0.955; (B20|p2) = 0.955; P(p2) = 1/5 = 0.200.
(c) Process 3 (GMAW): (B2|p3) = 1.009; (B4|p3) = 0.955; (B8|p3) = 0.955; (B11|p3) = 0.955; (B17|p3) = 0.955; (B20|p3) = 0.955; P(p3) = 1/5 = 0.200.
(d) Process 4 (SMAW): (B2|p4) = 0.955; (B4|p4) = 0.955; (B8|p4) = 0.955; (B11|p4) = 0.955; (B17|p4) = 0.955; (B20|p4) = 1.009; P(p4) = 1/5 = 0.200.
(e) Process 5 (SAW): (B2|p5) = 0.955; (B4|p5) = 1.009; (B8|p5) = 0.955; (B11|p5) = 0.955; (B17|p5) = 0.955; (B20|p5) = 0.955; P(p5) = 1/5 = 0.200.

3. Calculation of P(ai|vj) × P(vj) for every class v:

(a) Process 1 (GTAW) = P(P1) × P(B2|P1) × P(B4|P1) × P(B8|P1) × P(B11|P1) × P(B17|P1) × P(B20|P1) = 0.200 × 0.955 × 0.955 × 1.009 × 0.955 × 1.009 × 0.955 = 0.1773
(b) Process 2 (FACW) = 0.200 × 0.955 × 0.955 × 0.955 × 1.009 × 0.955 × 0.955 = 0.1603
(c) Process 3 (GMAW) = 0.200 × 1.009 × 0.955 × 0.955 × 0.955 × 0.955 × 0.955 = 0.1603
(d) Process 4 (SMAW) = 0.200 × 0.955 × 0.955 × 0.955 × 0.955 × 0.955 × 1.009 = 0.1603
(e) Process 5 (SAW) = 0.200 × 0.955 × 1.009 × 0.955 × 0.955 × 0.955 × 0.955 = 0.1603

4. Determination of v as the classification result with the greatest multiplication value.

Table 4 shows the v score values that determine the classification using Eq. (3), where the Process column gives the process number (H1–H5) and process code (P1–P5) for the processes GTAW, FACW, GMAW, SMAW, and SAW, respectively. The largest v score, 0.1773, identifies the GTAW process as the classification result.

Table 4 The highest score value for result classification

Process          V score
GTAW (H1)(P1)    0.1773
FACW (H2)(P2)    0.1603
GMAW (H3)(P3)    0.1603
SMAW (H4)(P4)    0.1603
SAW (H5)(P5)     0.1603


4 Design and Implementation

In the design process, the use case diagram shown in Fig. 2 was made to understand the communication between the actors and the system. The actors involved in this expert system are Members (engineers who have registered), who must register before using the application. As seen in the use case diagram in Fig. 2, there are 3 steps for the engineers: first, choosing the criteria to input for the process; second, running the NBC algorithm to obtain its result; and finally, viewing the diagnosis result. Using the Java Android programming language supported by a MySQL database, the implementation of this study transforms the diagrams above into expert system software. The proposed Decision Support System (DSS), also recognized as an Expert System (ES), is able to decide the specifications of the welding wire for welding materials online through engineering consultation with the ES. The following illustrates the expert system interface that has been used by

Fig. 2 Use case diagram of the proposed model


Fig. 3 a User interface of menu form registration, b User interface of menu form login

engineers to conduct consultations. Figure 3a shows the User Interface (UI) of the registration form, where a new user enters data such as name, email, phone number, address, gender, and password; the email will be used as the username for the login process. Meanwhile, Fig. 3b shows the UI of the login form, where a user who wants to use the application enters their registered email address as the username together with the registered password. The system checks the email address and password against the database populated when the user first registered. If the system cannot find a matching email address and/or password, it denies the user entry to the system; the user can then click the "forgot username or password" link, after which another menu appears asking for the registered email address. The system verifies that the entered email address is registered, a confirmation containing a password-renewal link is sent to that address, and the user is asked to renew their password, which is then stored in the user's database record.

Figure 4a shows the UI of the Naive Bayes Classifier menu: from this menu, the user can click the Exit button to return to the main menu or click the Bayes button, which leads to Fig. 4b, the UI of the Naive Bayes Classifier running process, where the user can enter the brands they want to process for determining the specification of the welding wire for welding the material. Figure 5 shows the UI of the Naive Bayes Classifier diagnosis result, i.e., the result after the user selects some criteria as shown in Table 2.


Fig. 4 a User interface of menu Naive Bayes classifier, b user interface of Naive Bayes classifier running process

5 Conclusion

This article describes how the NBC algorithm was used to determine the requirements of welding wires for material welding in the steel fabrication industry. Such expert systems, sometimes recognized as Decision Support Systems (DSS), may assist engineers, especially in a company, in the brand selection process for a project. The proposed implementation provides data and information on welding wire specifications for material welding in the steel fabrication industry, and it can help engineers obtain data more systematically, in a more organized fashion, and more accurately. In tests performed by engineers, the expert system's welding wire specifications for welding materials were obtained on the basis of information from each process and its different brands.


Fig. 5 User interface of Naive Bayes classifier diagnosis result

References 1. Piao, Z., Zhu, L., Wang, X., Liu, Z., Jin, H., Zhang, X., Wang, Q., Kong, C.: Exploitation of mold flux for the Ti-bearing welding wire steel ER80-G. High Temp. Mater. Process. (London) 38(2019), 873–883 (2019)


2. Yang, X., Hiltunen, E., Kah, P.: New nano-coated welding wire for ultra-high-strength steel (S960QC) and MAG robotized welding in arctic offshore construction. In: The 27th International Ocean and Polar Engineering Conference. International Society of Offshore and Polar Engineers (Jul 2017) 3. Ren, D.L., Xiao, F.R., Tian, P., Wang, X., Liao, B.: Effects of welding wire composition and welding process on the weld metal toughness of submerged arc welded pipeline steel. Int. J. Miner. Metall. Mater. 16(1), 65–70 (2009) 4. Baumann, F.W., Sekulla, A., Hassler, M., Himpel, B., Pfeil, M.: Trends of machine learning in additive manufacturing. Int. J. Rapid Manuf. 7(4), 310–336 (2018) 5. Chen, Y., Fang, C., Yang, Z., Wang, J., Xu, G., Gu, X.: Cable-type welding wire arc welding. Int. J. Adv. Manuf. Technol. 94(1), 835–844 (2018) 6. Chen, J., Zhang, D., Zhou, W., Chen, Z., Li, H.: Uneven spatial distribution of fatigue cracks on steel box-girder bridges: a data-driven approach based on Bayesian networks. Struct. Infrastruct. Eng. 1–12 (2020) 7. Zhang, L., Li, B., Ye, J.: Power supply and its expert system for cold welding of aluminum and magnesium sheet metal. In: International Conference on Intelligent Computing, pp. 795–804. Springer, Cham (Aug 2019) 8. Zhang, K., Chen, Y., Zheng, J., Huang, J., Tang, X.: Adaptive filling modeling of butt joints using genetic algorithm and neural network for laser welding with filler wire. J. Manuf. Process. 30, 553–561 (2017) 9. Zhou, P., Zhou, G., Wang, H., Wang, D., He, Z.: Automatic detection of industrial wire rope surface damage using deep learning-based visual perception technology. IEEE Trans. Instrum. Meas. 70, 1–11 (2020) 10. Wang, Z.: Design and simulation of a welding wire feeding control system based on genetic algorithm. In: International Conference on Big Data Analytics for Cyber-Physical-Systems, pp. 1756–1760. Springer, Singapore (Dec 2020) 11. Wang, B., Hu, S.J., Sun, L., Freiheit, T.: Intelligent welding system technologies: state-of-the-art review and perspectives. J. Manuf. Syst. 56, 373–391 (2020) 12. Lan, H., Zhang, H., Fu, J., Gao, L., Pan, R.: Intelligent welding technology for large deep and narrow shaped box with robot. In: Transactions on Intelligent Welding Manufacturing, pp. 113–122. Springer, Singapore (2020) 13. Li, Y., Yu, B., Wang, B., Lee, T.H., Banu, M.: Online quality inspection of ultrasonic composite welding by combining artificial intelligence technologies with welding process signatures. Mater. Des. 194, 108912 (2020) 14. Ulas, M., Aydur, O., Gurgenc, T., Ozel, C.: Surface roughness prediction of machined aluminum alloy with wire electrical discharge machining by different machine learning algorithms. J. Mater. Res. Technol. 9(6), 12512–12524 (2020) 15. Kong, L., Peng, X., Chen, Y., Wang, P., Xu, M.: Multi-sensor measurement and data fusion technology for manufacturing process monitoring: a literature review. Int. J. Extreme Manuf. 2(2), 022001 (2020) 16. Kujawińska, A., Rogalewicz, M., Diering, M.: Application of expectation maximization method for purchase decision-making support in welding branch. Manage. Prod. Eng. Rev. 7 (2016) 17. Lertrusdachakul, I., Mathieu, A., Aubreton, O.: Vision-based control of wire extension in GMA welding. Int. J. Adv. Manuf. Technol. 78(5–8), 1201–1210 (2015)

Analysis of Market Behavior Using Popular Digital Design Technical Indicators and Neural Network Jossy George, Akhil M. Nair, and S. Yathish

Abstract Forecasting the future price movements and the market trend with combinations of technical indicators and machine learning techniques has been a broad area of study and it is important to identify those models which produce results with accuracy. Technical analysis of stock movements considers the price and volume of stocks for prediction. Technical indicators such as Relative Strength Index (RSI), Stochastic Oscillator, Bollinger bands, and Moving Averages are used to find out the buy and sell signals along with the chart patterns which determine the price movements and trend of the market. In this article, the various technical indicator signals are considered as inputs and they are trained and tested through machine learning techniques to develop a model that predicts the movements accurately. Keywords Neural network · Technical indicators · Market behavior · Deep learning

1 Introduction Over the past few years, the prediction of price movements in various markets has emerged as a field of research and study in areas such as trade, finance, and statistics as it helps in making decisions involving money. The basic methods of analysis were divided into fundamental analysis and technical analysis, which has delivered numerous expansion opportunities for the traders to combine the various methods of analysis and techniques to develop better models to make a profit. Though professional traders have developed models that combine machine learning techniques like Artificial Neural Networks (ANN), Multilayered Perceptions (MLP), Radial Basis Function (RBF), General Regression Neural Network (GRNN), and Decision Trees (DT). The use of the machine learning ensemble involves providing a more concrete predictable set of alternative models with flexible structures [1, 2].

J. George (B) · A. M. Nair · S. Yathish Department of Computer Science, CHRIST (Deemed to be University), Lavasa Pune, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_37


Machine learning techniques and technical indicators combine to form appropriate trading strategies with the maximum possible profit. These techniques are used in various fields and for various purposes, like developing strategies with different combinations of technical indicators, formulating prediction algorithms and functions, sentiment analysis, and data snooping. The combination of machine learning approaches is also helpful in the performance evaluation of credit scoring and in analyzing and minimizing the risk involved [3]. Even though various techniques have been used for the prediction of price and market movements, these approaches have certain shortcomings, as some models and techniques perform better depending on the data and the combinations used. The limitation of data also plays an important role in the interpretation and the results derived: researchers have used different datasets from different markets and time series, which influence the results, so the best alternatives also differ. Even though technical indicators and machine learning techniques are used for analyzing market situations and behavior, several other factors affect the predictions made by these technical systems [4]. The factors that affect the market situation, despite the predictions, make stock prices fluctuate away from the line of best fit in the prediction chart [5]. It is observed that there is minimal involvement or participation from the public in share market investments. In a country with a population of over 1.2 billion people, only around 20 million have trading accounts, many of which are abandoned after a few transactions while others remain active [6]. Regardless of the constant media coverage of the crisis faced by the economy, the markets remain at an all-time high. For the first time, BSE's Sensex hit the 41,000 mark on 26th November 2019, and the National Stock Exchange's Nifty index crossed the 12,100 mark on 27th November 2019, which indicates that market sentiment is optimistic for the future and that various stocks have performed quite satisfactorily in the markets. The country's GDP growth rate has dropped to multi-year lows, but the optimistic growth of the Sensex shows that there is scope for stabilizing the economy and boosting it further [6].

2 Literature Review The prediction of the stock market movements is a broad area of study which deals with different machine learning techniques and technical indicators that construct a better performing model that produces better results. A combination of forecasting models with more than one model often leads to an improvement in the performance of the models especially when the ensemble is quite different [7]. The basic idea of the research is to provide a basic idea of time series data, the need for various models like ANN, the importance of stock indices and investigates the neural network model for time series in forecasting. The profitability of the prediction models developed is also important as it describes the performance of the model and determines the


better model. Banga and Brorsen used the buy and sell signals of technical indicators as inputs to test the profitability of a composite prediction model [8]. Machine learning and statistical methods are well developed theoretically and help in recognizing patterns, producing satisfactory results in due course of time; however, statistical and machine learning methods were profitable only in a few cases and failed to produce a significant profit in all cases. Faijareon and Sornil developed trading strategies by combining technical indicators, to overcome the difficulty of making profits when a large number of investors trade against each other in the market [9]. A distinct trading strategy was developed from a combination of rules evolved together via the CHAID algorithm, and the proposed technique was compared against eight popularly used strategies on randomly selected stocks from the Stock Exchange of Thailand. The proposed technique outperformed those trading strategies, which shows its applicability in real-time stock trading. Elango and Sureshkumar have also written about enhancing forecasting models for better accuracy to improve the profitability of the investment made [10]. The performance of a stock and the error rate in the prediction of its price are assessed using indicators like Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Relative Absolute Error (RAE). Tsai and Hsiao also developed models with different techniques through various combinations of multiple feature selection methods to identify more representative variables for better prediction. The best performance is derived from joining PCA and GA, and also from the combination of PCA, GA, and CART, which filters out unrepresentative data and provides the 14 most important features [11–13]. Neural networks are a type of machine learning technique that involves training and testing on the dataset. The technique of predicting patterns that exist in the price dataset through technical analysis is applied in NN in a computerized manner to discover these patterns [14]. Market events and news also have an impact on the prediction of price movements, since they shape the sentiments of traders and market participants. The coefficient of sentiment analysis is greater than that of the numerical dynamics of news and comments, which indicates that deeper text mining, compared with numerical dynamics, is better for the prediction of stock movements [15]. Sentiment analysis also considers the fundamentals of the respective stock or market. Data collected from financial news and tweets showed that sentimental attitude has little impact on stock price movements, but sentiment emotions do have an impact on price movements [16]. Dempster and Payne et al. explored the use of technical indicators for computing prices in the foreign exchange market. Their study encountered states in the output data that were not found in the sample data, causing both the RL and Markov chain methods to hold their current positions, and there was also a problem of overfitting, i.e., the models attempting to fit too specifically [17]. Researchers like Li and Edward P. K. Tsang explored the predictive power of technical analysis through genetic programming.
The study was conducted to illustrate that by taking the technical indicators used in the


technical analysis as inputs for FGP to generate GDTs that perform better than the individual technical indicators [18]. A combination of fundamental analysis and machine learning techniques for stock price prediction was conducted by Huang, presenting, discussing, and comparing results from three learning algorithms, i.e., FNN, ANFIS, and RF, in research where features are selected for new model building and testing [5]. Windowing functions are combined with SVR to produce predictions that are compared with the actual prices on the stock exchange. The combination of the SVR model and the flatten windowing operator is not suitable for predicting prices well, but models created using the rectangular and flatten window operators produced better results [19]. A comprehensive analysis of the profitability of using technical analysis over a large amount of data was made by Jiang, Tong, and Song. The data snooping techniques helped in identifying the top-performing, or economically sufficiently profitable, technical indicators [20]. With the use of machine learning techniques, the scope of evaluation for credit scoring has widened; through this study, the performance of machine learning techniques is assessed through the accuracy of the prediction results, and the XGBoost model has the most satisfying and conclusive results [21]. Performance evaluation of stocks and stock selection based on various technical indicators was conducted by Middi Appala Raju, Middi Venkata, and Sai Rishita. The selection of scrips based on technical indicators is a good strategy, as they do result in good returns, and the technical indicators provide investors and traders with concrete parameters on which to base their trading and investment decisions [22].

3 Methodology

Artificial Neural Networks (ANN) are capable of studying and analyzing the complex relationships between input and output variables, and they can also be used for establishing non-linear relationships. In this study, the inputs are obtained from the signals of technical indicators such as the simple moving average, the exponential moving average, Moving Average Convergence and Divergence (MACD), and Bollinger Bands, which are processed and analyzed through the Neural Network model [23, 24]. The research is based on Neural Network models used for better performance and prediction of stock price movements in the market. The dataset is obtained from the National Stock Exchange (NSE), one of the benchmark exchanges of the stock market. The data is selected to test the best alternatives for the prediction of price movements, considering the economic conditions and the forecasting horizon. The results also depend on the dataset chosen, since a very large dataset can dilute the predictive ability of the models and may not provide appropriate results. Root Mean Square Error (RMSE) is a standard way to measure the error of a model in predicting data: it reflects the distance between the observed values and the predicted values. This study analyzes the performance of the various models through Root Mean Squared Error (RMSE), R-Squared, Adjusted


R-squared, and R-value [25, 26]. Root Mean Square Error (RMSE) is the standard deviation of the residuals from the line of best fit, used in forecasting, climatology, and regression analysis; it measures the spread of the residuals around the line of best fit [27]. The formula for calculating RMSE is:

\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (X_{o,i} - X_{f,i})^2}{n}}

where f denotes the predicted values and o the actual/observed values. R-square is the statistical measure that determines the fitness of the chosen model to the line of best fit, i.e., the accuracy of the model's predictions with respect to the original data. It explains the percentage of variation in the response variable that can be explained by the model:

R^2 = \mathrm{SS}_{reg} / \mathrm{SS}_{tot}

The correlation coefficient for continuous data ranges from -1 to 1 and determines the strength and direction of the relationship between the variables. The R-value indicates the correlation between the predicted variable and the actual data; a value between 0 and +1/-1 indicates a relationship whose strength is given by its magnitude and whose direction is given by the sign of the correlation coefficient.
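To make these measures concrete, the following is a minimal NumPy sketch of how RMSE, R-squared, adjusted R-squared, and the R-value can be computed for a vector of predictions; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def evaluate(actual, predicted, n_features):
    """Illustrative helper: RMSE, R-squared, adjusted R-squared, R-value."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(actual)

    # RMSE: root of the mean squared residual
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))

    # R-squared: 1 - residual variation / total variation
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    # Adjusted R-squared penalizes the number of predictors used
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)

    # R-value: Pearson correlation between actual and predicted series
    r_value = np.corrcoef(actual, predicted)[0, 1]
    return rmse, r2, adj_r2, r_value
```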

4 Results and Discussions

4.1 Data Analysis This study examines the signals of various Nifty 50 indicators, namely the CCI Oscillator, RSI Indicator, Parabolic SAR, MACD, and Bollinger Bands, collected for training and testing over the past 9 years, i.e., 2011-2019. The data is processed by the Neural Network (NN) model to predict the prices of Nifty


50, and the relative errors of these predictions are analyzed. The signals of various technical indicators of the NSE over the past 9 years have been collected and used as inputs to the machine learning techniques. The model was built, and the data was used for training and testing; the predictions were then compared with the actual datasets. During model building, the training algorithm for the hidden layer was set to Levenberg-Marquardt, a combination of the gradient descent rule and the Gauss-Newton method that takes larger iteration steps in the beginning and smaller ones in the later stages. The learning rate and the momentum were set to 0.01 and 0.9 respectively, and the number of epochs was capped at 1000 for the model to produce significant results. The predictions made are analyzed and compared with various measures that help determine the efficiency of the model. The input data is used in the developed Neural Network model for training and testing and is analyzed for the best outcomes. The listed technical indicators are used to predict the prices of Nifty 50 through the various machine learning techniques, and the results are compared to test the accuracy of the predictions. The technical indicator signals are used as inputs, trained and tested, and later used for forecasting. The data analysis process is illustrated in Fig. 1, which shows how inputs are fed to the model and results are obtained after training and testing. Technical indicators generate buy/sell signals according to market behavior; indicators such as Bollinger Bands and MACD also show future market trends with the help of moving averages, and traders use these signals to decide their position in the market. Using the signals as inputs to the machine learning techniques for price forecasting produces the optimum combinations of indicators together with the prediction accuracy. Figure 2 shows the training and testing of the dataset through the Neural Network model: the inputs are fed to the hidden layer, which processes the data with weights and vectors and passes the result to the output layer. The technical indicators serve as inputs to the model and are compared with the closing price of Nifty 50. The Neural Network model trains, tests, and validates the data; the selected inputs are processed through neurons connected to the nodes of the hidden and output layers along with weights and a bias value. The tanh function is used

Fig. 1 Process of data analysis


Fig. 2 Neural network model for Nifty 50 dataset

as the activation function in the model, defining the output of each layer after processing the data obtained through the nodes.

5 Experimental Analysis The collected dataset is analyzed through the Neural Network model, which produces results after the training and testing of the data. The learning function guides the prediction process and trains the model to obtain results with minimal loss. The Levenberg-Marquardt algorithm, a combination of gradient descent and the Gauss-Newton method, is used as the training function and also updates the parameters during the iterative process. The output obtained after training and testing is categorized into different periods. Figure 3 shows the output obtained from the model along with the curve fit, plotting the outputs of training and testing. The R-value of the output differs across the data-analysis runs and determines the relation between the variables. From Figs. 4 and 5, it can be seen that the R-value of the dataset output increases as the duration of the period in the dataset increases. A smaller dataset degrades the performance of the model, as the period is too short for the model to validate its datasets; the performance of the model increases as the number of observations grows. The output figures of the various years indicate that larger data helps upgrade the performance of the model, as the predicted values lie around the best-fit line, indicating that the model is suitable for prediction. The curve fitting of the predicted values is done with the curve-fitting tool in MATLAB, which helps plot the graphs. The results of the datasets are derived categorically along with their training and testing, and their performance is compared through the evaluation parameters. Various performance parameters are used to evaluate the results of the model for the different datasets, as shown in Figs. 6 and 7.
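The Levenberg-Marquardt trainer referenced here is a MATLAB facility with no direct scikit-learn equivalent; as a rough single-machine analogue (an assumption, not the authors' exact setup), a comparable network with tanh activation, learning rate 0.01, momentum 0.9, and a 1000-epoch cap can be sketched as follows. The hidden-layer width and variable names are placeholders.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_train: technical-indicator signals, y_train: Nifty 50 closing prices
# (placeholder names; the hidden-layer width of 10 is also an assumption).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(
        hidden_layer_sizes=(10,),
        activation="tanh",        # tanh activation, as in the paper
        solver="sgd",             # SGD with momentum stands in for LM
        learning_rate_init=0.01,  # learning rate 0.01
        momentum=0.9,             # momentum 0.9
        max_iter=1000,            # epochs capped at 1000
        random_state=42,
    ),
)
# model.fit(X_train, y_train)
# predictions = model.predict(X_test)
```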


Fig. 3 Output for the year 2011

Table 1 shows the results of the neural network model in predicting the closing prices of the Nifty 50 index. The R-square value of the datasets increases as the period increases, providing more data for the model to learn the patterns of the market movement and produce better results. The years 2011-2019 had the highest R-square value of 0.984, indicating the fitness of the model for that dataset. The dataset of the years 2011-2013 had the best error rate of 177.8, which indicates that the model is best suited to predictions over 2-3 years, since the error rate is lower. External factors such as natural disasters, government policies, and economic slowdowns may influence the prediction, causing noise in the data. The following graph shows the values predicted by the model for the years 2011-2019, comparing the actual prices of Nifty 50 with the predicted (target) output of the dataset across the years. The graph becomes steeper over time, showing that the value of the index has been increasing. The datasets collected from the benchmark index are analyzed with the Neural Network model, which provides better results with a proper combination of learning function and activation function; the latter decides which outputs are transferred to the next layer. The results of the model signify that a larger dataset helps the model provide better solutions, but the prediction error also increases as the volume of data rises. The RMSE for the years 2011-2019 is 278.3, the highest among the datasets, indicating that


Fig. 4 Output for the year 2011–2013

the prediction error grows with the volume of data. The model uses Levenberg-Marquardt, which combines gradient descent and the Gauss-Newton method [13, 28-30] (Fig. 8). Table 2 shows the R-values of the datasets analyzed through the model. The R-value of the datasets increases, indicating the strength of the relationship between the variables as the volume of data rises; the relation between the actual closing price of the index and the prediction has strengthened throughout the years. The feed-forward neural network model achieved better performance in predicting the results for datasets of moderate volume, providing better solutions with a strong relationship between the variables. The largest dataset also produced solutions between the variables, but with an increase in prediction error, as a growing volume of data increases its noise. Overall, the Neural Network model is suitable for predicting stock market movements.


Fig. 5 Output for the year 2011–2015

6 Conclusion In this research, the performance of the Nifty 50 index over the past 9 years was analyzed through a Neural Network model and measured through parameters such as R-square, Adjusted R-square, RMSE, and R-value. Technical indicators were combined with machine learning techniques such as Neural Networks, Gradient Boosted Forest (GBF), Decision Tree (DT), and Support Vector Machine (SVM) to analyze the input data through the various models. The Neural Network model was configured with an appropriate learning function, a learning rate of 0.01, and a momentum of 0.9, and the number of epochs was capped at 1000 to produce significant results. The model achieved a moderate error with maximum closeness in predicting the closing price, and it can be used for predicting datasets with little variation between them.


Fig. 6 Output for the year 2011–2017

Fig. 7 Output for the year 2011–2019


Table 1 Performance analysis comparison of various datasets of Nifty 50

Year        No. of time periods   R-square   Adjusted R-square   RMSE    DFE    SSE
2011        248                   0.589      0.5874              219.2   245    1.18E+07
2011-2013   748                   0.8128     0.8125              177.8   745    2.35E+07
2011-2015   1241                  0.9689     0.9689              222.6   1236   6.14E+07
2011-2017   1736                  0.9828     0.9828              214.5   1731   7.96E+07
2011-2019   2206                  0.9837     0.9837              278.3   2180   1.69E+08

Fig. 8 Prediction chart for the year 2011–2019

Table 2 Regression analysis R-value for the datasets

Year      2011    2011-2013   2011-2015   2011-2017   2011-2019
R-value   0.763   0.901       0.984       0.991       0.991

References

1. Weng, B.: Application of Machine Learning Techniques for Stock Market Prediction. Doctor of Philosophy, Auburn University (2017)
2. Naik, N., Mohan, B.R.: Stock price movements classification using machine and deep learning techniques: the case study of Indian stock market. In: Macintyre, J., Iliadis, L., Maglogiannis, I., Jayne, C. (eds.) Engineering Applications of Neural Networks. EANN 2019. Communications in Computer and Information Science, vol. 1000. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20257-6_38
3. Sachin, K., Shailesh, J., Thakur, R.S.: Performance forecasting of share market using machine learning techniques: a review. Int. J. Electr. Comput. Eng. 6(6), 3196-3204 (2020). Accessed 29 Sept 2020
4. Khan, W., Malik, U., Ali Ghazanfar, M., Awais Azam, M., Alyoubi, K.H., Alfakeeh, A.S.: Predicting stock market trends using machine learning algorithms via public sentiment and political situation analysis. Soft Computing 24 (2019). https://doi.org/10.1007/s00500-019-04347-y. Accessed 29 Sept 2020


5. Huang, Y.: Machine learning for stock prediction based on fundamental analysis. Electronic Thesis and Dissertation Repository (2019)
6. Economic Times (2019). [Online]. Available: https://economictimes.indiatimes.com/
7. Ashok Kumar, S.M.D.: Performance analysis of Indian stock market index using neural network time series model. In: International Conference on Pattern Recognition, Informatics and Mobile Engineering (PRIME) (2013)
8. Banga, J.S., Brorsen, W.B.: Profitability of alternative methods of combining signals from technical trading systems. Intell. Syst. Account. Finan. Manage. 1-54 (2019)
9. Chawwalit Faijareon, O.S.: Evolving and combining technical indicators to generate trading strategies. J. Phys. (8) (2019)
10. Sureshkumar, K.K., Elango, N.M.: An efficient approach to forecast Indian stock market price and their performance analysis. Int. J. Comput. Appl. 34(5) (2019)
11. Hsiao, Y.-C., Tsai, C.F.: Combining multiple feature selection methods for stock prediction: union, intersection, and multi-intersection approaches. Decision Support Systems, pp. 258-269 (2019)
12. Rekha Das, S., Mishra, D., Rout, M.: Stock market prediction using Firefly algorithm with evolutionary framework optimized feature reduction for OSELM method. Expert Syst. Appl. X 4 (2019). https://doi.org/10.1016/j.eswax.2019.100016. Accessed 29 Sept 2020
13. Kumar, D., Murugan, S.: Performance analysis of NARX neural network backpropagation algorithm by various training functions for time series data. Int. J. Data Sci. 3(4) (2018). Accessed 29 Sept 2020
14. Tsanga, K.P., Philip, M.: Design and implementation of NN5 for Hong Kong stock price forecasting. Eng. Appl. Artif. Intell. 453-461 (2007)
15. Shangkun Deng, T.M.: Combining technical analysis with sentiment analysis for stock price prediction. In: IEEE International Conference on Dependable, Autonomic and Secure Computing, pp. 800-807 (2011)
16. Andrius Mudinas, D.Z.M.L.: Market Trend Prediction Using Sentiment Analysis: Lessons Learned and Paths Forward. Cornell University Press (2018)
17. Dempster, M.A.H.: Computational learning techniques for intraday FX trading using popular technical indicators. IEEE Trans. Neural Netw. 12(4) (2001)
18. Jin Li, E.P.T.: Improving technical analysis predictions: an application of genetic programming. In: Proceedings of the Twelfth International FLAIRS Conference, pp. 1-5 (1999)
19. Phayung Meesad, R.I.: Predicting stock market price using support vector regression. In: International Conference on Informatics, Electronics and Vision (2013)
20. Fuwei Jiang, G.T.: Technical analysis profitability without data snooping bias: evidence from Chinese stock market. Int. Rev. Finan. 19(1), 191-206 (2017)
21. Anqi Cao, H.H.: Performance evaluation of machine learning approaches for credit scoring. Int. J. Econ. Finan. Manage. Sci. 6(6), 255-260 (2018)
22. Middi Appala Raju, M.V.: Performance evaluation and stock selection based on technical indicators: RSI, MFI, MACD and stochastic indicators. Int. Res. J. Finan. Econ. (169), 113-130 (2018)
23. Vargas, M.R., dos Anjos, C.E.M., Bichara, G.L.G., Evsukoff, A.G.: Deep learning for stock market prediction using technical indicators and financial news articles. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. Rio de Janeiro (2018). https://doi.org/10.1109/IJCNN.2018.8489208
24. Lei, Y., Peng, Q., Shen, Y.: Deep learning for algorithmic trading: enhancing MACD strategy. In: Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence (ICCAI '20), pp. 51-57. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3404555.3404604
25. Suma, V., Shavige Malleshwara Hills: Data mining based prediction of demand in Indian market for refurbished electronics. J. Soft Comput. Paradigm (JSCP) 2(03), 153-159 (2020)
26. Chakrabarty, N.: A regression approach to distribution and trend analysis of quarterly foreign tourist arrivals in India. J. Soft Comput. Paradigm (JSCP) 2(01), 57-82 (2020)


27. Waqar, M., Dawood, H., Guo, P., Shahnawaz, M.B., Ghazanfar, M.A.: Prediction of stock market by principal component analysis. In: 2017 13th International Conference on Computational Intelligence and Security (CIS), pp. 599-602. Hong Kong (2017). https://doi.org/10.1109/CIS.2017.00139
28. Nabipour, M., Nayyeri, P., Jabani, H., Mosavi, A., Salwana, E.: Deep learning for stock market prediction. Entropy 22(8) (2020). Accessed 29 Sept 2020
29. Rani, I., Chandar, S.: A study on forecasting mutual fund net asset value using neural network approach. In: 3rd National Conference on Innovative Research Trends in Computer Science and Technology (NCIRCST 2018), vol. 4, no. 3 (2018). Accessed 29 Sept 2020
30. Chan Phooi M'ng, J., Mehralizadeh, M.: Forecasting East Asian indices futures via a novel hybrid of wavelet-PCA denoising and artificial neural network models. PLoS ONE (2016). https://doi.org/10.1371/journal.pone.0156338. Accessed 29 Sept 2020

Distributed Multimodal Aspective on Topic Model Using Sentiment Analysis for Recognition of Public Health Surveillance Yerragudipadu Subbarayudu and Alladi Sureshbabu

Abstract Social media is currently the leading platform for accumulating large volumes of health-related tweets from all over the world, and it is a prominent data source for searching health terms (topics) and predicting solutions for health care. Health care has become one of the largest sectors in the world in terms of income and employment. Millions of users tweet daily, sharing their views and opinions on various healthcare topics. Topic models originate in natural language processing (NLP) and are used to acquire knowledge in healthcare areas, which strongly motivates the analysis of topic models (TM). Topic models are employed here to extract health topics by modeling the latent structure of selected tweet documents in the healthcare system. Choosing the number of topics in TM is an essential issue: an unreliable number of topics leads to poor results in health-related clustering (HRC) over various structured and unstructured data. In this regard, suitable visualizations are imperative for condensing the information and identifying cluster directions. This work therefore proposes distributed multimodal topic models, namely Hadoop distributed non-negative matrix factorization (HdinNMF), Hadoop distributed latent Dirichlet allocation (HdiLDA), and Hadoop distributed probabilistic latent semantic indexing (HdiPLSA), as reasonable approaches for balancing and clustering health topics from data sources of various perspectives in health statistics clustering. The HdinNMF distributed model, evaluated with cosine metrics, yields well-separated visual clusters and good performance measures compared with the other methods across a series of health conditions. This work briefly describes the public health structure (hashtags) of the country and tracks the evolution of the main health-related tweets to give preliminary advice to the public.

Y. Subbarayudu (B) Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur, Andhra Pradesh, India A. Sureshbabu Computer Science and Engineering, JNTUA College of Engineering, Anantapur, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_38


Keywords HDFS · Sentiment analysis · Natural language tool kit (NLTK) · TF-IDF · Twitter API · Resource manager (RM) · Name node (NN) · Data node (DN)

1 Introduction In this section, social media networks are examined for extracting tweets in which users share their opinions on topics of interest. Opinions are embedded in human activities because they have a key impact on decision-making. Digitization has seen a dramatic increase in the use of popular social networking sites such as Twitter, Facebook, and Yahoo!, and healthcare sites have likewise produced large datasets. Advanced techniques make it easier for users around the world to express themselves and publish their posts openly, and potential users on the Web rely on these reviews and opinions to make decisions. The Twitter API exposes millions of consumer posts on various topics such as health; users often search for health topics on social media to make health decisions, and conventional data summarizing methods are gradually failing to analyze such datasets efficiently. Topic models reveal the topics of documents, represented as distributions over words. Topic modeling is a conception in which documents are treated as combinations of terms converted into word-allocation probabilities. Biomedical documents have an unstructured and high-dimensional nature, so there are different methods for working with biomedical text documents [1], for instance FLSA, LDA, and LSA. Latent Dirichlet allocation (LDA) uses probabilities to estimate the posterior distribution of words and topics from an external text corpus [2]. LDA topics are inferred using Gibbs sampling, an iterative method; this technique requires selecting a few parameters such as the number of topics, the number of iterations, and the Dirichlet priors. Therefore, the LDA topic model requires in-depth experimental analysis with various configurations to achieve an optimal fit. The latent semantic analysis (LSA) technique extracts topics and exposes the semantic meaning of words through statistical calculations over a huge collection of papers [3]. LSA detects latent classes while reducing the dimensionality of the vector space model [4]. False information, or false statistics, is a new term and is now considered the greatest threat to democracy [5].

2 Related Works Within 140 characters, users can tweet whatever they like or wish. Topics range from political opinions, comments on news events, and daily life to health care [6]. According to USNEWS [7], more and more medical professionals such as doctors and nurses adopt social network tools like Twitter to monitor and interact with patients through health-related hashtags. Communication via Twitter [8] provides an alternative


engagement beyond doctors' offices or hospitals for patient health topics [9]. Social networks have played a significant role in changing the nature of healthcare interaction between healthcare organizations and consumers. Topic modeling is a probabilistic technique used to detect latent semantic patterns, called topics, in a set of unstructured documents. Topic modeling describes the semantic structure of documents in terms of the topics it uncovers. This approach is based on the premise that each topic induces a probability distribution over the words in a document [3, 10]. Specific words are expected to appear more frequently in a document when the document is naturally connected to that topic [3, 11]. Topics revealed through topic modeling are essentially semantic groups of words that are often used together in a document [3]. By identifying the latent semantic structures, the probability of every topic is computed, together with the per-document topic distribution and the per-word topic assignments in each document [3, 10]. LDA, a probabilistic generative model, is one of the topic modeling algorithms most often used in text mining [10]. The term "latent" in LDA refers to discovering the semantic content of documents by analyzing their latent semantic structures [12]. The generative process in LDA assigns the words in a document to random variables and groups them semantically by an iterative probabilistic procedure based on the Dirichlet distribution [3]. LDA does not require any labeling or training set, as it is an unsupervised method [10]. It can therefore be applied effectively to large collections of documents in a text corpus to find semantic patterns [3, 10, 13]. To date, LDA has been widely used in text mining studies in various contexts such as natural language processing, information retrieval, sentiment analysis, literature research, and analysis of social trends [3, 14]. Similarly, the model has been used effectively in studies analyzing online job postings in various industries [12, 15]. Topic models, originally developed for the analysis of textual data, are now applied to various other types of data, including genetic data. Besides, topic modeling methods are used in tasks such as computational linguistics, review of source code documents [16], summarization of opinions in product reviews [17], and description of software evolution. Topic and text search analysis of documents has also been studied through Twitter sentiment analysis [18-21]. The possibilities of topic modeling have caught the attention of researchers in the biomedical discipline. Biomedical text collections undergo large-scale publication, and topic modeling methods are effective in managing such comprehensive document collections; the topic model can therefore provide promising results in mining biological and biomedical texts [22]. In the biomedical domain, the RedLDA topic modeling technique is used to recognize redundancy in patient record notes [23]. Latent semantic analysis (LSA) uses automatic classification to summarize clinical cases [24]. In [25], LSA was applied to clinical reports of psychiatric narratives and produced a semantic space of psychiatric terms.
LSA has also been used to discover semantic concepts and the ontology domain that builds word models for verbal expression [26]. In clinical domains, LSA likewise gives improved results in topic tagging and segmentation [27]. Biomedical text documents continue to grow


nowadays, and the analysis of these documents is very important for discovering important information. Biomedical text archives such as PubMed provide helpful services to the scientific community. Topic modeling is a popular method that reveals hidden topics and structures in unorganized biomedical text documents; this document structure is used to search, index, and annotate documents [28]. Big data analysis is a useful technique for analyzing the deeper values hidden in the large datasets generated today, and it has become prominent in applications such as industrial development, smart homes for smart cities, and security management [29]. Topic models target semantic groups to collect topics from documents. Topics carry multiple semantics, and several algorithms [30] such as Ncut-weighted NMF [8] have been proposed to resolve this. Latent Dirichlet allocation [31] is a sophisticated application of Bayesian techniques; the word "latent" means capturing the semantics of the text by computing the hidden themes of the words in the body of each document, distinguishing the topics in the text through its terms. The model is completely unsupervised and does not require any prior knowledge. Non-negative matrix factorization (nNMF) is a family of linear algebra algorithms for identifying the latent structure in data interpreted as a non-negative matrix; here the inputs are the term-document matrix (TDM) and the distributed TF-IDF. The inputs for the matrix factorization are the TDM and the number of topics, and it yields two decomposition matrices, one mapping documents to topics and the other mapping topics to the terms of the tweet corpus. After decomposition, two non-negative matrices of the original n words on K topics are obtained, so the NMF model distributes each word into one of these topics. Probabilistic latent semantic analysis is a method for modeling information within a probabilistic framework; it discovers the hidden themes of documents. These methods are related to improving the poor performance in health-related clustering (HRC), and the above issues naturally affect the quality of TM [43-45]. In this context, this work claims and contributes multimodal topic models distributed within the Hadoop framework to achieve greater precision than existing approaches.

3 Proposed Methodology Topic models range from natural language processing (NLP) to the acquisition of immeasurable knowledge, which strongly motivates analyzing topic models in the field of health care. Topic models are intended to extract health-related terms by modeling selective latent health-system tweets. The analysis of topics in TM is a major problem, and an unreliable number of TM topics leads to poor health clustering (HC) outcomes. In this sense, suitable visualizations are a vital part of condensing the information to determine the direction of each cluster. This work contributes the proposed Hadoop topic models, Hadoop distributed non-negative matrix factorization (HdinNMF), Hadoop distributed latent Dirichlet allocation (HdiLDA), and Hadoop distributed probabilistic latent semantic indexing (HdiPLSA), for balancing the direction of health terms or topics from the Twitter API. These proposed models are derived as follows.


In this section, the work aims to create a model that classifies a structured and unstructured corpus of sentiment analysis data, and the full outline of the proposed method is illustrated. As mentioned above, the input data is divided into two categories, structured and unstructured corpora, as in Fig. 1. The structured data is classified into three modules covering text, voice, and visual data. Here, OCR with Tesseract is used to extract the text corpus from video data, and separate pipelines for text, voice content, and video content process the textual content of the structured data individually. Finally, the text corpora from the structured and unstructured data are assembled. The proposed system analyzes all the different datasets and achieves an enhanced result. For this reason, the proposed method handles three different types of corpus: visual data, audio data, and text data. When the system receives a video as input, the primary job is to separate the audio and visual data for sentiment analysis using the respective URL. The text corpus can be obtained from the visual source and the URL in the form of the video title, description, hashtags, keywords, comments, and opinions posted on social media. The system evaluates these data separately and merges all the results to obtain the final result. The overall research framework architecture is shown in Fig. 1.

3.1 Data Collection Twitter is the most powerful data source and fast-growing microblogging Web site in the world empowerment for collecting real-time streaming data. Many resources of subsets of social health tweets are enabled from Twitter API and need to generate Twitter API secret key, consumer key, consumer secret key, and access token key. The system is designed using object-oriented programming langue (Python and R) for accessing corpus data, and instead of these, the needed Python and R library files are used to extract tweets from Twitter API. Corpus contains a collection of structured and unstructured health-related tweets in the form of topics and hashtags, and it will be stored in Hadoop cluster environment. The only contribution compulsory is the number of child nodes in the Hadoop environment that the user wants for the Hadoop cluster, and the tool will recognize the other active machines on the set of connections. First come and first serve (FCFS) is stated above, to explore live machines on the network using the IP address of the machine where the tool is running. The first machine found alive on the network is collected first, and handing out begins with its consequent machines; correspondingly, the IP address of all live machines stored in a file on the network is found. And Hadoop distributed file system (HDFS) allocates the tweets into different block sizes along with replications.


Fig. 1 Proposed architecture and design


3.2 Preprocessing Preprocessing uses natural language processing (NLP) and the natural language toolkit (NLTK) to remove inappropriate data from the tweet corpus. Some details of the corpus data used in the exploratory study, together with the standard health key phrases of the TREC 2015 [32] and TREC 2014 [33] assessments, are used to extract the correct health-related tweets. In a typical procedure, the text is first extracted from the body of the health-related tweet, then the bag-of-words counts are extracted from that text, and finally the stop words are removed from the bag of words (BOW). Distributed topic models are implemented to identify suitable terms from the structured and unstructured data of the health-based corpus. Health-related topics are derived from structured and unstructured social media in health care, and the text corpus data is stored in the Hadoop cluster. A video case encloses audio, text metadata, and visual content in the form of frames, as it is a series of images with adequate motion. The preprocessing technique consists of three different modules, which are used for analyzing the text from audio and visual contents [5] and extracting health-related topics.
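A minimal sketch of this preprocessing chain with NLTK stop words is shown below; the regular expressions and the example tweet are illustrative assumptions rather than the authors' exact rules.

```python
import re
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)  # fetch NLTK stop-word lists once

def preprocess(tweet):
    """Strip URLs, mentions, and hashes; lowercase; drop stop words."""
    text = re.sub(r"http\S+|@\w+|#", " ", tweet.lower())
    tokens = re.findall(r"[a-z]+", text)  # keep alphabetic tokens only
    stop = set(stopwords.words("english"))
    return [t for t in tokens if t not in stop]

print(preprocess("Flu cases rising this week! #healthcare http://t.co/x"))
# -> ['flu', 'cases', 'rising', 'week', 'healthcare']
```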

3.2.1 Extended Term Frequency-Inverse Document Frequency (ETF-IDF)

TF-IDF is a statistic that helps identify how important a word is to a corpus during text analytics. It is the product of two measures, term frequency and inverse document frequency:

TFIDF(t, d, D) = TF(t, d) × IDF(t, D)   (1)

where t denotes a term, d a document in the corpus, and D the group of documents in the corpus.

3.2.2 Term Frequency (TF)

Term frequency counts the number of occurrences of each word or topic in each document; words are converted to lowercase, and common stop words such as 'a' and 'the' are removed.

TF = (No. of occurrences of a word in a document) / (No. of words in that document)   (2)

3.2.3 Inverse Document Frequency (IDF)

The purpose of inverse document frequency is to reduce the importance of a word that appears in many documents.

IDF = log(No. of documents / No. of documents containing the word)   (3)

3.2.4 Term Frequency-Inverse Cluster Frequency (TF-ICF)

Term frequency-inverse cluster frequency is a weighting scheme based on the class (cluster) information of the documents. This measure is computed to overcome vocabulary problems and obtain the important topics of each class.

TF-ICF = TF_{t,i} × log(N / CF_t)   (4)

where TF_{t,i} = frequency of term t in class i, N = total number of classes, and CF_t = number of classes that contain term t.
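In practice, Eqs. (1)-(3) do not need to be coded by hand; a hedged sketch using scikit-learn's TfidfVectorizer (which applies a smoothed variant of the IDF above) on an invented toy corpus could be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "flu vaccine drive announced in the city",
    "new diabetes diet guidelines released",
    "flu season peaks as hospitals fill up",
]

# Term counts scaled by (smoothed) inverse document frequency.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)  # term-document weight matrix

print(tfidf.shape)  # (3 documents, vocabulary size)
print(vectorizer.get_feature_names_out()[:5])  # needs scikit-learn >= 1.0
```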

3.3 Similarity Metrics The comparison of corpus documents based on their features is carried out with distance or similarity measures. Two similarity measures are used here: cosine similarity and Euclidean distance. Cosine similarity is a measure that is independent of document size. Mathematically, it measures the cosine of the angle between two vectors in a multidimensional space. It is often more useful than Euclidean distance because of differences in document length: the smaller the angle between two documents, the higher their cosine similarity.

\cos(\theta) = \frac{p \cdot q}{\lVert p \rVert \, \lVert q \rVert}   (5)

\cos(\theta) = \frac{\sum_{i=1}^{n} p_i q_i}{\sqrt{\sum_{i=1}^{n} p_i^2}\,\sqrt{\sum_{i=1}^{n} q_i^2}}   (6)

Cosine similarity accommodates more words from the collection of documents and can be visualized in a higher-dimensional space. Euclidean distance is the most common measure for calculating the correspondence between documents, using the formula:

\mathrm{EuclDistance}(p, q) = \sqrt{(p_1 - q_1)^2 + \cdots + (p_n - q_n)^2}   (7)
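Both measures in Eqs. (5)-(7) reduce to a few lines of NumPy; the document vectors below are invented for illustration:

```python
import numpy as np

def cosine_similarity(p, q):
    """Eq. (6): dot product normalized by the vector magnitudes."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def euclidean_distance(p, q):
    """Eq. (7): straight-line distance between two document vectors."""
    return float(np.linalg.norm(p - q))

# Two invented TF-IDF document vectors.
p = np.array([0.2, 0.0, 0.7, 0.1])
q = np.array([0.1, 0.3, 0.6, 0.0])
print(cosine_similarity(p, q), euclidean_distance(p, q))
```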

3.4 Hadoop Distributed Latent Dirichlet Allocation (HdiLDA) Latent Dirichlet allocation distributed over big data is one of the most valuable topic models. LDA is a sophisticated application of Bayesian techniques, and here "latent" means capturing the meaning of the text by finding the hidden terms or themes of the words in each document of the corpus [34]. Finally, the text is framed using the health terms or topics related to each document in the corpus. The model is completely unsupervised and does not require any prior knowledge. The generative model for latent Dirichlet allocation (LDA) uses the following quantities: M denotes the number of documents to generate in a corpus, K denotes the number of topics or terms in the corpus, α denotes the hyper-parameter on the mixing proportions, β denotes the hyper-parameter on the mixture components, and ϑ_m and ψ_k denote the parameters P(z | d = m) and P(t | z = k). Gibbs sampling is used here to infer the hidden variables over a collection stored in the Hadoop distributed file system; it starts from a random topic assignment for each word in the corpus of topic documents. The algorithm iteratively improves the estimates of P(t|d) and P(w|t) and reassigns each word to a new topic, where topic t is chosen with probability P(t|d) × P(w|t).
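The paper infers LDA with Gibbs sampling over HDFS; as a single-node illustration only (scikit-learn's LatentDirichletAllocation uses online variational Bayes rather than Gibbs sampling, and the toy tweets are invented), the workflow of estimating P(z|d) and P(w|z) looks like:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "flu vaccine clinic opens downtown",
    "diabetes diet tips and exercise plans",
    "flu symptoms spreading across schools",
    "exercise lowers blood sugar in diabetes",
]

counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(tweets)  # bag-of-words term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-tweet topic mixture, ~P(z|d)

terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):  # ~P(w|z), unnormalized
    top = [terms[i] for i in weights.argsort()[-3:]]
    print(f"topic {k}: {top}")
```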


3.5 HdinNMF Non-negative matrix factorization is a family of linear algebra algorithms for obtaining the latent structure in data; here it is applied to topic models in a Hadoop environment over the distributed tweet corpus [35, 36]. It is equivalent to standard NMF, where the inputs are the term-document matrix (TDM) and the distributed TF-IDF. The inputs of the matrix factorization are the TDM and the number of topics, and it yields two decomposition matrices, one mapping every document to topics and the other mapping every topic to the terms of the tweet corpus. After decomposition, two non-negative matrices of the original n words by k topics are obtained, so that the NMF model allocates each word to one of those topics. The first thing to note is that non-negative matrix factorization can be shown to be equivalent to optimizing the same objective function as probabilistic latent semantic analysis. The following NMF algorithm [37, 38] is designed for topic modeling on a term-document matrix (TDM), using scikit-learn. To understand it, a matrix X is divided into two matrices W and H such that X ≈ WH. There is no guarantee of recovering the original matrix exactly, so the approximation is made as close as possible. Assume X is made up of m rows x1, x2, ..., xm, W is made up of m rows w1, w2, ..., wm, and H is made up of k rows h1, h2, ..., hk. Each row of X can be considered a data point; for example, in the case of image decomposition, each row in X is an image and every column represents some feature. The ith row of X, xi, can then be written as a weighted combination of the rows of H, and the significance of this equation becomes clearer when visualized. The multiplicative update is a frequently used optimization technique, in which W and H are each updated iteratively according to the following rules:

H \leftarrow H \odot \frac{W^{T} X}{W^{T} W H}   (8)

W \leftarrow W \odot \frac{X H^{T}}{W H H^{T}}   (9)

where the products and divisions are element-wise. A matrix at a stationary point remains unchanged, and the objective function is non-increasing under these update rules. A less formal but more intuitive explanation of this method is to interpret it as rescaled gradient descent, which can be written as follows:

H \leftarrow H - \eta_H \odot (W^{T} W H - W^{T} X)   (10)

W \leftarrow W - \eta_W \odot (W H H^{T} - X H^{T})   (11)
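Since the text itself points to scikit-learn for the TDM factorization, a small sketch of that single-node step is given below; solver="mu" selects the multiplicative updates of Eqs. (8) and (9), while the toy tweets and the choice of two topics are assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

tweets = [
    "flu vaccine clinic opens downtown",
    "diabetes diet tips and exercise plans",
    "flu symptoms spreading across schools",
    "exercise lowers blood sugar in diabetes",
]

X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

# X ~ W @ H: W maps documents to k topics, H maps topics to terms.
nmf = NMF(n_components=2, solver="mu", init="nndsvda", random_state=0)
W = nmf.fit_transform(X)  # document-topic weights
H = nmf.components_       # topic-term weights
print(W.shape, H.shape)   # (4, 2) and (2, vocabulary size)
```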

3.6 HdiPLSA Probabilistic latent semantic analysis is a method for modeling information under a probabilistic framework. Here, the data is distributed across various nodes, namely the name node and slave nodes, as shown in Fig. 1. HdiPLSA can be understood in two different ways: first as a latent variable model and second as matrix factorization. HdiPLSA is a topic model and an improvement over LSA that aims to find latent topics in the corpus by replacing the SVD in LSA with a probabilistic model:

P(D, W) = P(D) \sum_{Z} P(Z \mid D) \, P(W \mid Z)   (12)

Here, P(D), P(Z|D), and P(W|Z) are the parameters of the representation. P(D) is evaluated directly from the corpus, while P(Z|D) and P(W|Z) are modeled as multinomial distributions and trained by the expectation-maximization (EM) algorithm. Interestingly, P(D, W) can be equivalently parameterized through a different set of parameters:

P(D, W) = \sum_{Z} P(Z) \, P(D \mid Z) \, P(W \mid Z)   (13)

The reason this new parameterization is interesting is that it exposes an exact correspondence between the PLSA model and the LSA model:


P(D, W) = U \Sigma V^{T}   (14)

Here, the topic probability P(Z) corresponds to the diagonal matrix Σ of topic probabilities, the probability of a document given the topic P(D|Z) corresponds to the document-topic matrix U, and the probability of a word given the topic P(W|Z) corresponds to the term-topic matrix V. In general, when people look for a topic model beyond the baseline performance that LSA provides, they turn to HdiLDA; HdinNMF and HdiLDA are the most common topic models extending PLSA to address these issues.

4 Experimental Analyses and Discussion The topic models have been extended to distributed topic models in a Hadoop environment to extract topics from the myriad of health-related tweets. These promising methods determine the number of topics in a known topic group. The primary purpose of these distributed topic models is to resolve topics for clarity of the corpus list and to store large amounts of data in the Hadoop distributed file system. Traditional topic models are sufficient to determine the number of topics in the body of tweets (terms) but avoid the wrong topics. All models are implemented in Python in a Hadoop environment, and the hardware configuration is a high-end processor with 16 GB of RAM for quick access. Following the described procedure, the data is collected from the tweet corpus, extracted, and preprocessed using natural language processing and the natural language toolkit. The existing health-related results from the social sentiment analysis, compared with respect to the size, type, and number of clusters related to the characteristics of the data corpus used, are shown below.

4.1 Evaluation Performance In this study, the topic models are run with an initial number of topics or terms, since the number of topics is conceptual and hidden in the extracted corpus text. The Hadoop distributed topic models are experimented with under Euclidean and cosine metrics. The modeling techniques are assessed with cluster evaluation accuracy (CEA) [39], normalized evaluation mutual information (NEMI) [40], precision (P), recall (R), and the F-measure (F) [41, 42]. The measurement uses precision (P), recall (R), and the F-1 measure, which are considered the most reliable for measuring the efficiency of the proposed technique. The estimation of precision (P) and recall (R)


is based on the true-value information given by True_Positive (TP), True_Negative (TN), False_Positive (FP), and False_Negative (FN):

Precision (P) = TP / (TP + FP), Recall (R) = TP / (TP + FN), and

F-1 Measure = 2 × (Precision(P) × Recall(R)) / (Precision(P) + Recall(R))
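These three measures are computed directly by scikit-learn; in the sketch below, the binary labels are invented solely to show the calls:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Invented binary judgments: 1 = tweet assigned to the correct topic.
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]

p = precision_score(y_true, y_pred)  # TP / (TP + FP)
r = recall_score(y_true, y_pred)     # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)        # 2PR / (P + R)
print(f"P={p:.2f} R={r:.2f} F1={f1:.2f}")
```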

The precision (P) and recall (R) values of the proposed models under Euclidean and cosine similarity show that HdinNMF obtains the best precision (P) and recall (R) under the cosine metric on the respective tweet corpus. Performance values are derived from the precision and recall of the data corpus from two topics up to fifteen topics. The cluster evaluation accuracy and the normalized evaluation mutual information for the proposed models are shown in Table 1. The models work best using the cosine metric rather than the Euclidean metric on the tweet corpus. Thus, Hadoop distributed non-negative matrix factorization (HdinNMF) measured with cosine similarity shows better precision than the other models. This precision was obtained on the corpus of tweets from two topics to fifteen topics. The F-measure (F) is defined as the harmonic mean of recall (R) and precision (P). The estimated F-measure values for the paradigms are shown in Fig. 2. It is observed that the HdinNMF model produces better precision than the other models. In this sense, the highest degree of topic recognition across all kinds of tweets occurs under the cosine distance, showing that the cosine distance similarity measures are successful.

Table 1 Cluster evaluation accuracy for topic models

                  Cluster evaluation accuracy (CEA)
                  Euclidean                       Cosine
Tweets datasets   HdinNMF  HdiLDA  HdiPLSA        HdinNMF  HdiLDA  HdiPLSA
2 topics          1.23     1.08    0.73           1.23     1.03    0.73
3 topics          1.23     0.73    0.61           1.23     0.86    0.67
4 topics          1.12     0.87    0.59           1.16     0.86    0.62
5 topics          0.85     0.73    0.54           1.23     0.70    0.57
6 topics          0.75     0.64    0.57           1.00     0.68    0.57
7 topics          0.68     0.64    0.53           1.09     0.55    0.50
8 topics          0.87     0.87    0.87           1.04     0.55    0.52
9 topics          0.73     0.64    0.51           1.00     0.55    0.52
10 topics         0.77     0.58    0.45           0.82     0.51    0.52
11 topics         0.68     0.50    0.43           0.94     0.50    0.47
12 topics         0.69     0.58    0.44           0.91     0.56    0.46
13 topics         0.65     0.45    0.48           0.74     0.52    0.43
14 topics         0.60     0.49    0.45           0.88     0.48    0.44
15 topics         0.52     0.44    0.43           0.56     0.41    0.38


Fig. 2 Comparison analysis of distributed NMF on topic modeling


Table 2 Normalized evaluation mutual information for topic models

                  Normalized evaluation mutual information (NEMI)
                  Euclidean                       Cosine
Tweets datasets   HdinNMF  HdiLDA  HdiPLSA        HdinNMF  HdiLDA  HdiPLSA
2 topics          1.23     0.62    0.24           1.23     0.47    0.26
3 topics          1.23     0.38    0.24           1.23     0.51    0.25
4 topics          0.95     0.61    0.32           1.06     0.53    0.32
5 topics          0.71     0.50    0.30           1.23     0.49    0.31
6 topics          0.74     0.46    0.33           0.86     0.46    0.35
7 topics          0.61     0.51    0.36           1.02     0.36    0.33
8 topics          0.76     0.76    0.76           0.99     0.42    0.35
9 topics          0.66     0.49    0.37           0.93     0.42    0.35
10 topics         0.74     0.42    0.34           0.82     0.44    0.39
11 topics         0.65     0.42    0.34           0.93     0.40    0.36
12 topics         0.64     0.47    0.34           0.88     0.46    0.37
13 topics         0.64     0.37    0.40           0.74     0.43    0.38
14 topics         0.59     0.43    0.38           0.87     0.45    0.39
15 topics         0.56     0.41    0.38           0.79     0.42    0.39

The experimental analyses indicate that the cosine-based models are more relevant for health issues. Figure 2 shows the comparative results, with a graphical presentation of the data from two to fifteen health topics in tweets, presenting the distributed topic models under a comparative analysis of the cosine and Euclidean metrics. Based on this analysis, the report confirms that HdinNMF can group health-related tweets more accurately using similarity measures. For some numbers of topics the CEA increases while the NEMI values decrease, yet HdinNMF maintains its consistency (Table 2). The analysis report shows that HdinNMF obtains a better evaluation than the other models. Therefore, this topic model is the most appropriate for accessing topics and producing results from the clustering of the health corpus data.



5 Conclusion and Future Scope This work presents a technique for categorizing unstructured and structured data by analyzing text, audio, and visual data, and it detects health topics from real-time social media health care in global surveillance. The results show that the distributed topic models are balanced by evaluating sentiment analysis over health-related tweets within the Hadoop system, and the performance analysis indicates that different input contents can be effectively cataloged into different categories using the proposed methods. In this work, the cosine metric of the distributed topic models is extremely successful at detecting hidden topics in the corpus of tweets, and the numerical improvement is analyzed from 2 topics to 15 topics. The distributed topic models considered under the cosine metric achieve a better implementation across the number of topics for the tweet corpus. They are therefore better suited for future work with unbounded numbers of topics on health tweets; tracking how much time and memory each topic model consumes remains a challenging second objective for obtaining relevant and accurate results. Future work will emphasize social media for medical affairs, corporate communications, and medical societies, healthcare stakeholder segmentation, healthcare network analysis, and healthcare sentiment analysis on broad datasets, as well as how many users search for or view relevant details in health tweets. This assistance briefly describes the public health structure of the country and tracks the evolution of the main health-related tweets. The health system has emerged worldwide so that every community is well informed; in real time, health problems are analyzed with public participation using Twitter sentiment analysis. Acknowledgements The research work was supported by governmental and private health organizations, junior and senior researchers, and the research and development wings of the health science institutions, whose excellent advice throughout the study is gratefully acknowledged.


References

1. Rashid, J., et al.: Topic modeling technique for text mining over biomedical text corpora. https://doi.org/10.1109/ACCESS.2019.2944973
2. Holzinger, A., Schantl, J., Schroettner, M., Seifert, C., Verspoor, K.: Biomedical text mining: state-of-the-art, open problems and future challenges. In: Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, pp. 271-300. Springer, Berlin, Germany (2014)
3. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993-1022 (2003)
4. Landauer, T.K., Laham, D., Rehder, B., Schreiner, M.E.: A comparison of latent semantic analysis and humans. In: Proceedings of 19th Annual Meeting of the Cognitive Science Society, pp. 412-417 (1997)
5. Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. J. Amer. Soc. Inf. Sci. 41(6), 391-407 (1990)
6. Dredze, M.: How social media will change public health. IEEE Intell. Syst. 27(4), 81-84 (2012)
7. Neuhauser, A.: Health Care Harnesses Social Media. U.S. News (2014). https://www.usnews.com/news/articles/2014/06/05/health-care-harnesses-social-media
8. Hawn, C.: Take two aspirin and tweet me in the morning: how Twitter, Facebook, and other social media are reshaping health care. Health Aff. 28(2), 361-368 (2009)
9. Subbarayudu, Y., Patil, S., Ramyasree, B., Praveen Kumar, C., Geetha, G.: Assort-EHR graph based semi-supervised classification algorithm for mining health records. J. Adv. Res. Dyn. Control Syst. EID: 2-s2.0-85058439255
10. Agrawal, A.: What is wrong with topic modeling? (And how to fix it using search-based SE). IEEE Trans. Softw. Eng. (2016). https://doi.org/10.1016/j.infsof.2018.02.005
11. Blei, D.M.: Probabilistic topic models. Commun. ACM 55(4), 77-84 (2012)
12. Griffiths, T.L., Steyvers, M., Tenenbaum, J.B.: Topics in semantic representation. Psychol. Rev. 114(2), 211-244 (2007)
13. Wallach, H.M.: Topic modeling: beyond bag-of-words. In: Proceedings of ICML, pp. 977-984 (2006)
14. Griffiths, T.L., Steyvers, M.: Finding scientific topics. Proc. Nat. Acad. Sci. USA 101(1), 5228-5235 (2004)
15. Gürcan, F.: Major research topics in big data: a literature analysis from 2013 to 2017 using probabilistic topic models. In: Proceedings of the International Conference on Artificial Intelligence and Data Processing (IDAP), pp. 1-4 (2018)
16. Gurcan, F., Kose, C.: Analysis of software engineering industry needs and trends: implications for education. Int. J. Eng. Educ. 33(4), 1361-1368 (2017)
17. Tian, K., Revelle, M., Poshyvanyk, D.: Using latent Dirichlet allocation for automatic categorization of software. In: Proceedings of the 6th IEEE International Working Conference on Mining Software Repositories, pp. 163-166 (May 2009)
18. Zhai, Z., Liu, B., Xu, H., Jia, P.: Constrained LDA for grouping product features in opinion mining. In: Proceedings of Pacific Asia Conference on Knowledge Discovery and Data Mining, pp. 448-459 (2011)
19. Wu, Q., Zhang, C., Hong, Q., Chen, L.: Topic evolution based on LDA and HMM and its application in stem cell research. J. Inf. Sci. 40(5), 611-620 (2014)
20. Bagheri, A., Saraee, M., de Jong, F.: ADM-LDA: an aspect detection model based on topic modelling using the structure of review sentences. J. Inf. Sci. 40(5), 621-636 (2014)
21. Hong, L., Davison, B.D.: Empirical study of topic modeling in Twitter. In: Proceedings of the 1st Workshop on Social Media Analysis, pp. 80-88 (Jul 2010)
22. Chen, Z., Huang, Y., Tian, J., Liu, X., Fu, K., Huang, T.: Joint model for subsentence-level sentiment analysis with Markov logic. J. Assoc. Inf. Sci. Technol. 66(9), 1913-1922 (2015)
23. Liu, L., Tang, L., Dong, W., Yao, S., Zhou, W.: An overview of topic modeling and its current applications in bioinformatics. SpringerPlus 5, 1608 (2016)
24. Cohen, R., Aviram, I., Elhadad, M., Elhadad, N.: Redundancy-aware topic modeling for patient record notes. PLoS ONE 9 (2014). Article no. e87555

476

Y. Subbarayudu and A. Sureshbabu

25. Kintsch, W.: The potential of latent semantic analysis for machine grading of clinical case summaries. J. Biomed. Inform. 35(1), 37 (2002) 26. Cohen, T., Blatter, B., Patel, V.: Simulating expert clinical comprehension: Adapting latent semantic analysis to accurately extract clinical concepts from psychiatric narrative. J. Biomed. Inform. 41(6), 10701087 (2008) 27. Yeh, J.-F., Wu, C.-H., Chen, M.-J.: Ontology-based speech act identication in a bilingual dialog system using partial pattern trees. J. Amer. Soc. Inf. Sci. Technol. 59(5), 684694 (2008) 28. Ginter, F., Suominen, H., Pyysalo, S., Salakoski, T.: Combining hidden Markov models and latent semantic analysis for topic segmentation and labeling: method and clinical 29. Haoxiang, W.: Emotional analysis of bogus statistics in social media. J. Ubiquitous Comput. Commun. Technol. (UCCT) 2(03), 178–186 (2020) 30. Yan, X., Guo, J.: Learning topics in short text using ncut-weighted non-negative matrix factorization on term correlation matrix 31. Yan, X., Guo, J.: Clustering short text using Ncut-weighted non-negative matrix factorization. In: Proceedings CIKM 2012, pp. 2259–2262. HI, USA, Miami (2012) 32. REC2015. https://trec.nist.gov/pubs/trec24/trec2015.html 33. TREC2014. https://trec.nist.gov/pubs/trec23/trec2014.HTML 34. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent dirichletallocation. J. Mach. Learn. Res. 3, 993–1022 (2003) 35. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788–791 (1999) 36. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: Annual Conference on Neural Information Processing Systems, pp. 556–562 (2000) 37. Choo, J., Lee, C., Reddy, C.K., Park, H.: Utopian: user-driven topic modeling based on interactive nonnegative matrix factorization. IEEE Trans. Visual Comput. Graph. 19(12), 1992–2001 (2013) 38. Lee, D., Seung, H.: Algorithms for non-negative matrix factorization. In: Advances in Neural ˙Information Processing Systems 13, NIPS 2000, pp 556–562. Denver, CO, USA (2000) 39. Pattanodom, M., Iam-On N., Boongoen, T.: Clustering data with the presence of missing values by ensemble approach. In: 2016 Second Asian Conference on Defense Technology (ACDT) (2016). https://doi.org/10.1109/acdt.2016.7437660 40. Amelio, A., Pizzuti, C.: Is normalized mutual information a fair measure for comparing community detection methods? In: IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (2015) 41. Xu, G., Meng, Y., Chen, Z., Qiu, X., Wang, C., Yao, H.: Research on topic detection and tracking for online news texts. IEEE Access 7, 58407–58418 (2019). https://doi.org/10.1109/ access.2019.2914097 42. Huang, L., Ma, J., Chen, C.: Topic detection from microblogs using T-LDA and perplexity. In: 24th Asia-Pacific Software Engineering Conference Workshops (2017) 43. Li, Z., Shang, W., Yan, M.: News text classification model based on-topic model. In: 2016 IEEE/ACIS 15th International Conference on Computer And ˙Information Science (ICIS) (2016) 44. Al Amin, H.M., Arefin, M.S., Dhar, P.K.: A method for video categorization by analyzing text, audio, and frames. Int. J. Inf. Tecnol. https://doi.org/10.1007/s41870-019-00338-2 45. Bashar, A.: Intelligent development of big data analytics for manufacturing industry in cloud computing. J. Ubiquitous Comput. Commun. Technol. (UCCT) 1(01), 13–22 (2019)

RETRACTED CHAPTER: Cluster-Based Multi-context Trust-Aware Routing for Internet of Things

Sowmya Gali and N. Venkatram

This chapter was unintentionally published twice in this volume - Chapter 39, as a draft, and Chapter 40 [1], as the final version. For this reason and this reason alone, Chapter 39 has been retracted from the volume. Chapter 40 [1] should be considered the version of record and used for citation purposes. Springer apologizes to the readers for not detecting the duplication during the publication process. All authors agree to this retraction. [1] Gali S., Venkatram N. (2022) Energy-Efficient Cluster-Based Trust-Aware Routing for Internet of Things. In: Jeena Jacob I., Gonzalez-Longatt F.M., Kolandapalayam Shanmugam S., Izonin I. (eds) Expert Clouds and Applications. Lecture Notes in Networks and Systems, vol 209. Springer, Singapore. https://doi.org/10.1007/978-981-16-2126-0_40

S. Gali (B) · N. Venkatram Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (KLEF), Vaddeswaram, Guntur, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022, corrected publication 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_39


Energy-Efficient Cluster-Based Trust-Aware Routing for Internet of Things

Sowmya Gali and N. Venkatram

Abstract Nowadays, Internet-based communication is used in real time in almost every application; since the Internet is an open platform, security is the main concern for the Internet of Things (IoT). In this paper, we propose a novel routing scheme called cluster-based multi-context trust-aware routing (CMCTAR). For energy preservation, we suggest a clustering strategy as a preprocessing step in which the nodes with higher resources are selected as cluster heads. In CMCTAR, trust is evaluated using three different metrics: communication trust, nobility trust, and data-related trust. To judge node trustworthiness in an IoT network, a composite routing metric is calculated from these trust metrics. Simulation results are worked out for the proposed approach with respect to performance metrics such as average packet delivery ratio, malicious detection rate, and end-to-end delay. Even in the presence of malicious nodes, the proposed scheme shows extensive performance gains over conventional methods. Keywords Internet of things · Trust management · Clustering · Communication trust · Malicious detection rate · Network lifetime

1 Introduction

In recent years, the fast growth of the Internet of Things (IoT) has let it penetrate various fields, including medical care, climate monitoring, industrial manufacturing, and agricultural production. It can also be seen in daily life, as in the smart home and the Internet of Vehicles (IoV) [1]. The IoT is a very flexible network that can accomplish multiple tasks such as tracking, identification, monitoring, and management. Owing to the generalized nature of IoT, any device can be connected to the Internet through devices such as laser scanners, Global Positioning Systems (GPSs), radio frequency identification devices (RFIDs) [2], code recognition devices, and infrared sensors to exchange the


information as well as to communicate [3]. Taking advantage of IoT, many devices can connect to the Internet, communicate with each other, and exchange information, which brings great convenience to daily life. But as the number of connected devices increases, the information exchanged between devices also rises, with no guarantee of security [4]. For example, a person's location can be easily tracked, and from the location, the travel time and even his/her feelings can be inferred. This kind of unsecured communication between IoT devices leaves the network security-deficient [5]. Hence, ensuring security is an increasingly important issue in IoT. Toward security provision, a new trust management method is developed in this paper considering the communication scenario, the nobility between nodes, and the data processed by the nodes. Considering more parameters for authenticity makes the network more resilient toward all possible attacks, and nodes compromised by any type of attack can be detected more easily. Furthermore, to achieve energy efficiency and raise the network lifetime, a novel clustering mechanism is proposed in which only a few nodes are chosen to process the trust evaluation. An extensive simulation of the proposed mechanism shows its effectiveness in detecting compromised nodes and in improving network lifetime. The remaining paper is organized as follows. Earlier security strategies for IoT networks are reviewed in Sect. 2. The network considered in this paper is modeled in Sect. 3. Section 4 discusses the trust evaluation mechanism, which is composed of three different aspects; Sect. 4.5 describes the routing process. Experimental investigations are discussed in Sect. 5, and the final concluding remarks are given in Sect. 6.

2 Literature Survey

Trust management (TM) [6] has been developed for IoT networks and has a significant impact on the provision of a secure and stable IoT configuration. TM can be used to enhance the security of nodes, particularly when the node count grows beyond what a central administrator can handle. Node trustworthiness indirectly reflects node behavior and quality of service (QoS) [7, 24]. Providing security through cryptographic solutions strains the resource-constrained nodes and leaves them susceptible to compromise. In contrast, TM schemes do not involve much encryption complexity; this advantage makes TM a feasible way to enhance security in the IoT network. Furthermore, TM provides constant monitoring and observation of node performance and behavior, assesses trustworthiness, and determines only the nodes that are trustworthy enough to collaborate with. Table 1 gives a comparative analysis. Various TM schemes have been developed to ensure trust-based routing in IoT networks. In the trust evaluation process, the parameters considered to define the


Table 1 Literature survey comparison

References | Methodology | Year | Possible cons
Chen et al. [8] | Trust and reputation model TRM-IoT based on fuzzy set theory | 2011 | Less focus on resource constraints
Bao and Chen [9] | Dynamic trust management mechanism considering honesty and cooperativeness as main factors | 2012 | Energy constraints are not considered
Nitti et al. [11] | Objective and subjective evaluation based on the social behavior of nodes | 2014 | Additional storage burden due to hash table
Gai et al. [14] | Multi-dimensional trust elements such as social relationship, QoS, and reputation | 2017 | Energy issue is not considered
Alshehri et al. [16] | CITM-IoT: memory used as a reference resource to evaluate the trustworthiness of a node | 2016 | Memory does not cover the maximum number of attacks
Jabeur et al. [20] | Clustering done in two phases (macro and micro); firefly algorithm used for clustering | 2017 | Two levels of clustering constitute a heavy computational burden

trustworthiness of nodes play a crucial role in security provision. If these parameters are more in number, then the selected node is said to be more trustworthy and also robust. Using this multi-parameter strategy, Chen et al. [8] proposed a new trust and reputation model, TRM-IoT, based on fuzzy reputation, where the parameters considered for trust evaluation are packet delivery ratio (PDR), energy consumption, and end-to-end packet forwarding ratio. Concentrating only on trust and reputation, however, may reduce the lifetime of the network. Next, a method proposed by Bao and Chen [9] considered community-interest, cooperativeness, and honesty for trust evaluation. In the same year, a new trustworthiness strategy called the 'Social Internet of Things (SIoT)' was proposed by Atzori et al. [10], in which trust is evaluated based on social network aspects. Further, Nitti et al. [11] developed two trust models, an objective evaluation model and a subjective evaluation model, for trust management. Here, the trust value of every node is measured by considering the social behavior of the node toward its neighbor nodes; the trust value is evaluated in an indirect way, i.e., the opinion of neighbor nodes decides the trustworthiness of every node. Similarly, Kogias et al. [12] proposed a trust and reputation model for IoT (TRM-SIoT). It combines peer-to-peer and MANET techniques, adapting them to IoT protocols; according to this method, each thing can compute the trustworthiness of anything in the network, considering its own experience or referring to its friends or the platform. Next, the method proposed by Bernabe et al. [13] provides a flexible access control system for trust management in IoT networks, called TAC-IoT. Gai et al. [14] suggested a multifaceted trust


evaluation method for anomaly detection in IoT. This trust model considers multi-dimensional trust elements such as social relationship, quality of service, and reputation. However, all these methods did not focus on energy preservation, which is most important in the resource-constrained IoT network. Varghese et al. [15] proposed a cluster-based, self-organized, energy-efficient trust management scheme to achieve energy-preserving secure communication between nodes in IoT. Here, the trust model is derived depending on the time identity to punish malicious nodes. This method clusters the nodes based on their energy requirements, and the trust model considers only the PDR as a reference metric, which is not sufficient. Recently, a 'Clustering-Driven Intelligent Trust Management Methodology for the Internet of Things (CITM-IoT)' was proposed by Alshehri et al. [16], which addresses scalability and provides a solution for countering bad-mouthing attacks. This approach considers memory as a reference resource to estimate the trustworthiness of a node. Further, a clustering strategy is also developed in which the entire node set is categorized into supernodes (SNs), master nodes (MNs), and cluster nodes (CNs). But this approach did not discuss energy preservation, and it adopted cooperative communication between the cluster nodes, by which the energy consumption increases greatly. Recently, a Fuzzy C-Means (FCM) [17] clustering-based cluster head selection was used to cluster the nodes in IoT by P. K. Reddy and R. S. Babu [18]; an 'Optimal Secure and Energy Aware Protocol (OSEAP)' and an 'Improved Bacterial Foraging Optimization (IBFO)' [19] algorithm were employed there. However, the FCM algorithm is not well suited for clustering nodes: in FCM, the nodes are clustered based on their significance, whereas the nodes actually need to be clustered with respect to their distance from other nodes. Furthermore, the IBFO incurs extra computational burden in the route establishment process when the source node wants to send information to destination nodes, and there is no discussion of a node selection strategy, i.e., no mechanism that measures the trust degree of nodes. A recent bio-inspired clustering method proposed by Jabeur et al. [20], toward adaptive spatial clustering for IoT, is based on the firefly algorithm. In their approach, the authors developed a protocol in which data regarding the trust relationship is stored at the nodes. Further, a convergence-based trust relationship evaluation is proposed in this work to establish trust for the nodes in the process. However, this approach fails to address critical issues like energy preservation and storage requirements.

3 Network Model

This approach models the IoT network as a clustered network in which the nodes are categorized into cluster heads (CHs) and cluster nodes (CNs). In each cluster, the CH has higher energy than the CNs, and all the CNs communicate with the CH directly. Further, a CH communicates with other CHs to forward the information from source to destination. Each and every CN


has a unique identity, and it can belong to more than one cluster. Since the CH needs to process the data of multiple CNs, its energy needs to be high; hence, the CHs are chosen based on a maximum-energy policy. Initially, among all available nodes of the network, the nodes with more energy and more trustworthiness are selected as cluster heads. Next, the clusters are formed based on the Euclidean distance between the CHs and the remaining nodes. For every CH, the cluster nodes are decided based on the neighboring distance between the CH and the nodes: the nodes that fall in the communication range of a CH are considered its CNs, and the range is defined as the average distance between the CH and the remaining nodes (a sketch of this clustering procedure is given at the end of this section). A sample cluster-based IoT network architecture is shown in Fig. 1. As shown in Fig. 1, the network consists of CHs and CNs, and the clusters are formed based on the Euclidean distance. Some clusters overlap, and the nodes that fall in an overlapping region can connect to any of the CHs available to them. Based on this aspect, the connections between nodes and CHs are categorized as strictly connected nodes (SN) and volatile connected nodes (VN). The main intention behind the SN/VN categorization is to ensure connectivity: a cluster node expects its next-hop neighbor to cooperate until its data transfer is complete, which is possible only if the next-hop neighbor has more connectivity, i.e., it is a member of more than one cluster. Except for the CH to which the nodes connect each time, all other characteristics are the same for SNs and VNs. Different representations are used for these node types, as shown in Fig. 1. A strictly connected node is purely in the communication

Fig. 1 Network model of cluster-based IoT network


range of only one CH, whereas a volatile connected node can connect to any of the CHs available to it, depending on the distance between the VN and the CHs; the VN connects to the CH to which it is closest and for which the highest trustworthiness exists. As the distance between nodes increases, the power needed to communicate also increases; hence, to preserve energy, the VNs connect to the CHs at the smallest distance. In the case of equal distances to all CHs, the VN can connect to any of them. This paper makes the following assumptions:

1. The IoT network is cluster-based; the communication between a CH and its respective CNs is direct, whereas the communication between CHs is free.
2. Each IoT node has a unique ID, and it can belong to a single cluster or multiple clusters.
3. The energy of CHs is assumed to be greater than the energy of CNs, and the data propagating in the IoT network is hybrid in nature; i.e., it can be continuous or event-driven.
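As an illustration of the clustering procedure above, the following is a minimal Python sketch, assuming 2-D node coordinates held in a NumPy array and known per-node energies; the function name, the fixed number of CHs, and the data layout are illustrative assumptions, not part of the original scheme.

```python
import numpy as np

def form_clusters(positions, energies, num_ch):
    """Pick the num_ch highest-energy nodes as cluster heads (CHs); a CH's
    communication range is its average Euclidean distance to all other nodes.
    A node covered by one CH is strictly connected (SN); a node covered by
    several CHs is volatile (VN) and later picks the closest/most trusted CH."""
    n = len(positions)
    ch_ids = [int(i) for i in np.argsort(energies)[-num_ch:]]  # maximum-energy policy
    ranges = {ch: np.mean([np.linalg.norm(positions[ch] - positions[k])
                           for k in range(n) if k != ch])
              for ch in ch_ids}
    members = {ch: [] for ch in ch_ids}
    for node in range(n):
        if node in ch_ids:
            continue
        for ch in ch_ids:
            if np.linalg.norm(positions[node] - positions[ch]) <= ranges[ch]:
                members[ch].append(node)   # a VN appears under several CHs
    return ch_ids, members
```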

4 Trust Evaluation Scheme

Based on the network model described in the earlier section, a novel trust evaluation mechanism is introduced here. After the nodes are grouped into clusters, trust evaluation is performed to measure the trustworthiness of the CNs, and this responsibility is taken completely by the CH. For a particular cluster, the communication between the CNs and the CH should follow a uniform format. Hence, the trust evaluation between CH and CNs consists of direct trust and indirect trust. Trust is evaluated periodically so that malicious nodes can be detected at any time. The proposed trust evaluation mechanism considers multi-dimensional trust. It is composed of two different trusts, namely data-related trust and network-related trust. To make the network resilient against all possible attacks, this work considers trust evaluation in a multi-dimensional fashion. The innovative trust considered here is the data-related trust: since some types of attacks compromise the network by tampering with the data transferred through it, this approach also evaluates trust through the data. In the network-related trust, the evaluation is carried out based on the communication and nobility of CNs. So the final composite routing metric considers three parameters in total for deciding on the malicious nature of nodes. In this work, we assume the IoT is a randomly deployed network with a particular set of nodes connected through the Internet. However, the major issue in IoT or WSN is the design of an efficient trust management mechanism that can assure secure data transmission in the network. Compared to a WSN, the IoT faces more challenges due to its Internet-based connections; hence, it needs more analysis of node trustworthiness before selecting nodes as next-hop nodes.


4.1 Communication Trust

The communication trust is assessed by measuring the number of communication interactions between the CH and a CN. According to the trust evaluation methodology described in [21], the communication trust between two nodes takes both direct and indirect forms. In this paper, the CH takes the responsibility of trust evaluation; since the CH has more energy, it can compute the trustworthiness of the nodes in its own cluster. Here, direct trust means the CH's own experience with a CN j, and indirect trust means the neighbor nodes' experience with that CN j. Since there are P neighbor nodes, the indirect trust is obtained by averaging the trust values of all neighboring nodes of CN j. The total trust between CH i and CN j is formulated as follows:

$$\mathrm{TT}_{i,j} = w_1 \times \mathrm{DT}_{i,j} + w_2 \times \sum_{p=1}^{P} \frac{\mathrm{IT}_{i,j}^{p}}{N_p} \tag{1}$$

where DT_{i,j} indicates the direct trust between CH i and CN j, IT^p_{i,j} denotes the indirect trust that CH i acquires from the adjacent node p of node j, and N_p is the number of adjacent nodes of node j. The indirect trust IT^p_{i,j} is represented by Eq. (2):

$$\mathrm{IT}_{i,j}^{p} = \frac{\sum_{k \in N_k,\, k \neq j} \mathrm{DT}_{i,p} \times \mathrm{DT}_{p,j}}{\sum_{k \in N_k,\, k \neq j} \mathrm{DT}_{i,p}} \tag{2}$$

where DT_{i,p} indicates the direct trust between CH i and one of its CNs p, which is a neighbor node of CN j, and DT_{p,j} indicates the direct trust between CN j and its neighboring CN p. In this paper, communication trust refers to all types of communication interactions, including receiving and sending route request packets (RREQ), route reply packets (RREP), and data packets. As in [22], a greater number of communications between nodes reflects a higher trust value. But in an IoT network, if the total count of communication interactions exceeds a predefined threshold, the trust value is reduced, for the reason that the interactions may be due to attackers that send a huge number of packets to make the node die quickly. Hence, this paper considers the communication trust between a node and its CH to define the trust value. Based on the statistics of the probability density function (PDF), which has a normalized range of [0, 1], the communication trust is computed when the total number of communication interactions is greater than the predefined threshold. The communication trust value of CN j obtained by CH i in the time interval can be formulated as [23]:


$$\mathrm{CT}_{i,j}(t) = \begin{cases} 10 \times \left\lfloor \dfrac{\mathrm{DT}_{i,j}}{\mu} \right\rfloor, & \text{if } \mathrm{TT} \le \mathrm{Th}_{\mathrm{Int}} \\ 10 \times \exp\left(-\dfrac{\left|\mathrm{DT}_{i,j}-\mu\right|}{\vartheta}\right), & \text{if } \mathrm{TT} > \mathrm{Th}_{\mathrm{Int}} \end{cases} \tag{3}$$

where DT_{i,j} defines the total number of direct communication interactions between CH i and CN j, and μ is the average number of communication interactions between CH i and all the CNs in that cluster. ⌊z⌋ represents the largest integer less than or equal to z. Th_Int is the threshold of communication interactions, formulated as the product of an arbitrary constant ω and μ, i.e., ω × μ; the value of ω varies depending on the communication paradigm and defines the upper limit of communication interactions. ϑ is an arbitrary constant used here to keep the values in the range [0, 10] for uniformity. This value is set to 1, 10, or 100 depending on the total number of communication interactions: ϑ is 1 for a single-digit number of interactions, 10 for a two-digit number, and 100 for a three-digit number.
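A minimal numeric sketch of Eq. (3); the values chosen below for ω and ϑ are assumptions, since the text says both are environment-dependent.

```python
import math

def communication_trust(dt_ij, mu, total_interactions, omega=1.5, vartheta=10):
    """Eq. (3): piecewise communication trust.
    dt_ij              - direct interactions between CH i and CN j
    mu                 - average interactions between CH i and all its CNs
    total_interactions - TT, the total interaction count
    omega, vartheta    - assumed example constants (see text)."""
    th_int = omega * mu                           # threshold Th_Int = omega * mu
    if total_interactions <= th_int:
        return 10 * math.floor(dt_ij / mu)        # floor branch of Eq. (3)
    return 10 * math.exp(-abs(dt_ij - mu) / vartheta)
```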

4.2 Nobility Trust

The nobility trust is measured with respect to the packet forwarding probability (PFP) of a node. The PFP is related to the numbers of successful and unsuccessful interactions. In the IoT network, CH i overhears CN j; an interaction is unsuccessful if CN j does not deliver the probe packet in the specified time interval or transmits the packet to a node that is not present on its route. For instance, suppose node A receives 7 of the 10 probe packets sent by node B, while node B receives 9 of the 10 packets sent by node A. Then the packet loss rate (PLR) from B to A is 0.3 and the packet success rate (PSR) is 0.7, whereas the PLR from A to B is only 0.1; hence, the probability that a packet is transmitted successfully from node B to node A in a predefined time interval is 0.7. Here, the overhearing is done through HELLO packets instead of separate packets, which reduces the additional energy consumption and overhead incurred at the CH. Based on this concept, the nobility trust is derived as the ratio of the number of received data packets to the total number of expected data packets. If CN j received a packet from CH i, or forwarded a data packet received from the CH, it sends a probe packet to the CH, which is treated as a favorable interaction; otherwise, the interaction is treated as unfavorable. If, at that instant, a CN is compromised by a selective forwarding attack, then only a portion of the packets reaches the CH or the other cluster nodes, and the probe packets received are very few in number; this indicates malicious activity in the cluster. We observe that the higher the ratio of favorable communications to total communications, the higher the trust value. This paper uses the following formula to measure the nobility trust between CH i and CN j:

$$\mathrm{NT}_{i,j}(t) = 10 \times \frac{P_{\mathrm{received}}(t_{i-1}, t_i)}{P_{\mathrm{expected}}(t_{i-1}, t_i)} \tag{4}$$

where P_received(t_{i−1}, t_i) is the total number of HELLO packets received at CH i from CN j, representing the total number of favorable interactions, and P_expected(t_{i−1}, t_i) is the total number of packets expected at CH i from CN j in the predefined interval (t_{i−1}, t_i). A high nobility trust of CN j toward its CH denotes high trustworthiness, while a low value indicates that the CN is malicious: the maximum value of NT is 10 (fully trustworthy), the minimum is 0 (malicious), and a medium value of NT = 5 denotes medium trust of CN j toward the CH.
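Eq. (4) reduces to a scaled packet ratio; a one-function sketch follows (the zero-division guard is an added assumption).

```python
def nobility_trust(p_received, p_expected):
    """Eq. (4): HELLO packets received vs. expected in (t_{i-1}, t_i], scaled to [0, 10]."""
    return 10.0 * p_received / p_expected if p_expected > 0 else 0.0
```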

4.3 Data-Related Trust

The data trust is measured by the CH node by observing the data propagating in the network; attacks like data tampering can be detected through this analysis. In an IoT network, the nodes transmit multi-dimensional data to the cluster head, and any deviation in the observed data can be regarded as an abnormality. The change within the data and the effective average of the observed data govern the data-related trust. Since the data is transmitted in numerical format, the Euclidean distance is used as the reference metric for measuring deviation. Let μ_k and σ_k be the mean and standard deviation of the kth dimension of the data; the average limit of observed data is then [μ_k − σ_k, μ_k + σ_k]. The observed data cannot be less than the lower limit (μ_k − σ_k) of this range and should not be greater than the upper limit (μ_k + σ_k). Furthermore, if the node density in the IoT network is high, then all nodes observe approximately the same type of data, and it is difficult for any node to report dissimilar data unless it is attacked or compromised. The data-related trust evaluated between CH i and CN j is represented as [23]:

$$\mathrm{AT}_{i,j}(t) = 10 \times \exp\left(-D_{i,j}\right) \tag{5}$$

where

$$D_{i,j} = \sqrt{\sum_{k=1}^{K} \left(x_{ik} - x_{jk}\right)^{2}} \tag{6}$$

Here, D_{i,j} indicates the Euclidean distance between the average limit of the multi-dimensional data observed by CH i and the data observed by CN j, K represents the dimensionality of the data, and x_{ik} and x_{jk} represent the average limit of the kth dimensional data stored at the CH and the kth dimensional data observed by the CN, respectively.
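A short sketch of Eqs. (5)-(6), assuming the CH's stored average limits and the CN's observation are given as equal-length numeric sequences:

```python
import math

def data_trust(x_ch, x_cn):
    """Eqs. (5)-(6): data-related trust from the Euclidean distance between the
    CH's stored average limits and the CN's observed K-dimensional data."""
    d_ij = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_ch, x_cn)))  # Eq. (6)
    return 10 * math.exp(-d_ij)                                      # Eq. (5)
```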

4.4 Overall Trust (OT)

The OT of CN j measured by CH i is a combination of the communication trust CT_{i,j}(t), the nobility trust NT_{i,j}(t), and the data trust AT_{i,j}(t), and it is formulated as follows:

$$\mathrm{OT}_{i,j}(t) = \alpha \times \mathrm{CT}_{i,j}(t) + \beta \times \mathrm{NT}_{i,j}(t) + (1 - \alpha - \beta) \times \mathrm{AT}_{i,j}(t) \tag{7}$$

where α and β are two arbitrary constants in the range [0, 1]. These two constants signify the priority of the three trusts; they act as weights for each sub-trust value, where a greater value assigns more importance and a smaller value less importance. For example, if α = 0.5, β = 0.3, and (1 − α − β) = 0.2, then the CH concentrates mostly on the communication trust. In such a case, the decision is most strongly influenced by the communication trust: the verdict on a node's trustworthiness given by the communication trust is almost always confirmed by the overall trust. The values of these two arbitrary constants vary depending on the environment. Considering an evaluator node a and an evaluated node b, the complete step-by-step trust evaluation of the CNs is as follows:

1. Node a measures the communication trust of node b with the help of Eq. (3).
2. Node a measures the nobility trust of node b with the help of Eq. (4).
3. Node a measures the data-related trust of node b with the help of Eq. (5).
4. Node a measures the overall trust of node b with the help of Eq. (7).
5. Repeat steps 1 to 4 for every set of CNs.
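Combining the pieces, Eq. (7) and steps 1-4 can be sketched as below; the default weights are the example values α = 0.5, β = 0.3 quoted in the text, and the helper names refer to the sketches given earlier in this section.

```python
def overall_trust(ct, nt, at, alpha=0.5, beta=0.3):
    """Eq. (7): weighted combination of the three sub-trusts (alpha + beta <= 1)."""
    return alpha * ct + beta * nt + (1 - alpha - beta) * at

# Steps 1-4 for one evaluated node b (step 5 repeats this over every CN):
# ot_b = overall_trust(communication_trust(...), nobility_trust(...), data_trust(...))
```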

4.5 Routing Process

Once the network model is constructed and cluster formation is completed based on the exchange of initial trust values, the optimal path needs to be established from the available routes for a given source and destination node pair. In general, the proposed trust evaluation model is agnostic to the routing process used in the network to accomplish route selection [25]. Once the nodes have completed route discovery according to the adopted routing protocol, the network does not enter the data transmission phase immediately but first measures the overall trust of the nodes. Once the source node is decided, the trust value of the destination node is set to 1, since determining its trust value is not significant. Hence, the trust value T of a hop link r between node x and its downstream node y is formulated as

$$T(r) = \prod \left(\left\{T(x, y) \mid x, y \in r,\ x \rightarrow y\right\}\right) \tag{8}$$

In general, there exist many links for any source and destination node pair, and the source node needs to choose an optimal and secure path by measuring the trust of all the links.
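As a sketch of this selection step, assuming per-hop trust values are aggregated multiplicatively along a route (one common reading of Eq. (8)); the route and trust-table representations are illustrative.

```python
def best_route(routes, link_trust):
    """Return the candidate route with the highest aggregate trust T(r)."""
    def route_trust(route):
        t = 1.0
        for x, y in zip(route, route[1:]):      # consecutive hops x -> y
            t *= link_trust[(x, y)]
        return t
    return max(routes, key=route_trust)

# Example: two candidate routes from S to D over links with known trust values.
trust = {("S", "A"): 0.9, ("A", "D"): 0.8, ("S", "B"): 0.95, ("B", "D"): 0.6}
print(best_route([["S", "A", "D"], ["S", "B", "D"]], trust))  # ['S', 'A', 'D']
```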

5 Experimental Evaluations

In this section, the parameters evaluated for the proposed trust management mechanism are described. The results include the malicious detection rate (MDR), false positive rate (FPR), false negative rate (FNR), average packet delivery ratio (APDR), and storage overhead. All these parameters are calculated by varying the number of malicious nodes. An IoT network with P nodes is created to simulate the proposed method, over an area of M × N, where N represents the width and M the length of the network. The simulation metrics are listed in Table 2.

5.1 Simulation Environment

5.2 Simulation Results

The performance metrics are measured over a network deployed with the simulation parameters given in Table 2. Initially, a network with N nodes is created, and the proposed methodology is applied to it. In every simulation, the proportion of malicious nodes is varied, and the performance metrics, namely MDR, FPR, FNR, APDR, and storage overhead, are measured.

Table 2 Simulation metrics

Metric | Value
Number of nodes in the network | 20–60
Area of the network | 1000 × 1000 m²
Packet size | 512 bytes
Traffic source | CBR
Node placement | Random
Malicious nature | 0–40% of total nodes
Simulation time | 50 s
α, β | 0 ≤ α, β ≤ 1
λ | 1, 2, 3, and 4
w1, w2 | 0 ≤ w1, w2 ≤ 1

Fig. 2 MDR for varying percentage of malicious nodes (curves: CMCTAR-IoT, ASC-IoT [20], CITM-IoT [19]; x-axis: % of malicious nodes; y-axis: malicious detection rate)

The obtained performance metrics of the proposed method are compared with those of the existing methods, and the observed results are described in the following figures. Figure 2 illustrates the variation of the malicious detection rate with the number of malicious nodes. From Fig. 2, it can be noticed that the MDR falls as the percentage of malicious nodes increases. But the MDR of CMCTAR-IoT is observed to be high compared to the MDR of the existing methods: for a given percentage of malicious nodes, the MDR of CMCTAR-IoT is greater than that of the conventional approaches, ASC-IoT and CITM-IoT. Since the proposed CMCTAR-IoT considers trust assessment in multiple orientations, the obtained trust value of a node represents all compromising possibilities; hence, any node compromised by any type of attack is detected easily. This detection boosts the MDR, because the MDR is the ratio of the total nodes detected as malicious to the total number of originally malicious nodes. Furthermore, the conventional approaches considered only the memory requirement as the main reference parameter for trust evaluation, which is not sufficient to detect all nodes compromised by various attacks. FPR is a metric with an inverse relationship to MDR: as MDR increases, the FPR decreases, and vice versa, since the FPR is determined as the ratio of nodes detected as normal when they are actually malicious. The main objective of the proposed CMCTAR-IoT is to detect the malicious nodes; the nodes detected as normal when they are malicious fall under false positives (FPs), and as the number of FPs increases, the FPR also increases.

Fig. 3 FPR for varying percentage of malicious nodes (curves: CMCTAR-IoT, ASC-IoT [20], CITM-IoT [19]; x-axis: % of malicious nodes; y-axis: false positive rate)

The FPR shows an increasing characteristic with increasing malicious nature, as shown in Fig. 3. However, the proposed CMCTAR-IoT has a lower FPR compared to the conventional approaches. The main reason behind this lower FPR is the composition of multiple factors in the trust evaluation of every IoT node: due to the continually updated evaluation of node trustworthiness, all malicious nodes can be detected at least once, resulting in a lower FPR. Figure 4 shows the APDR for a varying number of malicious nodes. As the percentage of malicious nodes increases, the malicious nodes do not cooperate in the general communication process, due to which packets cannot reach the destination, resulting in a lower packet delivery ratio. From Fig. 4, it can be noticed that the APDR declines with increasing maliciousness. But the APDR of the proposed CMCTAR-IoT remains high compared to the existing methods. Since the proposed approach uses a novel clustering mechanism and an efficient multi-facet trust evaluation strategy, compromised nodes are detected and removed from the network; an alternative path is then immediately allocated to the source node, through which packets can move more successfully toward the destination node. This yields an increased APDR, which remains better even as the number of malicious nodes grows. Furthermore, the conventional approaches formulated only a single reference metric to define the trustworthiness of nodes, while attackers have many options for compromising a node. Storage overhead defines the amount of extra storage burden incurred at a node.

Fig. 4 Average packet delivery ratio (APDR) for varying malicious nodes (curves: CMCTAR-IoT, ASC-IoT [20], CITM-IoT [19]; x-axis: % of malicious nodes; y-axis: APDR)

In the developed mechanism, the trust evaluation is performed only at the CH; to do this, the CH must have knowledge of CN information such as the number of interactions, energy, and observed data. To store all this information, the CH should have a larger storage capacity; otherwise, the number of CHs must be higher. As the number of clusters increases, the storage overhead at a single CH decreases, as shown in Fig. 5. The proposed clustering mechanism is dynamic in nature, and the number of clusters to be formed depends on the total number of nodes and the network area. Hence, the storage overhead of CMCTAR-IoT is observed to be lower than that of the existing methods.

Fig. 5 Storage overhead for varying number of clusters (curves: CMCTAR-IoT, ASC-IoT [20], CITM-IoT [19]; x-axis: number of clusters; y-axis: storage overhead in bytes)

Fig. 6 Network lifetime for varying malicious nodes (curves: CMCTAR-IoT, ASC-IoT [20], CITM-IoT [19]; x-axis: % of malicious nodes; y-axis: network lifetime in seconds)

The network lifetime of any network is directly related to the energy consumption of the nodes. With an increase in the percentage of malicious nodes, more nodes are compromised; in such a situation, finding a trustworthy node to forward the information is a time-consuming process and also consumes more energy. Excess energy consumption directly affects the lifetime of a node, which in turn impacts the network lifetime. The decrease of network lifetime with increasing maliciousness is shown in Fig. 6. However, the proposed CMCTAR-IoT is observed to have a better lifetime compared to the existing methods. Due to the dynamic clustering technique, the nodes with more energy are selected as CHs, and the entire trust evaluation process is performed at the CH only, preserving the energy of the remaining nodes and thereby increasing the overall network lifetime.

6 Conclusion

To achieve secure and energy-efficient data transmission between IoT devices, this paper proposed a novel trust management scheme along with an effective clustering scheme. By considering multiple factors in the trust evaluation, this approach


ensures secure data transmission with less power consumption. Simulation experiments conducted with different numbers of malicious nodes showed the performance effectiveness with respect to MDR, APDR, and network lifetime. On average, the proposed CMCTAR gained an improvement of 4% and 8% in MDR over the conventional approaches ASC-IoT and CITM-IoT, respectively. Simultaneously, the improvement in APDR is observed to be 4% and 7% over ASC-IoT and CITM-IoT, respectively. From these observations, it can be concluded that the proposed approach is more resilient to various network attacks and provides secure and reliable data transmission with fewer resources.

References

1. Qiu, T., Liu, X., Li, K., et al.: Community-aware data propagation with small world feature for internet of vehicles. IEEE Commun. Mag. 56(1), 86–91 (2018)
2. Liu, X., Li, K., Guo, S., et al.: Top-k queries for categorized RFID systems. IEEE/ACM Trans. Netw. 25(5), 2587–2600 (2017)
3. Atzori, L., Iera, A., Morabito, G.: The internet of things: a survey. Comput. Netw. 54(15), 2787–2805 (2010)
4. Raja, S.P., Rajkumar, T.D., Raj, V.P.: Internet of things: challenges, issues and applications. J. Circuit Syst. Comp. 27(12), 1830007 (2018)
5. Liang, Y., Cai, Z., Yu, J., et al.: Deep learning based inference of private information using embedded sensors in smart devices. IEEE Netw. Mag. 32, 8–14 (2018)
6. Yan, Z., Zhang, P., Vasilakos, A.V.: A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 42, 120–134 (2014)
7. Ishmanov, F., Malik, A.S., Kim, S.W., Begalov, B.: Trust management system in wireless sensor networks: design considerations and research challenges. Trans. Emerg. Telecommun. Technol. 26, 107–130 (2015)
8. Chen, D., Chang, G., Sun, D., Li, J., Jia, J., Wang, X.: TRM-IoT: a trust management model based on fuzzy reputation for internet of things. Comput. Sci. Inf. Syst. 8(4), 1207–1228 (2011)
9. Bao, F., Chen, I.R.: Dynamic trust management for internet of things applications. In: Proceedings of the International Workshop on Self-Aware Internet of Things, California, USA, pp. 1–6 (2012)
10. Atzori, L., Iera, A., Morabito, G., Nitti, M.: The social internet of things (SIoT)—when social networks meet the internet of things: concept, architecture and network characterization. Comput. Netw. 56(16), 3594–3608 (2012)
11. Nitti, M., Girau, R., Atzori, L.: Trustworthiness management in the social internet of things. IEEE Trans. Knowl. Data Eng. 26(5), 1253–1266 (2014)
12. Kokoris Kogias, E., Voutyras, O., Varvarigou, T.: TRM-SIoT: a scalable hybrid trust & reputation model for the social internet of things. In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–9 (2016)
13. Bernabe, J.B., Ramos, J.L.H., Gomez, A.F.S.: TAC-IoT: multidimensional trust aware access control system for the internet of things. Soft Comput. 20(5), 1–17 (2016)
14. Gai, F., Zhang, J., Zhu, P., Jiang, X.: Multidimensional trust-based anomaly detection system in internet of things. Springer International Publishing, pp. 302–313 (2017)
15. Varghese, R., Chithralekha, T., Kharkongor, C.: Self-organized cluster based energy efficient meta trust model for internet of things. In: 2nd IEEE International Conference on Engineering and Technology (ICETECH), Coimbatore, India (2016)
16. Alshehri, M.D., Hussain, F.K., Hussain, O.K.: Clustering-driven intelligent trust management methodology for the internet of things (CITM-IoT). Mobile Netw. Appl. 23(3), 419–431 (2018)
17. López, T.S., Brintrup, A., Isenberg, M.A., Mansfeld, J.: Resource management in the internet of things: clustering, synchronization and software agents. In: Mark, H., Uckelman, D., Michahelles, F. (eds.) Architecting the Internet of Things. Springer (2011). ISBN 978-3-642-19156-5
18. Reddy, P.K., Babu, R.S.: An evolutionary secure energy efficient routing protocol in internet of things. Int. J. Intell. Eng. Syst. 10(3), 337–346 (2017)
19. Rajagopal, A.: Soft computing based cluster head selection in wireless sensor network using bacterial foraging algorithm. Int. J. Electron. Commun. Eng. 9(3), 379–384 (2015)
20. Jabeur, N., Yasar, A.U.-H., Shakshuki, E., Haddad, H.: Toward a bio-inspired adaptive spatial clustering approach for IoT applications. Future Gener. Comput. Syst. (2017)
21. Gali, S., Nidu, V.: Multi-context trust aware routing for internet of things. Int. J. Intell. Eng. Syst. 12(1) (2019)
22. Hao, F., Min, G., Lin, M., Luo, C., Yang, L.: MobiFuzzyTrust: an efficient fuzzy trust inference mechanism in mobile social networks. IEEE Trans. Parallel Distrib. Syst. 25(11), 2944–2955 (2014)
23. Zhang, Z., Zhu, H.L., Luo, S., Xin, Y., Liu, X.: Intrusion detection based on state context and hierarchical trust in wireless sensor networks. IEEE Access 5, 12088–12102 (2017)
24. Raj, J.S.: QoS optimization of energy efficient routing in IoT wireless sensor networks. J. ISMAC 1(01), 12–23 (2019)
25. Haoxiang, W., Smys, S.: Soft computing strategies for optimized route selection in wireless sensor network. J. Soft Comput. Paradigm (JSCP) 2(01), 1–12 (2020)

Toward Intelligent and Rush-Free Errands Using an Intelligent Chariot

N. J. Avinash, Hrishikesh R. Patkar, P. Sreenidhi, Sowmya Bhat, Renita Pinto, and H. Rama Moorthy

Abstract In a supermarket or a mall, people come to purchase products, and at the time of payment they need to calculate and know the total bill, which is hectic. To overcome this problem, an application is created which keeps track of the transaction history of both past and current billing records. This project is done to simplify shopping methods and reduce the long queue during the billing process. In previous models, authors failed to make use of applications for shopping; also, the previously proposed models placed an RFID scanner in every trolley to reduce queues, which was more expensive. So, basically, there was no application created for shopping, and an alternative way of scanning the products, other than the RFID scanner, had not been introduced in malls. The methodology used here consists of a centralized system for recommendation and online transactions. The devices used in this prototype are a laptop with a webcam and a load cell. The final product can be replicated using a Raspberry Pi. The objective of this model is to have an application that can scan the products and even register a new user. Keywords Android studio · Arduino board HX711 · Load cell · MySQL · LCD module · Raspberry Pi · XAMPP · Cloud

1 Introduction

Nowadays, everything is being modernized. So, in this direction, the shopping sector, like malls, supermarkets, and stores, also needs to develop for when the customer goes shopping to purchase any kind of product. In the conventional method, shopping sector workers need to calculate the amount to be paid by the customers and then

N. J. Avinash (B) · H. R. Patkar · P. Sreenidhi · S. Bhat · R. Pinto
Department of Electronics and Communication Engineering, Shri Madhwa Vadiraja Institute of Technology and Management Bantakal, Udupi, India 574115
e-mail: [email protected]

H. R. Moorthy
Department of Cloud Technologies & Data Science, iNurture Education Solutions, Srinivas University, Mangalore, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_41


generate the bill for the same, which is very time consuming and a tedious job. On the other hand, customers come to know the amount only at the end of billing, which may be an embarrassing situation if he/she has a shortage of money for the products purchased; customers also need to wait a long time in the queue to make the payment. This is a major constraint even for old-aged people, people with health issues, and people with other important work to carry out. The scenario became worse during the lockdown and the pandemic, which led to frustration among both customers and shopkeepers. Hence, this project helps the customer when they have a shortage of money while purchasing products. Here, the model calculates the amount of each product along with the count of products, so that he/she need not go back or return a product due to a shortage of money, which may happen due to miscalculation by the customer. Most of the time, the customer expects the list of products purchased prior to billing, which lets them know the product list, the price of the products in the list, and the total cost of the products present in the list. The project is about a shopping method with the help of an application named "SHOP KARO." The developed application lets the user or customer buy products. The customer must have a smartphone and install the application, and he/she should take a trolley or cart for shopping. The only task done by the user is to scan the products using the smartphone through the Android app and add the products to the cart. If new products are available, the user is notified. The designed application lets him/her check the list of products along with their cost. Once the user is done with the list, he/she can take the products present in the shopping trolley and place them on the weighing machine, or load cell. If the weight of the products and the weight recorded in the application are approximately the same, the customer is approved for billing (a sketch of this check is given below). In malls, each product carries a tag or barcode which has information about the respective product, such as weight, cost, expiry date, and manufacture date. While scanning any product, the user also gets information about the product, which helps the user to know the product and easily distinguish expired from fresh products. The user can verify the total list of products after adding products to the trolley and before billing. After checking the product list along with its cost, the customer may replace a product by deleting the required item from the list and scanning another product. On the admin side, the data server stores all the products purchased by the registered customers; a new product available in the market can be made known to the customer by giving a notification. Notifications for hand-in-hand products like toothbrush and toothpaste are also given. The shopping mall should provide 'n' weighing machines where the customer can bring the trolley with products for the weighing process. Weighing machines are kept close to the billing counter so that a customer approved for billing can pay either offline or online as per his/her convenience. Once the billing is done, the customer can take the purchased products.
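A minimal sketch of the weight check mentioned above, assuming a 5% tolerance; the tolerance value and the function name are assumptions, since the text only requires the two weights to be approximately the same.

```python
TOLERANCE = 0.05   # assumed: allow a 5% mismatch between app and load-cell weight

def approve_for_billing(app_weight_g, measured_weight_g):
    """Approve checkout only when the load-cell reading matches the total
    weight implied by the items scanned in the SHOP KARO app."""
    if app_weight_g <= 0:
        return False
    return abs(measured_weight_g - app_weight_g) / app_weight_g <= TOLERANCE
```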


2 Literature Survey

The paper [1] is mainly about a smart cart comprising a microcontroller and a scanning system. When RFID tags are read by the RFID reader, the data is read by a microcontroller, compared with the product details in the main database, and displayed on the LCD screen. When an item is added to or removed from the cart without scanning, a buzzer beeps. LiFi/ZigBee is used to transfer the final bill to the main computer for payment and verification of the products. The paper [2] discusses the concept of RFID-based shopping and billing. Here, an RFID scanner is provided for scanning the RFID tag, after which the customer can add the scanned product to the cart. In paper [3], an NFC tag associated with each item contains all the information about the product. Customers can simply swipe the NFC tag to add products to the cart, and they can edit the cart anytime during shopping. At the time of billing, the customer scans using their smartphone and transfers the product details for billing; payment can also be made through the existing payment system. The paper [4] mainly contains a cart section and a billing section. A microcontroller is used along with an RFID reader; a wireless module (ESP8266) and an LCD are also used. The hardware framework and the mobile application are connected to the server. In paper [5], the carts are enabled with the help of an RFID reader, so the products have RFID tags attached. When a customer puts products in the cart after scanning the product tags, the code is detected by the RFID scanner, and the name, quantity, and price of those products are stored in the memory of the microcontroller; the same is displayed on the LCD screen. The authors utilized the Amazon cloud, and the data is pushed to the cloud using the Wi-Fi module ESP8266. Over Wi-Fi, the shopping mall's application is used; as soon as the customer logs in, the server sends the data to the cart. After finishing the purchase, payment can be made either online or by cash. In paper [6], a barcode reader is attached to the trolley or cart. When the customer drops an item into the cart, the barcode reader identifies the barcode number of the product. The barcode tag describes information stored in the database, which can be retrieved by a centralized system. All the activities are controlled together by a Bolt ESP8266. In paper [7], a Raspberry Pi, a barcode scanner, a touch screen display, and a button are fixed to the shopping cart. Every product has a barcode tag on it. The barcode scanner reads the product information before the item is added to the trolley, and the customers have to drop every product into the trolley after scanning. Whenever the customers scan products, the touch screen displays the cost of the product, the total number of items, and the total cost of the


products. A QR code is attached to the trolley; the customer scans the QR code and can then pay through online or offline mode. Sudipta Ranjan Subudhi et al. [8] have proposed a model for an intelligent shopping cart that can be integrated in a supermarket. This proposed model simplifies the shopping process as it is automated, i.e., it automatically detects the items added to the cart and shows the relevant information on the display. This prototype also validates the user using a unique identification number (UID) and has an authentication procedure involving fingerprint information, while allowing the customer to pay securely using the Unified Payments Interface (UPI) or a One-Time Password (OTP) method for transactions in the intelligent shopping cart. This way of payment avoids the time the customer would otherwise spend at the payment desk paying the bill, thus giving people a great shopping experience. Eu Jin Wong et al. [9] have presented a paper whose main objective is to exhibit a system that specifically allows blind people to shop online without anyone's help. The main goal was to find out whether a website can be designed with haptic, audio, speech recognition, and speech synthesis abilities, and whether these can be used for shopping and evaluating products. The conclusion was that a device with both haptic and audio abilities can be used by blind people to interact, navigate, and evaluate the online product haptically, thus verifying the cart content. The system also supports payment based on voice recognition, allowing blind people to make online payments. Chihhsiong Shih et al. [10] have shown how a smart shopping cart system can be implemented using a pattern-based software framework. The framework has two modules: pattern recognition and code generation. The present location of the shopping cart is monitored constantly using a WSN, and the designed system finds the shortest path from the present location of the cart to the location of the product targeted on the LCD screen. The paper successfully demonstrates the described process. Tomayess Issa [11] has examined the drawbacks of the online shopping system, including the lack of direct face-to-face human interaction and the lack of physical contact. The user interface should support human–computer interaction, and the website developer needs to consider important factors such as navigation, interaction, and feedback; including these changes will improve customer satisfaction, resulting in more customers revisiting the Web business. Jumin Zhao et al. [12] present a study showing the drawbacks of physical stores: physical stores cannot analyze the data or do an in-depth analysis of customer feedback, such as which product is consumed or purchased on a large scale, which product has more demand, or how many customers visit the shop daily. The paper shows that, using RFID tags, data mining can be done easily on various aspects. By examining the received signal strength indicator (RSSI) information, the velocity of items is calculated; then machine learning techniques and a hierarchical agglomerative clustering method are used for in-depth analysis of the velocity data.


Oussama Tounekti et al. [13] address payment through mobile electronic payment systems (MEPS), which are web-based and support multiple clients: any number of customers can participate in the payment process without interrupting the normal process, and server errors can also be avoided. The customer follows a hierarchical structure to pay the amount; TAP eliminates errors and applies offers if any, and if the payment page is interrupted, it is refreshed as soon as the customer logs back into the account. TAP captures the general behavior of the payment mode across different network security settings, the cloud is concerned mainly with the security of the customer, and all information is held by TAM. Novita A. Napitupulu et al. [14], with reference to online shopping, describe the history of past transactions through a portal and applications of online shopping based on the behavior of the customer. The application should show the details of the product and weight individual importance based on six hypotheses; the authors also discuss psychological influences on purchasing. Such observations can also be implemented in the project model. Dr. Suryaprasad et al. [15] propose NLISC, which saves the customer's time while shopping by providing more information based on the location within the mall. The model uses LDC and SCC, together with UIDC and the ABIMC billing system, to develop location-based shopping. The drawback of the paper is the network issues that generally arise: in NLISC, RFID usage gives better results, but heavy use of RFID causes many problems in the system. With reference to the above survey, the main limitation is network issues; since a secured cloud is used, there is no fear of misuse or breach of information. Hence, a few of the above observations are considered in building a prototype for easy shopping.

3 Methodology The load cell operates like a resistance strain gauge: when a load or weight is applied to the load cell, its resistance changes, which in turn changes the output voltage. This voltage is sent to the HX711, an analog-to-digital converter (ADC), where it is converted to a digital value and sent to the Raspberry Pi. The Raspberry Pi takes information or the password from the keypad and displays messages on the LCD screen; it also connects the server and the application. Figure 1 shows the connections of the entire system, and Fig. 2 represents the flowchart of the working procedure of the proposed system. The customer should install the application named "SHOP KARO" and register; a registered user can log in at any time. Once the registration/login process is done, the user can scan a product, set the required quantity, and add it to the cart. If the product is expired, a pop-up window appears saying that it is expired. The user can make changes in the cart, i.e., add or delete products and set their quantities. After the user confirms the products listed,

Fig. 1 Hardware connections

Fig. 2 User end working procedure

Table 1 Hardware requirements

Sl. No | Component | Purpose | Specification
1 | Load cell | Weighing products | 2 kg (used for prototype)
2 | HX711 | To convert analog weight values to digital values | 24-bit analog-to-digital converter; on-chip active low-noise PGA with gains of 32, 64, and 128
3 | Raspberry Pi | Interfacing between Android application and weighing machine | Broadcom BCM2837, 4 × ARM Cortex-A53 1.2 GHz CPU; Broadcom VideoCore IV GPU; 1 GB LPDDR2 (900 MHz) RAM; 10/100 Ethernet; 2.4 GHz 802.11n wireless networking; microSD; 40-pin header with populated GPIO; HDMI; 3.5 mm analogue audio-video jack; 4 × USB 2.0; Camera Serial Interface (CSI) and Display Serial Interface (DSI) ports
4 | Smartphone | Scanning and billing | With SHOP KARO app installed
5 | Keypad | To enter information/password | 24 VDC, 30 mA; 8-pin access to 4 × 4 matrix
6 | LCD | To display messages | 16 × 2 module; each character built from a 5 × 8 pixel box; works in both 8-bit and 4-bit modes

he/she can weigh the products by placing the trolley on the weighing machine (load cell). If the total of the weights of the scanned products equals the actual weight measured by the load cell, the user is allowed to make the payment; otherwise, the process is repeated until the weights match. Table 1 lists the hardware components required for the proposed method, along with their purpose and specification.
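To make the verification step concrete, the following minimal Python sketch mirrors the weight-check logic described above. The helper name read_load_cell_grams and the tolerance value are illustrative assumptions, not part of the authors' implementation.

```python
# Minimal sketch of the cart-weight verification step (hypothetical helper names).
def read_load_cell_grams():
    """Placeholder for a driver that returns the HX711/load-cell reading in grams."""
    raise NotImplementedError  # replace with an actual HX711 driver call

def verify_cart(scanned_weights_g, tolerance_g=20.0):
    """Allow payment only when the measured weight matches the scanned items."""
    expected_g = sum(scanned_weights_g)
    measured_g = read_load_cell_grams()
    return abs(expected_g - measured_g) <= tolerance_g

# Example: three scanned items of 500 g, 250 g, and 1000 g
# payment_allowed = verify_cart([500.0, 250.0, 1000.0])
```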

4 Results The whole system runs slowly because many delays have been included in the programs. Figures 3, 4, and 5 show the pages of the application. The password must be typed slowly, as the Raspberry Pi is slow to register the key presses. In Fig. 4, the middle page shows the mismatch between the actual weight of the products and the weight calculated by the application from the scanned information. The first and last pages of Fig. 4 show the list of scanned items and the bill of the purchased products, respectively.


Fig. 3 First page, registration page, and login page of the application respectively

Fig. 4 Application pages during the billing process

Figure 5 shows the page before scanning, the page during scanning of the barcode, and the page that lets the user set the quantity; the third page in Fig. 5 also flags expired products. Figure 6 shows the whole hardware setup of the system, wired as shown in Fig. 1. The proposed model shown in Fig. 6 ensures the privacy and security of data from the customer's point of view: every customer has a unique, password-protected ID, so the customer need not face any security issues.


Fig. 5 Scanning process of the application

Fig. 6 Hardware setup

The history of the customers and the products purchased by them can be viewed in the data server, as shown in Fig. 7, which includes details such as item ID, item weight, item name, quantity, price, and customer name.

5 Conclusion The whole system can be made more valuable by adding deep learning and machine learning; face recognition and detection can also be added to the application. Products can be recommended based on previous purchase history. In addition to in-store use, the application could be made available for online shopping too. The delays in the program should be reduced for faster and better performance.


Fig. 7 Data server showing the history of the customers

The problems faced by previous models are eliminated here. The long queues for the billing process are comparatively reduced, and the cost of the project is also reduced. The trolley is lighter because this project keeps the load cell (weighing machine) outside the trolley. A tab screen is not used or attached to the trolley; instead, the smartphone is used for the scanning and billing process.

References
1. Pangasa, H., Aggarwal, S.: An analysis of Li-Fi based prevalent automated billing systems in shopping malls. In: Proceedings of the Third International Conference on Computing Methodologies and Communication
2. Ali, Z., Sonkusare, R.: RFID based smart shopping. In: 2014 International Conference on Advances in Communication and Computing Technologies
3. Dave, J., Gondaliya, S., Patel, B., Mascarenhas, A., Varghese, M.: M-commerce shopping using NFC. In: IEEE 3rd International Conference on Sensing, Signal Processing and Security
4. Chadha, R., Kakkar, S., Aggarwal, G.: Automated shopping and billing system using radio-frequency identification. In: 9th International Conference on Cloud Computing, Data Science & Engineering
5. Sutagundar, A., Ettinamani, M., Attar, A.: IoT based smart shopping mall. In: Second International Conference on Green Computing and Internet of Things
6. Lekhaa, T.R., Rajeshwari, S., Aiswarya Sequeira, J., Akshayaa, S.: Intelligent shopping cart using Bolt ESP8266 based on Internet of Things. In: 2019 5th International Conference on Advanced Computing & Communication Systems
7. Viswanadha, V., Pavan Kumar, P., Chiranjeevi Reddy: Smart shopping cart. In: 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (2018)


8. An Intelligent Shopping Cart with Automatic Product Detection and Secure Payment System. IEEE (2019)
9. An Empirical Study of Haptic-Audio Based Online Shopping System for the Blind. In: AsiaHaptics 2016, Haptic Interaction, LNEE, vol. 432, pp. 419–424. Springer
10. An Automatic Smart Shopping Cart Deployment Framework based on Pattern Design. In: IEEE 15th International Symposium on Consumer Electronics (2011)
11. Online Shopping and Human Factors: E-commerce Platform Acceptance, pp. 131–150. Springer
12. Zhao, J., et al.: Mining shopping data with passive tags via velocity analysis. EURASIP J. Wireless Commun. Netw. 2018:28 (2018). https://doi.org/10.1186/s13638-018-1033-5
13. Users Supporting Multiple (Mobile) Electronic Payment Systems in Online Purchases: An Empirical Study of Their Payment Transaction Preferences (2019)
14. The Influence of Online Shopping Applications, Strategic Promotions, and Hedonist Habits on e-Shopaholic Behavior. In: 2020 International Conference on Information Management and Technology (ICIMTech)
15. A Novel Low-Cost Intelligent Shopping Cart. In: 2011 IEEE 2nd International Conference on Networked Embedded Systems for Enterprise Applications

NOMA-Based LPWA Networks Gunjan Gupta and Robert Van Zyl

Abstract Non-orthogonal multiple access (NOMA) is used in the uplink of an LPWAN to provide services to users or nodes requiring different sets of services. LoRa nodes usually suffer from inter-cluster and intra-cluster interference. In this article, both are minimized by assigning the spreading factor (SF) according to the nodes' channel coefficients. The assignment of different SFs and the use of NOMA allow successive interference cancellation (SIC), which itself reduces interference while decoding the signals. In the numerical results, the number of LoRa nodes is varied to calculate the number of packets transmitted, the percentage of collisions, and the throughput, showing that the use of NOMA enhances the performance considerably. The proposed algorithm allocates channels to LoRa nodes based on their channel gains and distance, which reduces collisions and increases throughput. Keywords IoT · LPWAN · Uplink NOMA · LoRa · SIC · Throughput · Channel assignment · SF · Power domain allocation

1 Introduction Emerging applications in our day-to-day life, for example smart parking, home security sensors, home automation, smart irrigation systems, and smart industrialization, use IoT as the main technology [1]. These applications may cover distances from meters to kilometres and need energy-efficient networks; traditionally, multi-hop networks are used for this purpose. However, multi-hop networks are complex to manage and lack robustness. Therefore, the work in this article focuses on low-power wide-area networks (LPWAN) [2], specifically Long Range (LoRa). LPWANs are designed so that they can support a longer battery lifetime and cover a longer distance. The applications supported by LPWAN are mostly delay tolerant, which is accomplished


by lower data rates. LoRa uses the industrial, scientific, and medical (ISM) spectrum, such as the 868 MHz sub-GHz band in Europe [3]. Analogous technologies like narrowband IoT (NB-IoT) [4] and enhanced machine-type communication (eMTC) use GSM and LTE [5, 6], which are cellular based and use the licensed spectrum. LPWAN faces the challenges of coexistence and scalability: in their present form, LPWANs are susceptible to interference, and managing intra-network interference is a challenge. The work in [7] shows that when a higher number of LoRa nodes is present, collisions occur very often, and this worsens as the payload increases. In [2], interference is investigated between nodes that use the same SF and frequency; the results show an exponential drop in coverage probability with an increase in LoRa nodes. The experimental deployments in [8] show that 120 nodes can be supported per 3.8 ha. It has been shown that LoRa can support a massive number of nodes even in the presence of interference. Non-orthogonal multiple access (NOMA) is a promising technology that allows signals to overlap in frequency or time and uses the code domain or power domain to let multiple users share a resource block in time, frequency, bandwidth, etc., in cellular networks [9, 10]. The power-domain multiplexing of users adopted in NOMA uses superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receivers [10]. The work in [11] compares traditional orthogonal multiple access (OMA) [12, 13] with NOMA for massive connectivity and the enhancement of spectral efficiency in heterogeneous networks (HetNets) with Multiple-Input Multiple-Output (MIMO). The work in [14] proposes power control for uplink NOMA, which outperforms OMA with regard to sum rate. NOMA is used for both uplink and downlink in [15] to maximize sum throughput by employing power allocation and user clustering. The work in [16] explains how NOMA can be employed in cellular-based IoT.

2 System Model The work in this paper considers the uplink transmissions in an LPWAN. A single gateway is located at the center of a circle with coverage radius r, surrounded by N randomly distributed LoRa nodes. A LoRa node is denoted by $n_l$, $1 \le n_l \le N$, and its distance from the gateway is denoted by $d_l$. The LoRa nodes and the gateway are each equipped with a single antenna for communication. A LoRa node uses one of the resource blocks, i.e., channel, time, bandwidth, etc., for communication with the gateway, as assigned by the LPWAN; the work in this article uses resource blocks of channel and time. The LoRa nodes use variable spreading factors (SF), which yield different times on air even when using the same or different channels. A cluster is a group of nodes sharing the same resource blocks, and it gives rise to inter-cluster and intra-cluster interference because the LPWAN cannot maintain orthogonality between different SFs [17].


A cluster of LoRa nodes sharing the same resource blocks is denoted by c, with its channel and transmission time denoted by $Ch_c$ and $t_c$. The total numbers of clusters, channels, and transmission times are Cl, $n_c$, and $T_c$, and they are related to each other by

$\mathrm{Cl} = n_c \times T_c$  (1)

There are two ways in which interference arises in LPWAN, as explained earlier: interference among LoRa nodes of the same cluster on the same resource block, and interference between nodes on the same channel. NOMA adds power-domain separation between nodes sharing the same resource blocks. At the transmitter the signals are superimposed, and at the receiver they are separated using successive interference cancellation (SIC) together with Turbo and LDPC codes, which allows massive connectivity in the LPWAN. The channel model targets urban or suburban settings, where line-of-sight (LoS) communication between LoRa nodes and the gateway is highly unlikely; for this reason, a Rayleigh fading channel is used between the nodes and the gateway, and the log-distance propagation model is used as the path-loss model for the LPWAN. Consider the LPWAN using NOMA for the uplink transmission: a node $n_l$ experiences interference from nodes in the same cluster and from nodes using the same channel for transmission. $h_{n_l}$ denotes the channel coefficient of node $n_l$. SIC is employed at the gateway after arranging the interfering LoRa nodes in order of decreasing channel coefficient, i.e., $|h_1| > |h_2| > |h_3| > \cdots$. The channels are independent of each other, are subject to Rayleigh fading, and are modeled as $h_{n_l} \sim \mathcal{CN}(0, \lambda_{n_l})$. The gateway first decodes the signal of the LoRa node with the highest channel coefficient and subtracts it from the received signal before decoding the remaining nodes; SIC proceeds similarly for all nodes. When SIC is applied for LoRa node 1 with the highest channel coefficient and $|h_1| > |h_2| > \cdots > |h_N|$, the remaining nodes are interfering nodes (inter- or intra-cluster):

$\mathrm{SINR}_1 = \dfrac{P_1 |h_1|^2}{\sum_{j=2}^{N} P_j |h_j|^2 + \sigma^2}$  (2)

where $P_1$ and $P_2$ denote the power allocations of nodes 1 and 2 at the central gateway (BS), respectively, and $\sigma^2$ is the AWGN variance. The minimum transmission rate of the $n_l$-th node is

$R_{n_l} = B \log_2 (1 + \mathrm{SINR}_{n_l})$  (3)

where $\mathrm{SINR}_{n_l}$ is the minimum SINR of node $n_l$ after SIC, SINR is the signal-to-interference-plus-noise ratio, and B denotes the bandwidth. To improve the service of the users and the users' connectivity, the minimum transmission rate of the nodes obtained after applying SIC needs to be maximized. It can


be achieved by enhancing power allocation and transmission time under good channel conditions. The work in this article is on LoRa nodes, where the assigned SF directly influences the transmission time. The objective is to maximize the minimum transmission rate, i.e.,

$\max \min_{n_l} \; B \log_2 (1 + \mathrm{SINR}_{n_l})$  (4)

subject to $P_{\min} \le P_{n_l} \le P_{\max}$, $\forall n_l$, and $P_{n_l} |h_{n_l}|^2 \ge \mathrm{RSSI}$, where RSSI is the receiver sensitivity threshold defined at a particular SF.
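As an illustration of Eqs. (2)-(4), the following Python/NumPy sketch computes the post-SIC SINR and rate of each node under the decreasing-channel-gain decoding order. The parameter values in the example are illustrative assumptions, not the paper's exact simulation settings.

```python
import numpy as np

def sic_rates(powers, channels, sigma2, bandwidth):
    """Per-node rates after SIC at the gateway, following Eqs. (2)-(3)."""
    order = np.argsort(np.abs(channels))[::-1]      # decode the strongest channel first
    gains = (powers * np.abs(channels) ** 2)[order]
    rates = np.zeros_like(gains)
    for i in range(gains.size):
        residual = gains[i + 1:].sum()              # weaker, not-yet-decoded signals interfere
        rates[i] = bandwidth * np.log2(1.0 + gains[i] / (residual + sigma2))
    return rates[np.argsort(order)]                 # map back to the original node order

# Illustrative example: 4 nodes, Rayleigh fading, equal transmit power
rng = np.random.default_rng(0)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
print(sic_rates(np.full(4, 0.1), h, sigma2=1e-12, bandwidth=125e3))
```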

2.1 Algorithm

1. Initialize N, n_c, and the set of SFs.
2. Sort the LoRa nodes by channel coefficient |h_{n_l}| in descending order.
3. If N mod n_c = 0
4.   Assign the nodes to channels in descending order of channel coefficient:
5.   LoRa node 1 to channel 1, LoRa node 2 to channel 2, and so on,
6.   so that the nodes are divided equally among the available channels.
7. Else
8.   Find the remainder of N / n_c.
9.   Distribute the remaining nodes among the channels.
10. End if
11. Find n_SF, the number of users that can use a particular SF.
12. Assign the first n_SF6 LoRa nodes SF = 6.
13. Repeat it for SF = 7, ..., 12 and so on.

In this article, a sub-optimal channel assignment scheme with low complexity is proposed. In line 2, the LoRa nodes are arranged in descending order of their channel coefficients. In line 3, the number of LoRa nodes is divided by the number of channels; if the remainder is zero, the LoRa nodes can be assigned equally among the available channels. In lines 4, 5, and 6, LoRa nodes are assigned to channels in descending order of their channel coefficients: LoRa node 1 to channel 1, LoRa node 2 to channel 2, and so on. If the division of the number of LoRa nodes by the number of channels has a remainder, the remaining LoRa nodes are distributed among the channels. In line 10, the number of users that can use a particular SF is calculated. In line 12, the first $n_{SF6}$ users are assigned SF = 6, and in line 13 this is repeated for all other SFs; a sketch of this assignment in code is given below.
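The following Python sketch is one possible reading of the channel and SF assignment in lines 1-13. The equal-split SF rule is an assumption where the paper leaves n_SF unspecified.

```python
import numpy as np

def assign_channels_and_sfs(h, n_channels, sfs=(6, 7, 8, 9, 10, 11, 12)):
    """Sub-optimal assignment: sort by |h|, round-robin over channels, block-wise SFs."""
    order = np.argsort(np.abs(h))[::-1]              # line 2: descending channel coefficient
    channel_of = {int(node): rank % n_channels       # lines 4-9: node 1 -> ch 1, node 2 -> ch 2, ...
                  for rank, node in enumerate(order)}
    per_sf = int(np.ceil(len(h) / len(sfs)))         # line 10: users per SF (equal split assumed)
    sf_of = {int(node): sfs[min(rank // per_sf, len(sfs) - 1)]   # lines 12-13
             for rank, node in enumerate(order)}
    return channel_of, sf_of
```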


After transmission, SIC is applied to the signals according to their channel gains: the signal with the highest channel gain is decoded first, followed by signals with lower channel gains. The advantage of doing so is that signals with higher channel gains attain an adequate rate despite stronger interference, while signals with lower channel gains benefit from the weak residual interference.

3 Numerical Results An LPWAN example based on LoRa is simulated in MATLAB. LoRa nodes are assumed to be distributed randomly around a gateway in a coverage area of radius 3 km. The number of channels is $n_c = 16$, and each channel operates at 868 MHz. The maximum and minimum transmit powers are 20 dBm and 0 dBm, respectively. The bandwidth is B = 125 kHz. The path-loss exponent is 3.5, and the AWGN noise variance is $\sigma^2 = -174 + 10 \log_{10} B + \mathrm{NF}$ (in dBm), where NF is the noise figure of 6 dB at room temperature. The spreading factors (SF) used range from 6 to 12. SFs usually suffer from imperfect orthogonality; this is also considered, as it increases the number of collisions. In Fig. 1, the number of packets transmitted is compared for a number of LoRa nodes varying from 100 to 400, with duty cycles of 0.1% and 0.25%. From the result, the packets transmitted are about 16,000 and 4000 for duty cycles of 0.25% and 0.1%, respectively. Packet transmission increases with an increase in

Fig. 1 Packets transmitted versus LoRa nodes for different duty cycle


Fig. 2 Percentage collisions with increasing LoRa nodes at a duty cycle of 0.1%

LoRa nodes; however, the collisions also increase, and because of this not all packets are received. The SIC employed for NOMA helps reduce collisions and interference, which allows a larger number of LoRa nodes. In Fig. 2, the number of collisions is compared for LoRa nodes varying from 100 to 600 at a duty cycle of 0.1%. Using a lower duty cycle increases the channel capacity, which is further enhanced by the power-domain NOMA employed for transmission; collisions remain below 70% even for 600 LoRa nodes. In Fig. 3, the throughput is compared when the LoRa nodes are varied from 100 to 600 at a duty cycle of 0.1%. The throughput indicates the number of packets received successfully; an increase in LoRa nodes decreases the throughput owing to the increase in collisions.
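For reference, the noise floor used in the simulation follows directly from the expression for $\sigma^2$ given above; this short snippet simply evaluates it for B = 125 kHz and NF = 6 dB.

```python
import math

B = 125e3   # bandwidth in Hz
NF = 6.0    # noise figure in dB
sigma2_dBm = -174 + 10 * math.log10(B) + NF
print(f"AWGN noise floor: {sigma2_dBm:.1f} dBm")   # about -117.0 dBm
```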

4 Conclusions In this article, NOMA is incorporated into the uplink of an LPWAN. The use of NOMA allows SIC in the power domain and performs interference cancellation while decoding; it also allows more LoRa nodes to be accommodated and increases the maximized minimum rate. This work does not use clustering for the cancellation of intra- and inter-cluster interference, as SIC is used to minimize it. In the future, the work can be extended to optimize the transmission time and to compare with other schemes proposed in the literature.


Fig. 3 Percentage throughput with increasing LoRa nodes at a duty cycle of 0.1%

References 1. Al-Fuqaha, A., Guizani M., Mohammadi, M., Aledhari, M., Ayyash, M.: Internet of Things: a survey on enabling technologies, protocols, and applications. IEEE Commun. Surv. Tutor. 17(4), 2347–2376 (2015). 2. Georgiou, O., Raza, U.: Low power wide area network analysis: can LoRa scale? IEEE Wirel. Commun. Lett. 6(2), 162–165 (2017) 3. Qin, Z., McCann, JA.: Resource efficiency in low-power wide-area networks for IoT applications. In: Proceedings of 2017 IEEE Global Communications Conference, GLOBECOM 2017, pp. 1–7, Singapore, 2017. 4. Wang, YPE., Lin., X, Adhikary, A,. Grövlen, A., Sui, Y., Blankenship, Y., et al.: A primer on 3GPP narrowband Internet of Things. IEEE Commun. Mag. 55(3), 117–123 (2017). 5. Adelantado, F., Vilajosana, X., Tuset-Peiro, P., Martinez, B., Melia-Segui, J., Watteyne, T.: Understanding the limits of LoRaWAN. IEEE Commun. Mag. 55(9), 34–40 (2017). 6. De Poorter, E., Hoebeke, J., Strobbe, M., Moerman, I., Latré, S., Weyn, M., et al.: Sub-GHz LPWAN network coexistence, management and virtualization: an overview and open research challenges. Wirel. Pers. Commun. 187–213 (2017). 7. Bankov, D., Khorov, E., Lyakhov, A.: On the limits of LoRaWAN channel access. In: Proceedings-2016 International Conference on Engineering and Telecommunication, EnT 2016, pp. 10–14, 2016. 8. Bor, M., Roedig, U., Voigt, T., Alonso, JM.: Do LoRa low-power wide-area networks scale? In: Proceedings of the 19th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems 2016, pp. 59–67, 2016. 9. Balyan, V.: Outage probability of cognitive radio network utilizing non orthogonal multiple access. In: 7th International Conference of Signal Processing and Integrated Networks 2020, pp. 751–755, Noida, India, 2020. 10. Balyan, V., Daniels, R.: Resource allocation for NOMA based networks using relays: cell centre and cell edge users. Int. J. Smart Sens. Intell. Syst. 13(1), 1–18 (2020) 11. Liu, Y., Qin, Z., Elkashlan, M., Ding, Z., Nallanathan, A., Hanzo, L.: Nonorthogonal multiple access for 5G and beyond. Proc. IEEE 105(12), 2347–2381 (2017) 12. Balyan, V., Saini, DS.: Integrating new calls and performance improvement in OVSF based CDMA networks. Int. J. Comput. Commun. 5(2), 35–42 (2011).


13. Balyan, V., Saini, D.S., Gupta, B.: Service time-based region division in OVSF-based wireless networks with adaptive LTE-M network for machine to machine communications. J. Electr. Comput. Eng. 2019, 3623712 (2019)
14. Zhang, N., Wang, J., Kang, G., Liu, Y.: Uplink nonorthogonal multiple access in 5G systems. IEEE Commun. Lett. 20(3), 458–461 (2016)
15. Ali, M.S., Tabassum, H., Hossain, E.: Dynamic user clustering and power allocation for uplink and downlink non-orthogonal multiple access (NOMA) systems. IEEE Access 4, 6325–6343 (2016)
16. Benkhelifa, F., Qin, Z., McCann, J.: Minimum throughput maximization in LoRa networks powered by ambient energy harvesting. In: 2019 IEEE International Conference on Communications (ICC 2019), pp. 1–7, Shanghai, China (2019)
17. Croce, D., Gucciardo, M., Mangione, S., Santaromita, G., Tinnirello, I.: Impact of LoRa imperfect orthogonality: analysis of link-level performance. IEEE Commun. Lett. 22(4), 796–799 (2018)

Copy Move Forgery Detection by Using Integration of SLIC and SIFT Kavita Rathi and Parvinder Singh

Abstract Image falsification has become a central issue in many applications. Common techniques are used to create fake digital images, namely copy-move and image splicing. Existing approaches can only roughly indicate suspicious counterfeit areas, especially when multiple forged regions are present. The most important contribution of the proposed work is the detection of multiple forged regions in an image. An effective method of detecting image falsification is proposed by integrating key-point-based counterfeit detection methods with the SLIC super-pixel segmentation algorithm. The algorithm adaptively generates super-pixels with the help of Simple Linear Iterative Clustering (SLIC), extracts feature points using the SIFT algorithm, and matches the key-point features to each other to locate duplicated feature points. SLIC image segmentation uses simple and global stopping criteria, which simplifies forged-region localization and the detection of copy-move image falsification. A dynamic threshold is used for the identification of suspected regions. Experimental results obtained on the MICC-F220 dataset show notable precision, recall, and F1-measure in comparison to existing methods. SIFT is excellent for scale invariance, and the use of SLIC overcomes SIFT's drawback of a high false-positive rate. Keywords Segmentation · Dynamic threshold · Image forgery · Clustering · Dataset

1 Introduction The most commonly used key-point methods are SIFT and SURF [1–4]. These methods have mostly been used for recognition and image identification because of their robustness against geometric transformations such as scaling and rotation [5–7].


Key-point methods take less time and are robust against geometric transformations [8, 9]. However, they have poor localization power and do not give positive results for multiple forged regions [10–12]. To solve these problems, this paper proposes an image forensic method based on SLIC [13, 14] and Affine-SIFT [15] to handle copy-move forgery (CMF) with rotation, scaling, and affine transformation; the proposed approach can efficiently localize the forged regions. Overall, 532 research papers on digital image forgery detection have been published, of which 58 are based on the SIFT method [16]. SIFT is excellent for scale invariance, and the use of SLIC overcomes SIFT's drawback of a high false-positive rate. SIFT extracts key points and therefore takes less time in detection, but this may lead to false forgery detections, so SLIC helps by providing the best key points.

2 Literature Review In [13], an image is first converted into grayscale, and a pre-processing step (adding Gaussian blur) is applied. Features are then extracted via the SIFT algorithm; finally, a clustering method is applied to detect the copied area, and the RANSAC method is used to expose the geometric transformation. The paper "A study of copy move forgery detection scheme based on segmentation" used SLIC (Simple Linear Iterative Clustering) to divide the image into patches and then applied SIFT to detect the features and descriptors of the image; key points are matched on the basis of a threshold value to identify the forged area. This method is efficient for scaling and rotation only [16–18]. Other authors analyzed an algorithm that extracts a large number of key points on the basis of a segmentation scheme: the host image is first segmented into independent patches prior to key-point extraction, and key points are matched on the basis of these patches. The matching process detects suspicious pairs, and in a second stage an EM algorithm is used to refine the estimated matrix. The disadvantage of this work is that it cannot detect multiple forged regions and its computational time is very high [14, 19]. To detect copy-move forgery efficiently, both key-point-based and block-based methods have been analyzed: SLIC is used to divide the image into blocks, and then key points are matched with the SIFT key-point-based method [9, 20]. Another work presents key-point-based image forgery detection based on the Helmert transformation and the SLIC algorithm: key points are obtained using the SIFT algorithm, followed by clustering and group combining; a cluster yields two match groups, i.e., source and destination. The Helmert transformation expresses the coordinate relationship without distortion, and the key points are localized by the ZNCC method [21, 22]. Other authors present image forgery detection using an improved SLIC: the input image is divided into blocks, feature extraction is applied to each block with the SIFT (scale-invariant feature transform) algorithm, and after feature extraction an improved version of SLIC, i.e., primitive SLIC, is applied to


detect the forged region; the main contribution of that work is robustness against splicing attacks [23–25]. In another approach, the input image is first divided into blocks using the DWT (discrete wavelet transform), and clustering is then performed via the SLIC algorithm; key points are obtained using the SIFT algorithm and used to find best-matching pairs, and with LFP (labeled feature pairs) the suspected region of the image is identified [26–28]. In another algorithm, Affine-SIFT is used to detect multiple geometric transformations that are not detected by the existing SIFT algorithm; ORSA (optimized random sampling algorithm) is used to remove outliers, followed by SLIC (Simple Linear Iterative Clustering) to identify the forged area in the image [9]. SIFT has also been used with a binary-search method to locate the disturbed region; there, key points are combined in terms of scale and color, which reduces the time complexity, and a novel local algorithm was developed to obtain an accurate outline of the tampered region [20, 22]. A comprehensive review of existing techniques, with their strengths and weaknesses, is also available and provides a basic perspective for researchers working in the field [29, 30]. In a further approach, the input image is first separated into blocks with the SLIC algorithm, the SIFT method is then used to extract interest points, and finally the key points are paired to identify the forged area using the Euclidean distance [31–33] (Fig. 1).

Fig. 1 Literature survey on SIFT method


to extract key-points from block. Experiment result shows that it can detect plain copy move forgery very efficiently [22, 35]. Here, firstly input image is converted into HSV (Hue, Saturation, and Value). Then key-points are detected with the help of SURF algorithm and are matched via Nearest Neighbour Distance ratio. After matching process, these key-points are grouped by using clustering method to decrease false match. Finally, the input image is examined at various scales with a Gaussian pyramidal decomposition. For improving accuracy of detection voting method is combined with multi-scale analysis. Experiment result shows that the proposed method work good for rotation, resizing, and combination of these [20, 36]. Authors represent a key-point based image forgery detection technique using SIFT algorithm and these key-points are used for best-matching pairs by using clustering. Each cluster yields two match groups, i.e., source and destination. Moreover, key points are localized by ZNCC method. Experiment result shows that it is robust against rotation, scaling, and combination of both [21, 37]. For detecting copy-move forgery efficiently, authors analyzed both key-point based method and block-based method. In block-based technique, Adaptive over-segmentation algorithm (SLIC) is used to divide the host image into non-overlapping blocks. In key-point based method, the feature points are extracted by SIFT from each block instead of extracted the features from whole image. Then key-points are matched with nearest Neighbour algorithm. Result shows that this technique is robust against plain CMF attacks [14, 38, 39]. In this paper, features are extracted and matched with the help of SIFT algorithm and then Agglomerative hierarchical clustering is applied to identify possible clone areas. Result shows that it is robust against multiple cloning and has less false matches [9]. This study analyzed an algorithm that extract large number of key-points based on segmentation scheme. In this scheme firstly the host image is segmented into independent patches prior to key-point extraction. These patches are used to match key-points. The matching process consists of two stages. In the first stage, suspicious pair of patches are detected that may contain copy-move forgery region. At second stage, an EM- algorithm is used to refine the estimated matrix. Limitation of this work is that it cannot detect multiple forged regions and computational time is very high [24, 40].

3 Proposed System This work proposes a framework to improve forgery-localization performance by incorporating manipulation-probability maps. The framework first selects and refines two forensic methods, namely a detector based on statistical attributes and a copy-move forgery detector, and then adjusts their results to obtain manipulation-probability maps. After examining the nature of the probability maps and comparing different fusion schemes, a simple and effective strategy is adopted for integrating the forgery-probability maps to obtain the final localization result. The feature-based and block-based matching processes use the in-image counterfeit

Fig. 2 Workflow diagram (MICC-F220 dataset → resize input image and pre-processing → SLIC super-pixel segmentation → SIFT feature extraction → feature matching → dynamic/adaptive thresholding → mark tampered region → performance evaluation: precision, recall, F1-measure)

detection process. The image block size is determined by the DWT transform, and using SLIC the image is over-segmented (Fig. 2). An image is a rectangular array of pixels, and each pixel signifies a measure of certain attributes of the scene. There may be many such attributes, but we usually measure the average brightness, typically represented by eight-bit integers that provide 256 brightness levels. Resolution quantifies how close lines can be while remaining clearly distinguishable; resolution units may relate to physical dimensions (e.g., lines per mm, lines per inch).

3.1 Pre-processing Image Resize: In computer graphics and digital imaging, image scaling refers to adjusting the size of digital images. The image size needs to be adjusted whenever the total number of pixels increases or decreases, and re-sampling may also occur when lens distortion is corrected or the image is rotated. Scaling up increases the number of pixels so that more detail can be recorded easily.
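As a concrete illustration of this resizing step, the snippet below uses OpenCV; the file name, target size, and interpolation choices are illustrative assumptions.

```python
import cv2

img = cv2.imread("input.jpg")                                        # hypothetical file name
down = cv2.resize(img, (800, 600), interpolation=cv2.INTER_AREA)     # shrink: area interpolation
up = cv2.resize(img, None, fx=2.0, fy=2.0,
                interpolation=cv2.INTER_CUBIC)                       # enlarge: cubic interpolation
```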

3.2 SLIC Segmentation Using the SLIC algorithm, the image is over-segmented. The SLIC algorithm divides the image based on a dimension reduction determined by the DWT transform; dimension reduction eliminates noisy data and also improves the speed of the matching process


of SIFT. SLIC applies the same compactness setting (user selected) to all super-pixels in the image. If the image is smooth in some areas and highly textured in others, SLIC generates smooth, evenly sized super-pixels in the smooth areas and very jagged super-pixels in the textured areas [18, 25, 28]. SLIC segmentation includes the following steps [17, 18]. First, the RGB image is converted into the CIELAB color space:

$L^* a^* b^* = \mathrm{rgb2lab}(i_m, wp)$  (1)

where $i_m$ is the input image and $wp$ is an optional string specifying the adapted white point (the default is used here).

Distance Measure. The proposed algorithm takes the desired number of roughly equally sized super-pixels, K. For an image of N pixels, the size of each super-pixel is N/K, and a cluster centre is placed at the centre of every grid interval of size

$S = \sqrt{N / K}$  (2)

The algorithm chooses K super-pixel cluster centres $C_k = [l_k, a_k, b_k, x_k, y_k]^T$ and assumes that the pixels associated with a cluster centre lie within a 2S × 2S area around the super-pixel centre in the xy-plane. For clustering, the Euclidean distance is meaningful only over small neighbourhoods; if the spatial term exceeds this limit, it outweighs the colour term. The process therefore measures a distance $D_s$ in the 5D space instead of the simple Euclidean norm, using a weighting factor m that represents the nominal maximum colour distance, with m in the range [1, 20]; the higher m is, the more compact the cluster. We choose m = 10 for cluster compactness. The last step combines the edge distance with a weight, defined as

$D \leftarrow D + w_e \cdot d_e$  (3)

where D is the total distance, $w_e$ is the pixel weight, and $d_e$ is the largest value of the edge distance.
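A minimal sketch of this segmentation stage using scikit-image's SLIC implementation, which likewise works in the CIELAB space, is given below. The file name and the number of super-pixels are illustrative; compactness = 10 mirrors the weighting factor m chosen above.

```python
from skimage import io
from skimage.color import rgb2lab
from skimage.segmentation import slic, mark_boundaries

image = io.imread("input.jpg")            # hypothetical file name
lab = rgb2lab(image)                      # Eq. (1): RGB -> CIELAB (slic also does this internally)
segments = slic(image, n_segments=100,    # desired number of super-pixels K (illustrative)
                compactness=10)           # weighting factor m = 10, as chosen above
overlay = mark_boundaries(image, segments)  # visualize the super-pixel boundaries
```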

3.3 Feature Extraction and Description The block feature-extraction process computes the similarity of the features extracted from the block regions based on the scale-invariant feature transform (SIFT) procedure, which recognizes key points in the image. The Laplacian computes the edges of the image from its derivatives, and the values are arranged into a matrix. First, a value in a specific circled box is selected, and the selected area is expanded; during the expansion, the values


are compared with each other and the lower values are merged; the value with the smaller magnitude is then deleted [13]. The resultant point is recorded as an HRL point [20, 22, 31]. A description of the extraction method is given below.

SIFT (Scale-Invariant Feature Transform). It includes scale-space extrema detection: for extrema detection (if a point is a maximum or minimum, its location and scale are recorded), points of interest (key points) are identified. The image is first evaluated, and the convolved images are grouped by octave; a value of $k_i = 1.5$ is used to obtain a fixed number of convolved images per octave [13, 14]. The second step is key-point localization: to identify the location and scale of each candidate key point, a Taylor series expansion is used [19]. The third step is eliminating edge responses: the principal curvature is computed from the eigenvalues of the second-order Hessian matrix H:

$H = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{pmatrix}$

$\mathrm{Tr}(H) = D_{xx} + D_{yy} = \lambda_1 + \lambda_2$  (4)

$r = \lambda_1 / \lambda_2$  (5)

Here r is the ratio of the eigenvalues; the threshold value lies between about 0.5 and 0.8 for the proposed algorithm. Orientation assignment sets the orientation of each key point: a gradient orientation histogram with 36 bins over 360° is formed, and each point in the neighbouring window contributes to one of the histogram bins according to its gradient magnitude. The key-point descriptor is calculated for each key point over a 16 × 16 region around its neighbourhood, and the best ten matches are chosen. Key-point matching is done by identifying the nearest neighbours of two objects in the image; if the distance ratio is greater than 0.8, the match is rejected.
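The following OpenCV sketch shows the same SIFT extract-and-match pipeline in code. Because copy-move matching is performed within a single image, descriptors are matched against themselves and the nearest non-self neighbours are kept under the 0.8 ratio rule quoted above; the file name and the minimum-separation guard are added assumptions for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
sift = cv2.SIFT_create()
kp, des = sift.detectAndCompute(img, None)

# Match descriptors against themselves; k=3 so the zero-distance
# self-match (m[0]) can be skipped.
bf = cv2.BFMatcher()
pairs = []
for m in bf.knnMatch(des, des, k=3):
    if len(m) < 3:
        continue
    first, second = m[1], m[2]                           # nearest non-self neighbours
    if first.distance / (second.distance + 1e-9) <= 0.8:  # ratio > 0.8 is rejected
        p1 = np.array(kp[first.queryIdx].pt)
        p2 = np.array(kp[first.trainIdx].pt)
        if np.linalg.norm(p1 - p2) > 10:                 # assumed guard against adjacent matches
            pairs.append((p1, p2))
```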

3.4 Thresholding In the proposed algorithm, the distances between key points are used to evaluate a dynamic threshold; if the threshold value is greater than or equal to one, the region is declared forged. The steps for dynamic thresholding are given below:
1. Find the first local maximum value (determined from the minimum distance between pixels) and use it as the threshold value.
2. Calculate the second local maximum value and compare it with the first; if the first local maximum is approximately equal to the second, the second is used as the threshold value.


3. The last step calculates the mean values of the pixels and compares them with the threshold value; if T ≥ 1, the region is declared forged.

Algorithm
Input image: the input image contains two similar objects in a single image.
Pre-processing: the collected input images are resized. Then SLIC (Simple Linear Iterative Clustering) is applied to the image to generate clusters. The steps of the SLIC algorithm are given below:
i. Initialize the cluster centres $C_k = [l_k, a_k, b_k, x_k, y_k]^T$ by sampling pixels at regular grid steps S.
ii. Perturb each cluster centre within an n × n block to the lowest-gradient position.
iii. Repeat:
iv. for each cluster centre $C_k$, assign the best-matching pixels from a 2S × 2S square neighbourhood,
v. where membership is measured by the distance D;
vi. recalculate the cluster centres and the residual error E (the L1 distance between the previous and recalculated centres);
vii. until E ≤ threshold;
viii. strengthen (enforce) connectivity.
ix. Here the error E is not evaluated, because a fixed number of iterations, i.e., 100, is used.
4. After clustering into super-pixels, SIFT (Scale-Invariant Feature Transform) is used for extracting and matching features.
5. Dynamic thresholding is then used to detect forged regions; the threshold value varies from 1 to 200.
6. The performance is determined by evaluating precision, recall, and F1-measure.
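The three-step dynamic thresholding above is described only loosely, so the sketch below is one hedged interpretation: it histograms the key-point match distances, takes the first local maximum as the threshold, falls back to the second maximum when the two are comparable, and applies the T ≥ 1 rule. The bin count follows the 1-to-200 threshold range mentioned above; everything else is an assumption.

```python
import numpy as np

def dynamic_threshold(match_distances, bins=200):
    """One possible reading of the three-step procedure in Sect. 3.4."""
    hist, edges = np.histogram(match_distances, bins=bins)
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    if not peaks:
        return float(np.mean(match_distances))
    t = edges[peaks[0]]                                       # step 1: first local maximum
    if len(peaks) > 1 and np.isclose(hist[peaks[0]], hist[peaks[1]], rtol=0.1):
        t = edges[peaks[1]]                                   # step 2: comparable second maximum
    return max(float(t), 1.0)                                 # step 3: T >= 1 declares forgery
```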


4 Simulation Result and Analysis The experiment is implemented using MATLAB 2017b on a computer with an i3 processor and 8 GB of RAM.

4.1 Dataset The MICC-F220 dataset [9] is used to evaluate system performance. It contains images from the authors' own collections and photographs: 220 images in total, 110 tampered and 110 original, with sizes from 722 × 480 to 800 × 600 pixels. On average, the size of the forged area is 1.2% of the total image. The fake images were


created by selecting a rectangular or square area of the image and then pasting it back into the image after applying various attacks; these attacks are rotations and scalings, and their combinations have been used to create the fake images.

4.2 Evaluation Metrics Precision, recall, and F1-measure are used as evaluation metrics [9]:

$\mathrm{Precision} = \mathrm{TP} / (\mathrm{TP} + \mathrm{FP})$  (6)

$\mathrm{Recall} = \mathrm{TP} / (\mathrm{TP} + \mathrm{FN})$  (7)

$\mathrm{F1\text{-}measure} = 2 \times (\mathrm{Precision} \times \mathrm{Recall}) / (\mathrm{Precision} + \mathrm{Recall})$
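The metrics in Eqs. (6)-(7) translate directly into code; TP, FP, and FN are the true-positive, false-positive, and false-negative counts, and the numbers in the example are purely illustrative.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1-measure from detection counts (Eqs. (6)-(7))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative example: 108 correct detections, 1 false alarm, 2 misses
print(precision_recall_f1(tp=108, fp=1, fn=2))
```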

4.3 Results A comparison of the proposed algorithm with existing algorithms on the basis of performance is presented (Figs. 3, 4 and Tables 1, 2). Execution results for some images from the dataset, including the execution time taken for each step, are shown in Fig. 5.

Fig. 3 Comparison of performance results of existing algorithms with proposed algorithm


Fig. 4 Comparison of performance results of existing algorithms with proposed algorithm

Table 1 Detection results of plain copy move forgery

Method | Precision | Recall | F-1 measure
Proposed algorithm | 98.84 | 100 | 99.41
Aya Hegazi et al. [24] | 97.5 | 100 | 97.56
Pun et al. [23] | 97.96 | 71.56 | 82.7
Silva et al. [9] | 97.96 | 63.72 | 77.11
Hui-yu et al. [22] | 97.1 | 87.08 | 91.82
Pun et al. [20] | 95.3 | 87 | 90.9
Amerini et al. [21] | 93.55 | 76.8 | 84.35
Li et al. [14] | 56.89 | 63.33 | 59.99

Table 2 Results of multiple forgery detection

Method | Precision | Recall | F-1 measure
Proposed algorithm | 98.84 | 100 | 99.41
Aya Hegazi et al. [24] | 97.5 | 100 | 97.56
Pun et al. [23] | – | – | –
Silva et al. [9] | – | – | –
Hui-yu et al. [22] | – | – | –
Pun et al. [20] | – | – | –
Amerini et al. [21] | – | – | –
Li et al. [14] | – | – | –

(– indicates that no result is reported for multiple forgery detection.)

5 Conclusion An effective method of detecting image falsification is proposed by integrating key-point-based counterfeit detection methods with the SLIC super-pixel segmentation algorithm. The algorithm adaptively divides the host image into non-overlapping and irregular super-pixels with the help of Simple Linear Iterative Clustering (SLIC),


Fig. 5 Execution results of some images from dataset are shown including execution time taken for each step

extracts feature points using the SIFT algorithm, and matches the key-point features to each other to locate duplicated feature points. SLIC image segmentation uses simple and global stopping criteria, which simplifies forged-region localization and the detection of copy-move image falsification. A dynamic threshold is used for the identification of suspected regions. Experimental results obtained on the MICC-F220 dataset demonstrate that the proposed method obtains impressive precision, recall, and F1-measure in comparison to existing methods. The algorithm can identify forgeries when the forger introduces changes such as Gaussian noise addition, scaling,


and JPEG compression, but cannot recognize forgeries when the forger blurs the forged area to make it less apparent. The execution time taken at each step of the proposed scheme is presented, and the values of TP, TN, FP, and FN are provided so that different metrics can be calculated by researchers in the future for comparison, as all the evaluation-metric formulas are based on these four values only. No earlier article has provided all this information, which has made comparison difficult for researchers. The proposed algorithm can be modified further to improve detection accuracy, precision, and recall rate; for this, parameter optimization needs to be handled in a better way.

References 1. Al-Qershi, O.M., Khoo, B.E.: Passive detection of copy-move forgery in digital images: stateof-the-art. Forensic Sci. Int. 231(1–3), 284–295 (2013) 2. Ryu, S.-J., Lee, M.-J., Lee. H.-K.: Detection of copy-rotate-move forgery using Zernike moments. In: International Workshop on Information Hiding. Springer, Berlin, Heidelberg, 2010. 3. Ryu, S.-J., et al.: Rotation invariant localization of duplicated image regions based on Zernike moments. IEEE Trans. Inf. Forensics Secur. 8(8), 1355–1370 (2013) 4. Bravo-Solorio, S., Nandi, A.K.: Automated detection and localisation of duplicated regions affected by reflection, rotation and scaling in image forensics. Signal Process. 91(8), 1759–1770 (2011) 5. Davarzani, R., et al.: Copy-move forgery detection using multiresolution local binary patterns. Forensic Sci. Int. 231(1–3), 61–72 (2013) 6. Mishra, P., et al.: Region duplication forgery detection technique based on SURF and HAC. Sci. World J. 2013 (2013). 7. Shahroudnejad, Atefeh, and Mohammad Rahmati. “Copy-move forgery detection in digital images using affine-SIFT.” 2016 2nd International Conference of Signal Processing and Intelligent Systems (ICSPIS). IEEE, 2016. 8. Amerini, I., et al.: A sift-based forensic method for copy–move attack detection and transformation recovery. IEEE Trans. Inf. Forensics Secur. 6(3), 1099–1110 (2011) 9. Amerini, I., et al.: Copy-move forgery detection and localization by means of robust clustering with J-Linkage. Signal Process.: Image Commun. 28(6) 659–669 (2013) 10. Lee, J.-C., Chang, C.-P., Chen, W.-K.: Detection of copy–move image forgery using histogram of orientated gradients. Inf. Sci. 321, 250–262 (2015) 11. Pan, X., Lyu, S.: Region duplication detection using image feature matching. IEEE Trans. Inf. Forensics Secur. 5(4), 857–867 (2010) 12. Shivakumar, B.L., Santhosh Baboo, S.: Detection of region duplication forgery in digital images using SURF. Int. J. Comput. Sci. Issues (IJCSI) 8(4), 199 (2011) 13. Mohammed, I., Hariadi, M., Pumama, K.E.: A study of copy-move forgery detection scheme based on segmentation. IJCSNS 18(7), 27 (2018) 14. Pun, C.-M., Yuan, X.-C., Bi, X.-L.: Image forgery detection using adaptive oversegmentation and feature point matching. IEEE Trans. Inf. Forensics Secur. 10(8), 1705–1716 (2015) 15. Elaskily, M.A., et al.: Comparative study of copy-move forgery detection techniques. In: 2017 Intl Conference on Advanced Control Circuits Systems (ACCS) Systems & 2017 Intl Conf on New Paradigms in Electronics & Information Technology (PEIT). IEEE, 2017. 16. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6386951/. 17. https://en.wikipedia.org/wiki/Scale-invariant_feature_transform.


18. Pandey, R.C., et al.: Fast and robust passive copy-move forgery detection using SURF and SIFT image features. In: 2014 9th International conference on industrial and information systems (ICIIS). IEEE, 2014. 19. Li, J., et al.: Segmentation-based image copy-move forgery detection scheme. IEEE Trans. inf. Forensics Secur. 10(3), 507–518 (2014) 20. Silva, E., et al.: Going deeper into copy-move forgery detection: exploring image telltales via multi-scale analysis and voting processes. J. Vis. Commun. Image Representation 29, 16–32 (2015) 21. Huang, H.-Y., Ciou, A.-J.: Copy-move forgery detection for image forensics using the superpixel segmentation and the Helmert transformation. EURASIP J. Image Video Process. 2019(1), 1–16 (2019) 22. Pun, C.-M., Chung, J.-L.: A two-stage localization for copy-move forgery detection. Inf. Sci. 463, 33–55 (2018) 23. Hegazi, A., Taha, A., Selim, M.M.: An improved copy-move forgery detection based on densitybased clustering and guaranteed outlier removal. J. King Saud Univ.-Comput. Inf. Sci. (2019). 24. Al-Hammadi, M.: Copy Move Forgery Detection In Digital Images Based On Multiresolution Techniques. MSc, Computer Engineering, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh (2013) 25. Nithiya, R., Veluchamy, S.: Key point descriptor based copy and move image forgery detection system. In: 2016 Second International Conference on Science Technology Engineering and Management (ICONSTEM). IEEE, 2016. 26. Hosny, K.M., Hamza, H.M., Lashin, N.A.: Copy-move forgery detection of duplicated objects using accurate PCET moments and morphological operators. Imaging Sci. J. 66(6), 330–345 (2018) 27. Ramya, M., Sridevi. P.: Image forgery detection using improved SLIC (2016). 28. Resmi, M.R., Vishnukumar, S.: A novel segmentation based copy-move forgery detection in digital images. In: 2017 International Conference on Networks & Advances in Computational Technologies (NetACT). IEEE, 2017 29. Fule, N., Mane, V.: Improved matching technique in digital image region duplication. In: 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, 2018, pp. 427–430. https://doi.org/10.1109/ICECA.2018.8474683. 30. Mahfoudi, G., Morain-Nicollier, F., Retraint F., PIC, M.: Copy and move forgery detection using SIFT and local color dissimilarity maps. In: 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Ottawa, ON, Canada, 2019, pp. 1–5. https://doi.org/ 10.1109/GlobalSIP45357.2019.8969355. 31. Ravan J., Thanuja, Image forgery detection against forensic image digital tampering. In: 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), Belgaum, India, 2018, pp. 315–321. https://doi.org/10.1109/CTEMS.2018. 8769121. 32. Raskar, P.S., Shah, S.K.: A fast copy-move forgery detection using global and local features. In, : 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA). Pune, India 2019, pp. 1–4 (2019). https://doi.org/10.1109/ICCUBEA47591.2019. 9128584 33. Kurien, N.A., Danya, S., Ninan, D., Heera Raju, C., David, J.: Accurate and efficient copymove forgery detection. In: 2019 9th International Conference on Advances in Computing and Communication (ICACC), Kochi, India, 2019, pp. 130–135. https://doi.org/10.1109/ICACC4 8162.2019.8986157. 34. 
Chowdhury, M., Shah, H., Kotian, T., Subbalakshmi, N., David, S.S.: Copy-move forgery detection using SIFT and GLCM-based texture analysis. In: TENCON 2019—2019 IEEE Region 10 Conference (TENCON), Kochi, India, 2019, pp. 960–964, doi: https://doi.org/10. 1109/TENCON.2019.8929276. 35. Jaafar, R.H., Rasool, Z.H., Alasadi, A.H.H.: New copy-move forgery detection algorithm. In: 2019 International Russian Automation Conference (RusAutoCon) Sochi, Russia, 2019, pp. 1–5. https://doi.org/10.1109/RUSAUTOCON.2019.8867813


36. Nuari, R., Utami, E., Raharjo, S.: Comparison of scale invariant feature transform and speed up robust feature for image forgery detection copy move. In: 2019 4th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 2019, pp. 107–112. https://doi.org/10.1109/ICITISEE48480.2019.900 3761. 37. William, Y., Safwat, S., Salem, M. A.: Robust image forgery detection using point feature analysis, In: 2019 Federated Conference on Computer Science and Information Systems (FedCSIS), Leipzig, Germany, 2019, pp. 373–380. https://doi.org/10.15439/2019F227. 38. Wu, S., Wu, Z., Lai, W., Shen, D.: Localizing JPEG image forgeries via SIFT and DCT coefficient analysis. In: 2019 IEEE 19th International Conference on Communication Technology (ICCT), Xi’an, China, 2019, pp. 1635–1638. https://doi.org/10.1109/ICCT46805.2019. 8947235. 39. Narayanan, S.S., Gopakumar, G.: Recursive block based keypoint matching For copy move image forgery detection. In: 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 2020, pp. 1–6, doi: https:// doi.org/10.1109/ICCCNT49239.2020.9225658. 40. Chen, H., Yang, X., Lyu, Y.: Copy-move forgery detection based on keypoint clustering and similar neighborhood search algorithm. IEEE Access 8, 36863–36875 (2020). https://doi.org/ 10.1109/ACCESS.2020.2974804

Nonlinear Autoregressive Exogenous ANN Algorithm-Based Predicting of COVID-19 Pandemic in Tamil Nadu M. Venkateshkumar, A. G. Sreedevi, S. A. Lakshmanan, and K. R. Yogesh kumar

Abstract The COVID-19 pandemic continues to impact the health and lives of a large population, as well as the economies of several countries worldwide. The exponential growth in the number of positive cases necessitates a prediction model for effective and judicious redistribution of available resources. This paper's main objective is to predict the growth and spread of this pandemic to help the government take the necessary administrative decisions for healthcare preparedness and management. The research work uses artificial neural network (ANN) techniques as an efficient tool to predict COVID-19 cases. In India, Tamil Nadu is one of the severely affected states. This paper proposes a novel idea of modeling a nonlinear autoregressive exogenous (NARX) ANN to predict the number of COVID-19 positive cases, discharges, and deaths in the near future in Tamil Nadu. The proposed prediction network is compared with a feed-forward neural network (FFNN) and a cascaded feed-forward neural network (CFNN); NARX is found to achieve better agreement with the actual data. Keywords Artificial neural network · COVID-19 · Cascaded feed-forward neural network · Feed-forward neural network · Nonlinear autoregressive exogenous ANN

M. Venkateshkumar (B) · S. A. Lakshmanan Department of Electrical and Electronics Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, India e-mail: [email protected] S. A. Lakshmanan e-mail: [email protected] A. G. Sreedevi Department of Computer Science Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, India e-mail: [email protected] K. R. Yogesh kumar Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_44


1 Introduction The COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), originated in Wuhan, Hubei Province, People's Republic of China (PRC) in late December 2019, when a case of unidentified pneumonia was reported [1]. The majority of countries across the globe have been affected by this infectious pandemic. Thousands of deaths were reported worldwide due to a lack of vaccines and antiviral drugs [2]. Despite a low mortality rate [3], this virus has high infectivity and transmissibility. The fatality rate was higher for patients with hypertension, diabetes, cardiovascular disease, chronic respiratory diseases, and cancer. The situation is still the same, with protocols defining only symptom-based therapy rather than specific antivirals to directly mitigate the virus's effect [4, 5]. As per available data, 80% of affected people have mild infections, while others do not develop any symptoms. The warning symptoms include fever, cough, shortness of breath, fatigue, loss of smell and taste, and pneumonia. The spread of COVID-19 is driven largely by pre-symptomatic people who are in the incubation period. According to WHO reports, coughs and sneezes are the standard ways for the virus to spread. The affected countries enforce preventive measures including voluntary social distancing, washing hands often, sterilizing regularly touched surfaces, and avoiding touching the mouth, nose, and face [6]. As a preventive measure, authorities of affected countries also enforced complete lockdowns, closing down public transport, schools, universities, and offices. Several countries have established dedicated healthcare centers to treat COVID-19 affected patients. Artificial intelligence (AI) techniques can play a vital role in predicting the outbreak of viral infection and modeling drugs for it. In this pandemic situation, paper [7] discusses how AI can help the clinical field. These techniques could be the key to predicting such a pandemic's risks and effects [8, 9]. Several researchers are developing new algorithms and modified prediction models to mitigate the COVID-19 outbreak in society. In [9], a Bayesian-optimization-guided shallow long short-term memory (LSTM) model is proposed for foreseeing the country-specific hazard of a disease outbreak. Another work [10] predicted the situation of COVID-19 patients in South Korea using the artificial neural network (ANN) principle to predict recovered and death cases. The work described in [11] estimated the maximal number of patients by training and transforming a time-series dataset into a regression dataset and then using a multilayer perceptron (MLP) ANN. Another interesting work was found in [12], where a hybrid AI model combining natural language processing (NLP) and the long short-term memory (LSTM) network is proposed for COVID-19 prediction based on varieties of the infection rates. Machine learning-based forecasting methods also proved relevant in this pandemic time. Using a decision-making approach, [13] predicted the trend in newly infected cases, deaths, and recoveries for the next ten days. Another approach was discussed in [14], which used social IoT, optimized the problem into a minimum weight vertex cover, and then applied reinforcement learning methods


for early identification of COVID-19 cases. Big data analysis is no exception and plays a significant role here. For example, in [15], the authors proposed a spatio-temporal (HiRES) model for detecting suspected individuals based on trajectory big data and mean-field theory. Some other prediction models include a semi-supervised network for detection using chest CT scans [16], soft computing models [17], and adaptive network-based fuzzy inference systems [18]. India, with its large population, is one of the worst affected countries trying hard to reduce the spread of COVID-19. More disastrous is its economic and social impact. The Indian economy is stressed by the high infectivity and transmissibility of this disease [19, 20]. A detailed analysis of India's COVID-19 positive cases is done in [21] using deep learning models. Tamil Nadu has the second-highest number of positive cases in India after Maharashtra, but its fatality rate is the lowest among Indian states. The first case of the COVID-19 pandemic in Tamil Nadu was reported on March 7, 2020. By the end of April, there were 2058 reported cases, and by November, the state crossed 770,000 in count. The present situation of COVID-19 in Tamil Nadu has led to an urgent need to perform predictive modeling of the pandemic's growth and spread. The exponential growth of positive cases placed considerable stress on administration and health officials to make prior arrangements for accommodating COVID-19 positive patients [22–24]. This demands an urgent prediction model that could predict the number of new cases, based on which the administration can make provisions to accommodate them. This paper's main objective is to predict the growth and spread of this pandemic to help the government make the necessary arrangements to manage emergencies. This paper proposes a novel idea by modeling a nonlinear autoregressive exogenous (NARX) artificial neural network (ANN) to predict the number of COVID-19 positive cases, discharges, and deaths in Tamil Nadu. NARX is a better tool to forecast time-series data, and the model contains many layers with feedback to give accurate predictions. The MATLAB tool is used to design the proposed network, and the results are compared with the feed-forward (FFNN) and cascaded feed-forward (CFNN) networks. The paper is organized as follows. In Sect. 2, the modeling of FFNN, CFNN, and the proposed NARX networks is discussed. Section 3 deals with the design and development of the proposed NARX model for COVID-19 predictions. In Sect. 4, the suggested NARX model's results are discussed and compared with various networks. Finally, Sect. 5 concludes the research work.

2 Modeling of ANN ANN is a powerful tool to analyze the outbreak of COVID-19. It can acquire, store, and recollect information based on the nonlinear mapping between input and output layers. The modeling of the FFNN, CFNN, and proposed NARX-ANN models is discussed in the following subsections.


Fig. 1 Basic structure of MLP

2.1 Feed-Forward Neural Network (FFNN) In FFNN, a weighted form of the signal is sent to the hidden layer between the input and output layers. A nonlinear activation function such as sigmoid, tangent, or ReLU is used to process the hidden layers' signal. Similarly, the same weighted form of the signal from the hidden layer is sent to the output layer and processed through the activation function. The basic structure of the MLP network is shown in Fig. 1. The mathematical equation related to MLP is written as:

$$y_i = f\left(\sum_{j=1}^{n} x_j w_{ij}\right) \quad (1)$$

where $f$ is the activation function, $i$ is the neuron's index, $j$ is the input index, $x_j$ is the input vector, $w_{ij}$ is the weight vector, and $y_i$ is the output vector.
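As a minimal illustration of Eq. (1), the Python sketch below computes one layer's outputs with a sigmoid activation; the layer sizes and random weights are illustrative assumptions rather than values from this work.

```python
import numpy as np

def sigmoid(z):
    # Nonlinear activation f used in Eq. (1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.7])      # input vector x_j (e.g., day and samples tested)
W = np.random.randn(10, 2)    # weight matrix w_ij: 10 neurons, 2 inputs (assumed sizes)
y = sigmoid(W @ x)            # y_i = f(sum_j x_j * w_ij) for every neuron i
print(y.shape)                # -> (10,)
```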

2.2 Cascaded Feed-Forward Neural Network CFNN is similar to FFNN, but it additionally contains links connecting the input and every previous layer to the following layers. The connection through the hidden layer of CFNN is nonlinear via an activation function, and this connection is indirect. A neural network with a direct connection is formed when perceptron and multilayer networks are grouped between the input and output layers. Such a network configuration is termed a cascade feed-forward neural network (CFNN). The structure of the CFNN is shown in Fig. 2. The mathematical equation related to CFNN is written as:

$$y = \sum_{i=1}^{n} f_i\left(w_{ii}\, x_i\right) + f_o\left(\sum_{j=1}^{k} w_j^{o}\, f_j^{h}\left(\sum_{i=1}^{n} w_{ji}^{h}\, x_i\right)\right) \quad (2)$$

If the bias is considered in the input layer, the proposed neural network’s output is given in the following equation.


Fig. 2 Structure of CFNN network

$$y = \sum_{i=1}^{n} f_i\left(w_{ii}\, x_i\right) + f_o\left(w^{b} + \sum_{j=1}^{k} w_j^{o}\, f_j^{h}\left(w_j^{b} + \sum_{i=1}^{n} w_{ji}^{h}\, x_i\right)\right) \quad (3)$$
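The cascade structure of Eq. (3) can be sketched in a few lines of Python; the sizes, weights, and biases below are illustrative assumptions, with tanh standing in for the activation functions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 2, 10                        # assumed numbers of inputs and hidden neurons

x = rng.normal(size=n)              # input vector x_i
w_in = rng.normal(size=n)           # direct input->output weights w_ii
W_h = rng.normal(size=(k, n))       # input->hidden weights w_ji^h
b_h = rng.normal(size=k)            # hidden biases w_j^b
w_o = rng.normal(size=k)            # hidden->output weights w_j^o
b_o = rng.normal()                  # output bias w^b

hidden = np.tanh(W_h @ x + b_h)                            # f_j^h(w_j^b + sum_i w_ji^h x_i)
y = np.tanh(w_in * x).sum() + np.tanh(b_o + w_o @ hidden)  # Eq. (3): direct path + hidden path
```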

2.3 Nonlinear Autoregressive Exogenous (NARX)-ANN Model The NARX model is a dynamic recurrent neural network that encloses several layers with feedback connections. It has previously been applied by many researchers to model nonlinear processes. It is used for time-series calculation and is well suited to the nonlinear analysis required for COVID-19 prediction applications. NARX, with its multiple feedback layers, is considered a dynamic recurrent neural network. The NARX system has the memory capacity to store previous data, which is used to improve the neural network's performance. The open-loop and closed-loop models of NARX-ANN are shown in Fig. 3. The mathematical equation mapping the open- and closed-loop structure is given in Eq. (4).

$$y(t+1) = F\big(y(t),\, y(t-1),\, \ldots,\, y(t-n_y),\; x(t+1),\, x(t),\, x(t-1),\, \ldots,\, x(t-n_x)\big) \quad (4)$$

Fig. 3 Open-loop and closed-loop model of NARX-ANN


In the open-loop control structure, predicted data are projected from the present and past inputs and the actual past outputs. In the closed-loop control structure, prediction is achieved from the present and past values of x(t) and the past predicted y(t) values. Initially, an open-loop NARX network is used because it is purely feed-forward, so conventional training methods can be employed. Finally, the proposed NARX-ANN is converted into a closed-loop structure for multi-step predictions.
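The closed-loop recursion of Eq. (4) can be sketched as follows; the delay orders, the toy two-layer network standing in for the learned mapping F, and the random input series are assumptions for illustration only, not the trained MATLAB model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_y, n_x = 2, 2                            # assumed feedback and input delay orders

def F(z, W1, W2):
    """Toy two-layer network standing in for the learned NARX mapping of Eq. (4)."""
    return W2 @ np.tanh(W1 @ z)

W1 = rng.normal(size=(10, n_y + n_x + 3))  # hidden layer (10 neurons, assumed)
W2 = rng.normal(size=(1, 10))              # output layer

x = rng.normal(size=100)                   # exogenous series (e.g., samples tested)
y = np.zeros(100)                          # output series (e.g., positive cases)
for t in range(max(n_y, n_x), 99):
    # Closed loop: past *predicted* y values and present/past x values feed F
    z = np.concatenate([y[t - n_y : t + 1],   # y(t), ..., y(t - n_y)
                        x[t - n_x : t + 2]])  # x(t+1), x(t), ..., x(t - n_x)
    y[t + 1] = F(z, W1, W2)[0]
```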

3 Design of ANN for COVID-19 Prediction In this research work, a two-layer NARX network is developed to predict COVID-19 in Tamil Nadu. The network has an input (recurrent) layer with two neurons, a hidden layer with ten neurons, and an output layer with three neurons, as shown in Fig. 4. The NARX network has two inputs: the day and the number of samples tested. The NARX network outputs three predicted quantities: the number of positive cases, discharges, and deaths. The training data are collected from May 1, 2020, to August 28, 2020, from the Health & Family Welfare Department, Government of Tamil Nadu website. Almost 120 days of data are used for training the NARX network. The input and target data are randomly divided, with 85% of the data for training, 15% for validation, and 5% for testing. The network is trained with the Levenberg–Marquardt algorithm [25]. The NARX closed-loop and NARX open-loop models have been developed in the MATLAB environment, as shown in Figs. 5 and 6. The overall training (0.99901), validation (0.98182), testing (0.99452), and regression (0.99828) data are presented in Fig. 7. The developed NARX network's overall performance shows that the data are well trained and the network can be used for accurate COVID-19 predictions.

Fig. 4 NARX model


Fig. 5 NARX closed-loop network

Fig. 6 Series—parallel NARX network

Figure 8 represents the error analyses of the trained NARX network. The response of the output element of the NARX network—time-series performance is presented in Fig. 9. Finally, the autocorrelation of error of trained NARX is shown in Fig. 10.

4 Result and Discussions The proposed simulation model of the closed-loop NARX network has been compared with other ANN networks such as CFNN, the NARX series–parallel network, and FFNN to predict COVID-19 in Tamil Nadu. A comparative analysis of the various ANN networks for the prediction of COVID-19 positive cases and deaths in Tamil Nadu is presented in Figs. 11, 12, and 13, respectively. The comparative studies show the closed-loop NARX network predictions to be close to the actual data. The closed-loop NARX network has also been developed to predict the number of samples to be tested in the future for COVID-19 symptoms. The simulation of various ANN algorithms for predicting positive cases and its comparison with actual values is presented in Table 1. Similarly, a comparative analysis for the number of discharges and the number of deaths with actual data is


Fig. 7 Overall performance of a closed-loop NARX network

presented in Tables 2 and 3. Finally, the proposed closed-loop NARX network has been tested with randomly chosen days, and its simulated data are presented in Table 4. The same network's predictions of the COVID-19 status for September, October, November, and December are given in Tables 5, 6, and 7 (for randomly chosen days).


Fig. 8 NARX error analyses

5 Conclusion This paper modeled a novel NARX-ANN network to forecast the number of COVID-19 positive cases, discharges, and death rates in the state of Tamil Nadu, India. The proposed NARX network is found to be more accurate in prediction compared with FFNN and CFNN. The paper also used NARX to estimate the possible number of COVID-19 cases for randomly selected days of September 2020. Based on the comparative study, NARX is recommended for COVID-19 predictions. These predictions assume that the public meticulously follows the government regulatory measures introduced from time to time to combat the spread of COVID-19 and that these regulatory measures will continue to be in place in the coming months until a vaccine or cure is realized.


Fig. 9 The response of output element NARX network—time-series performance


Fig. 10 NARX autocorrelation of error

Fig. 11 Comparative analyses of various network COVID 19 positive cases—prediction


Fig. 12 Comparative analyses of various network COVID 19 death–prediction

Fig. 13 NARX-based prediction of future sampling data


Table 1 Comparative analyses of various ANN algorithms for COVID-19 positive cases prediction

Day no (from 1st May) | Day | No. of samples tested | Actual | CFNN | NARX | TDN | FF
116 | 24 | 70,023 | 5967 | 6494 | 6151 | 5892 | 5701
117 | 25 | 70,221 | 5951 | 6526 | 6152 | 5894 | 5683
118 | 26 | 75,500 | 5958 | 6873 | 6404 | 5956 | 5987
119 | 27 | 76,345 | 5981 | 6945 | 6436 | 5954 | 6009
120 | 28 | 75,103 | 5996 | 6888 | 6364 | 5927 | 5889
121 | 29th August | 80,988 | 6352 | 7272 | 6644 | 5985 | 6419
122 | 30th August | 83,250 | 6495 | 7432 | 6745 | 6001 | 6751
123 | 31st August | 75,100 | 5956 | 6946 | 6335 | 5886 | 5797

Table 2 Comparative analyses of various algorithms for COVID-19 discharge prediction

Day no (from 1st May) | Day | No. of samples tested | Actual | CFNN | NARX | TDN | FF
118 | 24 | 70,023 | 6129 | 6300 | 5980 | 6509 | 6161
117 | 25 | 70,221 | 6998 | 6340 | 5987 | 6595 | 6163
118 | 26 | 75,500 | 5606 | 6653 | 6024 | 6276 | 5675
119 | 27 | 76,345 | 5870 | 6728 | 6035 | 6313 | 5619
120 | 28 | 75,103 | 5752 | 6691 | 6034 | 6547 | 5838
121 | 29th August | 80,988 | 6045 | 7036 | 6068 | 6045 | 5467
122 | 30th August | 83,250 | 6406 | 7187 | 6083 | 5880 | 5825
123 | 31st August | 75,100 | 5956 | 6780 | 6050 | 6896 | 5980

Table 3 Comparative analyses of various algorithms for COVID-19 death prediction

Day no (from 1st May) | Day | No. of samples tested | Actual | CFNN | NARX | TDN | FF
118 | 24 | 70,023 | 97 | 120 | 87 | 103 | 104
117 | 25 | 70,221 | 107 | 120 | 87 | 102 | 104
118 | 26 | 75,500 | 118 | 124 | 92 | 105 | 109
119 | 27 | 76,345 | 109 | 125 | 92 | 104 | 110
120 | 28 | 75,103 | 102 | 125 | 90 | 102 | 107
121 | 29th August | 80,988 | 87 | 128 | 96 | 105 | 117
122 | 30th August | 83,250 | 94 | 130 | 98 | 106 | 121
123 | 31st August | 75,100 | 91 | 128 | 90 | 99 | 106


Table 4 Random analyses of NARX for COVID-19 prediction

Day (from 1st May) | No. of samples tested | Positive (Actual) | Positive (Predicted) | Discharge (Actual) | Discharge (Predicted) | Death (Actual) | Death (Predicted)
45 | 18,782 | 1974 | 2163 | 1138 | 1042 | 38 | 42
50 | 27,537 | 2115 | 2779 | 1630 | 1419 | 41 | 58
55 | 32,079 | 2865 | 3107 | 2424 | 1856 | 33 | 60
60 | 30,039 | 3947 | 3240 | 2212 | 2396 | 62 | 57
65 | 36,164 | 4280 | 4017 | 2214 | 3560 | 65 | 62
70 | 42,367 | 4231 | 4685 | 3994 | 4497 | 65 | 68
75 | 41,357 | 4526 | 4747 | 4743 | 4874 | 67 | 65
80 | 52,993 | 4979 | 5437 | 4059 | 5260 | 78 | 76
85 | 65,150 | 6785 | 6136 | 6504 | 5540 | 88 | 89
90 | 60,794 | 6426 | 5861 | 5927 | 5611 | 82 | 82
95 | 58,211 | 5609 | 5691 | 5800 | 5677 | 109 | 80
100 | 67,533 | 5883 | 6163 | 5043 | 5820 | 118 | 90
105 | 62,275 | 5835 | 5844 | 5146 | 5837 | 119 | 81
110 | 67,025 | 5860 | 6051 | 5236 | 5919 | 121 | 95
112 | 75,076 | 5986 | 6045 | 5742 | 5982 | 116 | 92
114 | 73,547 | 5980 | 6346 | 5603 | 5987 | 80 | 87

Table 5 September 2020 prediction using NARX network

Day (from 1st May) | No. of samples tested | Positive (Predicted) | Discharge (Predicted) | Death (Predicted)
124 | 80,970 | 6609 | 6084 | 95
125 | 81,873 | 6641 | 6093 | 96
126 | 82,678 | 6669 | 6101 | 97
127 | 83,388 | 6692 | 6108 | 97
128 | 84,000 | 6709 | 6115 | 96
129 | 84,530 | 6723 | 6122 | 98
130 | 84,971 | 6732 | 6128 | 98
131 | 85,331 | 6738 | 6133 | 98
132 | 85,616 | 6739 | 6138 | 97
136 | 86,083 | 6715 | 6155 | 97
140 | 85,571 | 6648 | 6167 | 96
144 | 83,441 | 6505 | 6174 | 93
148 | 81,161 | 6384 | 6172 | 91
150 | 79,123 | 6286 | 6168 | 89

Table 6 Random analysis of October and November 2020 prediction using NARX network

Day (from 1st May) | Date | Actual | Predicted
173 | 20 Oct | 3094 | 2561
180 | 27 Oct | 2522 | 2620
188 | 04 Nov | 2487 | 2349
193 | 09 Nov | 2257 | 2245
202 | 18 Nov | 1714 | 2261
206 | 22 Nov | 1655 | 2090

Table 7 November and December 2020 prediction using NARX network

Day | Date | Predicted
173 | 20 Oct | 2561
180 | 27 Oct | 2620
188 | 04 Nov | 2349
193 | 09 Nov | 2245
202 | 18 Nov | 2261
206 | 22 Nov | 2090
210 | 26 Nov | 2021
214 | 30 Nov | 1945
218 | 4 Dec | 1867
222 | 8 Dec | 1789
226 | 12 Dec | 1709
230 | 16 Dec | 1629
239 | 25 Dec | 1449
245 | 31 Dec | 1329

References

1. Huang, C., Wang, Y., Li, X., Ren, L., Zhao, J., Hu, Y., Zhang, L., Fan, G., Xu, J., Gu, X.: Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet 395, 497–506 (2020)
2. Bhagavathula, A.S., Rahmani, J., Aldhaleei, W.A., Kumar, P., Rovetta, A.: Global, regional and national incidence and case-fatality rates of novel coronavirus (COVID-19) across 154 countries and territories: a systematic assessment of cases reported from January to March 16, 2020. medRxiv (2020)
3. Liu, Y., Gayle, A.A., Wilder-Smith, A., Rocklöv, J.: The reproductive number of COVID-19 is higher compared to SARS coronavirus. J. Travel Med. (2020)
4. Velavan, T.P., Meyer, C.G.: The COVID-19 epidemic. Trop. Med. Int. Health 25, 278 (2020)
5. Letko, M., Marzi, A., Munster, V.: Functional assessment of cell entry and receptor usage for SARS-CoV-2 and other lineage B betacoronaviruses. Nat. Microbiol. 5, 562–569 (2020)
6. Tomar, A., Gupta, N.: Prediction for the spread of COVID-19 in India and effectiveness of preventive measures. Sci. Total Environ. 138762 (2020)
7. Hussain, A.A., Bouachir, O., Al-Turjman, F., Aloqaily, M.: AI techniques for COVID-19. IEEE Access 8, 128776–128795 (2020)


8. Wong, Z.S., Zhou, J., Zhang, Q.: Artificial intelligence for infectious disease big data analytics. Infect. Dis. Health 24, 44–48 (2019)
9. Pal, R., Sekh, A.A., Kar, S., Prasad, D.K.: Neural network based country wise risk prediction of COVID-19. arXiv preprint arXiv:2004.00959 (2020)
10. Al-Najjar, H., Al-Rousan, N.: A classifier prediction model to predict the status of Coronavirus COVID-19 patients in South Korea (2020)
11. Car, Z., Baressi Šegota, S., Anđelić, N., Lorencin, I., Mrzljak, V.: Modeling the spread of COVID-19 infection using a multilayer perceptron. Comput. Math. Methods Med. 2020 (2020)
12. Zheng, N., Du, S., Wang, J., Zhang, H., Cui, W., Kang, Z., Yang, T., Lou, B., Chi, Y., Long, H.: Predicting COVID-19 in China using hybrid AI model. IEEE Trans. Cybern. (2020)
13. Rustam, F., Reshi, A.A., Mehmood, A., Ullah, S., On, B., Aslam, W., Choi, G.S.: COVID-19 future forecasting using supervised machine learning models. IEEE Access (2020)
14. Wang, B., Sun, Y., Duong, T.Q., Nguyen, L.D., Hanzo, L.: Risk-aware identification of highly suspected COVID-19 cases in social IoT: a joint graph theory and reinforcement learning approach. IEEE Access 8, 115655–115661 (2020)
15. Zhou, C., Yuan, W., Wang, J., Xu, H., Jiang, Y., Wang, X., Wen, Q.H., Zhang, P.: Detecting suspected epidemic cases using trajectory big data. arXiv preprint arXiv:2004.00908 (2020)
16. Mohammed, A.K., Wang, C., Zhao, M., Ullah, M., Naseem, R., Wang, H., Pedersen, M., Alaya Cheikh, F.: Semi-supervised network for detection of COVID-19 in chest CT scans (2020)
17. Ardabili, S.F., Mosavi, A., Ghamisi, P., Ferdinand, F., Varkonyi-Koczy, A.R., Reuter, U., Rabczuk, T., Atkinson, P.M.: COVID-19 outbreak prediction with machine learning. Available at SSRN 3580188 (2020)
18. Pinter, G., Felde, I., Mosavi, A., Ghamisi, P., Gloaguen, R.: COVID-19 pandemic prediction for Hungary; a hybrid machine learning approach. Mathematics 8, 890 (2020)
19. Vinusha, H.M., Shivamallu, C., Prasad, S.K., Begum, M., Gopinath, S.M., Srinivasa, C., Chennabasappa, L.K., Navya Shree, B., Kollur, S.P., Ankegowda, V.M., Balasubramanian, S.: Understanding the pathogen evolution and transmission prevention measures: recent findings on molecular interventions towards COVID-19 therapeutics via hints from the past. Int. J. Res. Pharm. Sci. 11(SPL1), 442–451 (2020)
20. Lakshmi Priyadarsini, S., Suresh, M.: Factors influencing the epidemiological characteristics of pandemic COVID 19: a TISM approach. Int. J. Healthc. Manag. 13(2), 89–98 (2020)
21. Arora, P., Kumar, H., Panigrahi, B.K.: Prediction and analysis of COVID-19 positive cases using deep learning models: a descriptive case study of India. Chaos, Solitons Fractals 139, 110017 (2020)
22. Tobías, A.: Evaluation of the lockdowns for the SARS-CoV-2 epidemic in Italy and Spain after one month follow up. Sci. Total Environ. 138539 (2020)
23. Wang, L., Li, J., Guo, S., Xie, N., Yao, L., Cao, Y., Day, S.W., Howard, S.C., Graff, J.C., Gu, T.: Real-time estimation and prediction of mortality caused by COVID-19 with patient information based algorithm. Sci. Total Environ. 138394 (2020)
24. Wang, L.-S., Wang, Y.-R., Ye, D.-W., Liu, Q.-Q.: A review of the 2019 novel coronavirus (COVID-19) based on current evidence. Int. J. Antimicrob. Agents, 105948 (2020)
25. Venkateshkumar, M., Indumathi, R.: Comparative analysis of hybrid intelligent controller based MPPT of fuel cell power system. In: 2017 IEEE International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), Chennai (2017)

Detecting Image Similarity Using SIFT Kurra Hima Sri, Guttikonda Tulasi Manasa, Guntaka Greeshmanth Reddy, Shahana Bano, and Vempati Biswas Trinadh

Abstract Manually identifying the similarity between any two images is a difficult task. So, we have come up with an image similarity detection model that identifies the similarities between two images. Features of one image are compared with those of the other to find how similar they are. The scale-invariant feature transform (SIFT) algorithm is used to detect similarity between input images and also to calculate a similarity score indicating the extent to which the images match. SIFT detects the keypoints and computes their descriptors. We find the best matches of the descriptors by using the FLANN-based algorithm, which takes the descriptors of the first image and compares them with those of the second image. The accuracy achieved for translational image similarity is 60% and for rotational image similarity is 90%, while the feature matching similarity differs depending upon the given inputs. Keywords SIFT · Keypoints · Descriptors · FLANN · Matching · Similarity

1 Introduction Image matching plays an important role in the field of computer vision. It is used to recognize objects and to retrieve images [1]. In recent years, with thorough research on the concepts of artificial intelligence and computer vision, the implementation of precise and real-time image matching has turned out to be an intense research topic [2]. Local features of an image typically have some invariance to illumination, image rotation, and image scaling relative to global features. At present, there are many algorithms for detecting features, namely SIFT, SURF, ORB and BRISK [3]. K. H. Sri (B) · G. T. Manasa · G. G. Reddy · S. Bano Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India S. Bano e-mail: [email protected] V. B. Trinadh College of Arts and Sciences, Georgia State University, Atlanta, GA, USA © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_45


Among those algorithms, the SIFT algorithm is one of the best approaches, and it provides the best results by extracting the local features of an image. Professor David Lowe proposed the SIFT algorithm. The strong trait of the SIFT algorithm is its robustness to image transformations when matching keypoints [4]. SIFT first extracts all the keypoints of an image, and then it selects the potential keypoints by eliminating the keypoints that have low contrast and high edge response. Then, it computes the keypoint descriptors, which are required to match the features. Image descriptors are widely used in computer vision applications such as image matching, image classification and image retrieval [5]. The usual SIFT descriptor uses orientation histograms to characterize the shape of a specific region [6]. The matcher used with SIFT in this approach is the FLANN-based matcher. FLANN is a collection of many optimized algorithms that helps to search for the fast nearest neighbour. It finds the first two best matches of the keypoints to compute their ratio, and it works fast for large datasets.

2 Related Work Based on feature detection, there are many algorithms, namely scale-invariant feature transform (SIFT), speeded up robust features (SURF), binary robust independent elementary features (BRIEF), and oriented FAST and rotated BRIEF (ORB). Compared with those algorithms, SIFT provides the best results for matching images. The SIFT feature detector, when combined with its descriptor, gives better results compared to the others. Although ORB is faster than SIFT, SIFT gives more accurate results, so we implement the SIFT algorithm to find image similarities. The FLANN-based matcher gives good results, and it is also faster than the BF matcher. To increase the speed or precision, the parameters of FLANN can be modified. So, the FLANN matcher is used with the SIFT algorithm in order to match the similarities.

3 Procedure 3.1 Pixel Difference Pixel difference is helpful to check whether the given input images are equal or not. Suppose the pixel value of the first image at a certain position is 255 and the pixel value of the second image at the corresponding position is also 255; then the difference between these pixel values is 0. Generally, a coloured image contains three channels, namely red, green and blue, and the pixel difference is performed on each channel. If each channel gives 0, then we can say that both images are equal; otherwise, they are not equal. Figure 1 shows the basic working procedure of image similarity detection, from which the process of matching image similarities can be identified.


Fig. 1 Block diagram for the image similarity detection
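A brief OpenCV sketch of this per-channel equality check is given below; the file names are placeholder assumptions.

```python
import cv2

# Load the two images to compare (placeholder file names)
img1 = cv2.imread("image1.png")
img2 = cv2.imread("image2.png")

if img1 is not None and img2 is not None and img1.shape == img2.shape:
    diff = cv2.absdiff(img1, img2)   # per-pixel absolute difference
    b, g, r = cv2.split(diff)        # check one channel at a time
    if (cv2.countNonZero(b) == 0 and
            cv2.countNonZero(g) == 0 and
            cv2.countNonZero(r) == 0):
        print("The images are equal")
    else:
        print("The images are not equal")
else:
    print("The images are not equal (missing or different dimensions)")
```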

4 SIFT Algorithm The scale-invariant feature transform (SIFT) algorithm is one of the feature detection algorithms which detects and describes the local features present in images. It detects the keypoints present in an image and computes their descriptors.

4.1 Extrema Detection In order to detect the local extrema, the image is searched over scale and space. To find these, every pixel is compared with its neighbouring pixels. Here, the neighbours include the surrounding pixels in the same scale and the nine pixels in each of the previous and next scales. This means that to find the local extrema, every pixel value is to be compared with 26 other pixel values. If it is a local extremum, then we can say that it is a potential keypoint.

4.2 Keypoint Selection A final check on these keypoints is required to select the most accurate keypoints. After performing the contrast and edge tests, the keypoints having very low contrast are eliminated. From the remaining keypoints, those that are very close to an edge and have a high edge response are eliminated. The keypoints that survive the elimination are considered accurate keypoints.

4.3 Keypoint Descriptors To create the descriptor from those accurate keypoints, a 16 × 16 block present in the neighbourhood of a certain keypoint is taken. This 16 × 16 block is separated into sixteen 4 × 4 sub-blocks. For each of these 16 sub-blocks, an orientation histogram of 8 bins is generated. So, we have 128 bin values in total for each keypoint, and these are represented in the form of a vector to get the keypoint descriptor.

4.4 Keypoint Matching For keypoint matching, we used the FLANN-based matcher to match the similar features present between the images based on their keypoints.

4.4.1 FLANN-Based Matcher

Fast Library for Approximate Nearest Neighbours (FLANN) is used to search for the fast nearest neighbour and to identify features of high dimension. The FLANN matcher is well suited to large datasets, gives good results, and is faster than the BF matcher; its parameters can be modified to increase speed or precision, which is why it is used with the SIFT algorithm to match the similarities. The matcher finds the first two best matches of each keypoint and then computes the ratio of the first closest distance to the second closest distance. If the ratio is less than the threshold value, we consider the match a good point; otherwise, it is rejected. The matches of these good points are drawn, and the similarity score is calculated with the help of the good points.


Similarity score = (number of good points / number of keypoints) × 100

Figure 2 shows the detailed workflow of image similarity detection, including how the SIFT and FLANN algorithms are used to find the similarity matches and the similarity score.
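A compact Python/OpenCV sketch of the whole pipeline is given below; the file names, the 0.7 ratio threshold, the FLANN parameters, and the use of the smaller keypoint count as the score denominator (chosen to be consistent with the scores reported later in Tables 1, 2, 4 and 5) are illustrative assumptions.

```python
import cv2

img1 = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
img2 = cv2.imread("image2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-dimensional SIFT descriptors
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN matcher with a KD-tree index, a common choice for SIFT descriptors
index_params = dict(algorithm=1, trees=5)   # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)   # two best matches per descriptor

# Ratio test: keep a match only if it is clearly better than the runner-up
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

similarity_score = len(good) / min(len(kp1), len(kp2)) * 100
print(len(kp1), len(kp2), len(good), round(similarity_score, 3))

# Draw the good matches for visual inspection
result = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                         flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2.imwrite("matches.png", result)
```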

5 Pseudocode

Step 1: Start.
Step 2: Give images as input, in which the first image is regarded as the original image while the second image is to be compared with the original image.
Step 3: Check whether the input images are equal or not. If the pixel difference between the two images is 0, then both images are equal; else the images are not equal.
Step 4: By using the SIFT algorithm, detect the points of extrema, the local contrast points, and the points along edges.
Step 5: Generate the keypoints and descriptors of the input images.
Step 6: Use the FLANN-based matcher to find the matches between the images, and store all the possible matches of the descriptors in an array.
Step 7: Assume the threshold value from which the quality of matches is decided.
Step 8: Find the two smallest points by calculating the distance.
Step 9: Check whether the ratio of the two smallest points is less than the threshold or not. If the ratio is smaller, consider the smallest keypoint in the other image as a matched keypoint, called a good point. Else repeat steps 6, 7, 8 and 9 until all the keypoints in the images are matched.
Step 10: Store the matches identified.
Step 11: Find the similarity score with the help of the good points.
Step 12: Draw all identified good matches.
Step 13: End.

6 Results To find the similarities, we performed experiments with the help of the SIFT and FLANN algorithms. This was done in Python; we used OpenCV as it is widely used for image processing. We took images as input from various sources to perform the experiments.

Fig. 2 Flow chart for image similarity detection


In Figs. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 and 17, the outputs obtained from the respective input images are shown. These outputs include the similarity matches and the similarity score. 1.

Input Images:

Fig. 3 Input image 1

Fig. 4 Input image 2

Fig. 5 Good matches between apples and similarity score


Fig. 6 Input image 1

Fig. 7 Input image 2

Here, two input images are taken as shown in Figs. 3 and 4. By using the SIFT algorithm, the keypoints of the two input images are detected and extracted, and the number of good matches is found. Those good matches are drawn using the FLANN-based matcher as shown in Fig. 5. The number of keypoints and good matches for the given input images is given in Table 1. Output Image 2.

Input Images:

Here, two input images are taken as shown in Figs. 6 and 7. By using the SIFT algorithm, the keypoints of the two input images are detected and extracted, and the number of good matches is found. Those good matches are drawn using the FLANN-based matcher as shown in Fig. 8. The number of keypoints and good matches for the given input images is given in Table 2.


Fig. 8 Good matches between flowers and similarity score
Fig. 9 Input image 1


Fig. 10 Input image 2

Fig. 11 Good matches between images and similarity score
Fig. 12 Input image 1


Fig. 13 Input image 2

Fig. 14 Good matches and similarity score for translation of images
Fig. 15 Input image 1


Fig. 16 Input image 2

Fig. 17 Good matches and similarity score for rotation of images

Table 1 Similarity matches for input 1
Image 1 keypoints | Image 2 keypoints | Good matches | Similarity score
145 | 145 | 145 | 100

Table 2 Similarity matches for input 2
Image 1 keypoints | Image 2 keypoints | Good matches | Similarity score
167 | 167 | 2 | 1.197

Table 3 Similarity matches for input 3
Image 1 keypoints | Image 2 keypoints | Good matches | Similarity score
105 | 145 | 0 | 0.0

Output Image 3.

Input Images:

Here, two input images are taken as shown in Figs. 9 and 10. By using the SIFT algorithm, the keypoints of the two input images are detected and extracted, and the number of good matches is found. Those good matches are drawn using the FLANN-based matcher as shown in Fig. 11. The number of keypoints and good matches for the given input images is given in Table 3. Output Image 4.

Input Images:

Here, two input images are taken as shown in Figs. 12 and 13. By using the SIFT algorithm, the keypoints of the two input images are detected and extracted, and the number of good matches is found. Those good matches are drawn using the FLANN-based matcher as shown in Fig. 14. The number of keypoints and good matches for the given input images is given in Table 4. Output Image 5.

Input Images:

Here, two input images are taken as shown in Figs. 15 and 16. By using the SIFT algorithm, the keypoints of the two input images are detected and extracted, and the number of good matches is found. Those good matches are drawn using the FLANN-based matcher as shown in Fig. 17. The number of keypoints and good matches for the given input images is given in Tables 5 and 6. Output Image: see Fig. 17 and Tables 5 and 6.

Table 4 Similarity matches for input 4
Image 1 keypoints | Image 2 keypoints | Good matches | Similarity score
429 | 486 | 282 | 65.734

Table 5 Similarity matches for input 5
Image 1 keypoints | Image 2 keypoints | Good matches | Similarity score
429 | 427 | 403 | 94.379

Table 6 Accuracy
Image transformation | SIFT (%)
Translation | 65.7
Rotation | 94.3

7 Conclusion The SIFT algorithm detects the keypoints and computes their descriptors. There are many other algorithms for feature matching, but the SIFT algorithm has good invariance to image translation and image rotation. The FLANN-based matching algorithm contains a collection of optimized algorithms for fast nearest-neighbour search, and it works faster than other matching algorithms for large datasets, but it may not always give the best matching results. Moreover, the complexity of the SIFT algorithm is very high, and it sometimes gives false matching points. In the future, this work can be extended to eliminate duplicate images.

References

1. Li, J., Wang, G.: An improved SIFT matching algorithm based on geometric similarity. In: IEEE 5th International Conference on Electronics Information and Emergency Communication, 2015
2. Zhao, J., Liu, H., Feng, Y., Yuan, S., Cai, W.: BE-SIFT: a more brief and efficient SIFT image matching algorithm for computer vision. In: IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015
3. Zhong, B., Li, Y.: Image feature point matching based on improved SIFT algorithm. In: IEEE 4th International Conference on Image, Vision and Computing (ICIVC), 2019
4. Joo, H.-B., Jeon, J.W.: Feature-point extraction based on an improved SIFT algorithm. In: 17th International Conference on Control, Automation and Systems (ICCAS), 2017
5. Tang, H., Tang, F.: AH-SIFT: augmented histogram based SIFT descriptor. In: 19th IEEE International Conference on Image Processing, 2012
6. Xiao, P., Cai, N., Tang, B., Weng, S., Wang, H.: Efficient SIFT descriptor via color quantization. In: IEEE International Conference on Consumer Electronics—China, 2014
7. Augustine, K.V., Dongjun, H.: Image similarity for rotation invariants image retrieval system. In: International Conference on Multimedia Computing and Systems, 2019
8. Regentova, E., Deng, S.: A wavelet-based technique for image similarity estimation. In: Proceedings International Conference on Information Technology: Coding and Computing (Cat. No. PR00540), 2000
9. Gilinsky, A., Manor, L.Z.: SIFTpack: a compact representation for efficient SIFT matching. In: IEEE International Conference on Computer Vision, 2013
10. Vijayan, V., Pushpalatha, K.P.: FLANN based matching with SIFT descriptors for drowsy features extraction. In: Fifth International Conference on Image Information Processing (ICIIP), 2019
11. Ratna Bhargavi, V., Rajesh, V.: Exudate detection and feature extraction using active contour model and SIFT in color fundus images. ARPN J. Eng. Appl. Sci. 10(6) (2015)
12. Routray, S., Ray, A.K., Mishra, C.: Analysis of various image feature extraction methods against noisy image: SIFT, SURF and HOG. In: Proceedings of the 2017 2nd IEEE International Conference on Electrical, Computer and Communication Technologies, ICECCT 2017
13. Patil, J.S., Pradeepini, G.: SIFT: a comprehensive. Int. J. Recent Technol. Eng. (2019)


14. Koteswara Rao, L., Rohni, P., Narayana, M.: Lemp: a robust image feature descriptor for retrieval applications. Int. J. Eng. Adv. Technol. (2019)
15. Pande, S.D., Chetty, M.S.R.: Linear Bezier curve geometrical feature descriptor for image recognition. Recent Adv. Comput. Sci. Commun. (2020)
16. Bulli Babu, R., Vanitha, V., Sai Anish, K.: Content based image retrieval using color, texture, shape and active re-ranking method. Indian J. Sci. Technol. (2016)
17. Rao, L.J., Neelakanteswar, P., Ramkumar, M., Krishna, A., Basha, C.Z.: An effective bone fracture detection using bag-of-visual-words with the features extracted from SIFT. In: Proceedings of the International Conference on Electronics and Sustainable Communication Systems, ICESC 2020
18. Somaraj, S., Hussain, M.A.: A novel image encryption technique using RGB pixel displacement for color images. In: Proceedings—6th International Advanced Computing Conference, IACC 2016
19. Bhavana, D., Rajesh, V., Kumar, K.K.: Implementation of plateau histogram equalization technique on thermal images. Indian J. Sci. Technol. (2016)
20. Gattim, N.K., Rajesh, V.: Rotation and scale invariant feature extraction for MRI brain images. J. Theor. App. Inf. Technol. (2014)
21. Vishal, P., Snigdha, L.K., Bano, S.: An efficient face recognition system using local binary pattern. Int. J. Recent Technol. Eng. (IJRTE) ISSN: 2277-3878, 7(5S4) (2019)
22. Nikhitha, P., Mohana Sarvani, P., Lakshmi Gayathri, K., Parasa, D., Bano, S., Yedukondalu, G.: Detection of tomatoes using artificial intelligence implementing Haar Cascade technique. In: Bindhu, V., Chen, J., Tavares, J. (eds.) International Conference on Communication, Computing and Electronics Systems. Lecture Notes in Electrical Engineering, vol. 637. Springer, Singapore (2020)

A Secure Key Agreement Framework for Cloud Computing Using ECC Adesh Kumari, M. Yahya Abbasi, and Mansaf Alam

Abstract Cloud computing has the ability to share data via public channels. A digital identity is needed for clients to use cloud facilities in cloud computing. Presently, public-key and asymmetric cryptography are used in most cloud-based systems to provide safe communication. Due to its characteristics, ID-based cryptography is very important in the cloud domain. Our framework has three phases: initialization, user registration, and authentication and key agreement. In our framework, the user and the server communicate with each other and establish key agreement. We have also compared our work with other works in the same domain and found that our work is more secure than the others. Keywords Authentication · Cloud computing · Elliptic curve cryptography · Security and privacy

1 Introduction Cloud computing is an internet-assisted and speedily growing technology. End-users do not have an idea about where data are processed and stored in cloud computing; they can fetch, process, and then store data in the cloud database. If they have an internet connection, they can use the data anywhere and anytime. This technology is very flexible. These technologies are divided into services such as Platform as a Service, Infrastructure as a Service, Integration, Mobile Backend as a Service, Blockchain as a Service, Function as a Service, Software as a Service, and serverless computing [1, 2].

A. Kumari (B) · M. Yahya Abbasi Department of Mathematics, Jamia Millia Islamia, New Delhi 110025, India M. Yahya Abbasi e-mail: [email protected] M. Alam Department of Computer Science, Jamia Millia Islamia, New Delhi 110025, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_46


In a cloud environment, it is very important for any two parties to authenticate each other when they use cloud facilities. As soon as a user registers himself with his data, the service provider issues him an identity, and then, during the generation of the session key, they authenticate each other. In 2009, an ECC-based ID-assisted authentication protocol for mobile devices was framed by Yang et al. [3]. In 2011, an ECC-assisted authentication protocol for the cloud environment was proposed by Chen et al. [4]. A client authentication protocol for cloud systems was presented by Chang and Choi [5]. Identity-based key agreement frameworks for the cloud environment were presented by Khan et al. [6] and Mishra et al. [7], which show identity-based safety and authenticity for the cloud against different attacks. An ECC-based authentication protocol for cloud computing was presented by Luo et al. [8] in 2017. Two cloud-based authentication and key agreement protocols for the telecare medical information system were presented by Kumar et al. in 2018 [9, 10]. In 2019, an efficient and secure three-factor key agreement scheme for IoT in the cloud environment was presented by Yu et al. [11]. A lightweight key agreement and authentication scheme based on dynamic pseudonym identity without verification tables was presented by Xue et al. [12] for multi-server architecture. Also, a lightweight authentication protocol for IoT-enabled devices in a distributed cloud environment was presented by Amin et al. [13] in 2018. In 2019, Zhou et al. [14] presented a key agreement scheme for cloud computing using IoT techniques.

1.1 Motivation and Contribution In this framework, we aim to propose a mutual authentication scheme for the cloud environment. Our protocol has many security attributes and features. Our work has the following characteristics: • Mutual authentication is established between the user and the cloud server, and they agree on a session key. • Our protocol is strong against many security attacks. • The proposed work is secure and efficient compared to other related works.

1.2 Organization of the Paper The rest of the paper is organized as follows: Sect. 2 presents the preliminaries, Sect. 3 the proposed framework, and Sect. 4 the security analysis, followed finally by the conclusion.

2 Preliminaries We use notations/symbols as indicated in Table 1.

Table 1 Notations

Notation | Description
ECC | Elliptic curve cryptography
ID_i | The unique identity of user i
E(F_q) | Elliptic curve E over F_q
sk_i | The session key generated by entity i
U | The user
S | Cloud server
G | ECC-based additive group
g | Generator of G
E | Adversary
∥ | Concatenation operation
⊕ | XOR operation

2.1 Background of Elliptic Curve Group Let F_q be a prime finite field with a large prime number q. E denotes an elliptic curve over F_q given by the equation v^2 = u^3 + cu + d mod q, where c, d ∈ F_q. E is said to be nonsingular if 4c^3 + 27d^2 mod q ≠ 0. The additive elliptic curve group is construed as G = {(u, v) : u, v ∈ F_q; (u, v) ∈ E} ∪ {Ω}, where the point Ω is the zero element of G. If N = (u_P, v_P) ∈ G and M = (u_Q, v_Q) ∈ G with M ≠ N, then N + M = (u_i, v_i), where u_i = μ^2 − u_P − u_Q mod q, v_i = μ(u_P − u_i) − v_P mod q, and μ = (v_Q − v_P)/(u_Q − u_P). The scalar multiplication on E is explained as tN = N + N + ⋯ + N (t times). More information on ECC is given in [15]. We use the ECC-based hard problems, the Elliptic Curve Discrete Logarithm Problem (ECDLP) [16] and the Elliptic Curve Computational Diffie–Hellman Problem (ECCDHP) [17].
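A small Python sketch of the chord-and-tangent addition and the repeated-addition scalar multiplication described above is given below; the curve parameters and base point are toy values for illustration only, the zero element Ω is not handled, and Python 3.8+ is assumed for modular inverses via pow.

```python
def ec_add(N, M, c, q):
    """Affine point addition on v^2 = u^3 + c*u + d (mod q).
    Generic cases only: the zero element (point at infinity) is not handled."""
    (uP, vP), (uQ, vQ) = N, M
    if N != M:
        mu = (vQ - vP) * pow(uQ - uP, -1, q) % q          # slope of the chord
    else:
        mu = (3 * uP * uP + c) * pow(2 * vP, -1, q) % q   # slope of the tangent
    ui = (mu * mu - uP - uQ) % q
    vi = (mu * (uP - ui) - vP) % q
    return (ui, vi)

def ec_mul(t, N, c, q):
    """Scalar multiplication tN = N + N + ... + N (t times) by repeated addition."""
    R = N
    for _ in range(t - 1):
        R = ec_add(R, N, c, q)
    return R

# Toy example on v^2 = u^3 + 2u + 2 (mod 17); the point (5, 1) lies on this curve.
print(ec_mul(3, (5, 1), 2, 17))   # -> (10, 6)
```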

3 The Proposed Framework The proposed protocol consists of the following phases:

3.1 Initialization Phase The cloud server S chooses the elliptic curve E_q(c, d) over the prime finite field F_q defined as v^2 = u^3 + cu + d mod q, where c, d ∈ F_q with 4c^3 + 27d^2 mod q ≠ 0. S also chooses a hash function and publishes the parameters.


Flowchart 1 User registration phase

3.2 User Registration Phase In this phase, U registers himself/herself with S as follows: Step 1: Firstly, U selects his identity ID_U and password PW_U. Then, U generates u ∈ Z_q^*, computes U_1 = h(PW_U ∥ ID_U) ⊕ u, and sends {ID_U, U_1, PW_U} to S. Step 2: After receiving {ID_U, U_1, PW_U}, S generates a serial number SR_U. Then, S calculates U_2 = h(SR_U ∥ PW_U ∥ ID_U) ⊕ U_1 and stores {U_2, SR_U} in its database corresponding to ID_U. Then, S sends the message {U_2, SR_U} to U. Step 3: On receiving {U_2, SR_U}, U stores this message in his database. Flowchart 1 presents this phase mathematically.
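A minimal Python sketch of these registration computations is shown below, assuming h is SHA-256 and the XOR acts on the 32-byte digests; the identity, password, and serial number values are placeholders.

```python
import hashlib
import secrets

def h(*parts):
    """SHA-256 standing in for the protocol's hash function h (an assumption)."""
    return hashlib.sha256("||".join(parts).encode()).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

ID_U, PW_U = "user-1", "secret-pw"       # placeholder credentials
u = secrets.token_bytes(32)              # U's random value u
U1 = xor(h(PW_U, ID_U), u)               # U1 = h(PW_U || ID_U) XOR u
# ... U sends {ID_U, U1, PW_U} to S, which issues a serial number SR_U ...
SR_U = "SN-0001"                         # placeholder serial number
U2 = xor(h(SR_U, PW_U, ID_U), U1)        # U2 = h(SR_U || PW_U || ID_U) XOR U1
# S stores {U2, SR_U} against ID_U and returns them to U
```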

3.3 Authentication and Key Agreement Phase

Step 1: Firstly, U inputs identity ID_U and password PW_U. Then, U computes U_1 = h(PW_U ∥ ID_U) ⊕ u and verifies whether U_2^* =? h(SR_U ∥ PW_U ∥ ID_U) ⊕ U_1. Then, U generates x ∈ Z_q^*, calculates A = xg and H_1 = h(ID_U ∥ A ∥ PW_U), and sends the message {H_1, A, T_1} to S.
Step 2: After receiving {H_1, A, T_1} from U, S first verifies whether T_2 − T_1 ≤ ΔT. If this condition is satisfied, S computes H_1^* = h(ID_U ∥ A ∥ PW_U) and verifies H_1^* =? H_1. Then, S generates y ∈ Z_q^* and calculates B = yg, H_2 = h(B ∥ T_1 ∥ A), and SK_S = h(yA ∥ H_2 ∥ T_2 ∥ T_1). After this, S sends the message {H_2, B, T_2} to U (Flowchart 2).
Step 3: On getting {H_2, B, T_2}, U verifies T_3 − T_2 ≤ ΔT. If the condition is satisfied, U computes H_2^* = h(B ∥ T_1 ∥ A) and checks whether H_2^* =? H_2. If the condition holds, U calculates SK_U = h(xB ∥ H_2^* ∥ T_2 ∥ T_1).


Flowchart 2 Authentication and key agreement phase

Hence, the session key is SK_U = SK_S = SK. After the establishment of the session key, both U and S can communicate with each other safely over the public channel.
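The message flow above can be sketched in Python as follows; for brevity, classic Diffie–Hellman over Z_p* stands in for the EC scalar multiplications (A = xg becomes A = g^x mod p, a plainly named substitution), and the parameters, credentials, and encodings are toy assumptions.

```python
import hashlib
import secrets
import time

p, g = 2**127 - 1, 5   # toy public parameters, not secure choices

def h(*parts):
    return hashlib.sha256("||".join(str(x) for x in parts).encode()).hexdigest()

ID_U, PW_U = "user-1", "secret-pw"       # placeholder credentials

# Step 1: U picks x, forms A and H1, and sends {H1, A, T1}
x = secrets.randbelow(p - 2) + 1
A, T1 = pow(g, x, p), time.time()
H1 = h(ID_U, A, PW_U)

# Step 2: S checks T1 freshness and H1, picks y, forms B and H2, derives its key
y = secrets.randbelow(p - 2) + 1
B, T2 = pow(g, y, p), time.time()
H2 = h(B, T1, A)
SK_S = h(pow(A, y, p), H2, T2, T1)       # h(yA || H2 || T2 || T1)

# Step 3: U checks T2 freshness and H2, then derives the same key
assert h(B, T1, A) == H2
SK_U = h(pow(B, x, p), H2, T2, T1)       # h(xB || H2 || T2 || T1)

assert SK_U == SK_S                      # both sides now share SK
```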

4 Security Analysis In this section, we explain the security features of our framework as below.

4.1 Session Key Security In the authentication and key agreement phase (AKAP) of our framework, the session key is calculated by U and S as follows: the session key is calculated by S as SK_S = h(yA ∥ H_2 ∥ T_2 ∥ T_1)


and by U as SK_U = h(xB ∥ H_2^* ∥ T_2 ∥ T_1), where H_2 = h(B ∥ T_1 ∥ A) and H_2^* = h(B ∥ T_1 ∥ A). H_2 and H_2^* cannot be computed, as the hash function is secure. So, the session keys SK_S and SK_U cannot be computed by any adversary E. Hence, the session key is protected in this framework.

4.2 Message Authentication In this framework, the message authentication property is verified by U and S as below: • When S receives the message {H_1, A, T_1}, he checks T_2 − T_1 ≤ ΔT and H_1^* =? H_1. • When U receives the message {H_2, B, T_2}, he checks T_3 − T_2 ≤ ΔT and H_2^* =? H_2. If any attacker tries to change the information transmitted between U and S, then U and S can recognize it. Hence, our protocol is safe with respect to message authentication.

4.3 Replay Attack We have used timestamps and random numbers in this work, which are common counter-measures against replay attacks. Hence, this attack is not possible in our protocol.

4.4 Man in the Middle Attack In AKAP, when U and S authenticate each other, they do not transmit any secret information to a third party. If the third party is an attacker, he can attempt a man-in-the-middle attack by transmitting wrong information. However, when U and S authenticate each other, they exchange H_2^*/H_2, and computing H_2^*/H_2 requires information that remains secret, so SK is safe.

4.5 Known Session-Specific Attack In this framework, the session key SK_U = SK_S = SK executed by U and S depends on yA, xB, H_2, H_2^*, T_1 and T_2. These values depend on the timestamps, the hash function, and the random values x and y. As these values are safe, our protocol is safe against this attack.

Table 2 Performance analysis

Protocol | Computation cost | Communication cost in bits
Yu et al. [11] | 34T_H | 2176
Xue et al. [12] | 36T_H | 3840
Amin et al. [13] | 30T_H | 3456
Zhou et al. [14] | 43T_H | 4552
Proposed | 8T_H | 768

T_H: Cost of cryptographic hash function

4.6 Key Freshness In our protocol, there is a fresh key for each session. So, the condition of key freshness for each session holds in our protocol.

4.7 Known Key Secrecy In AKAP, U calculates A and S calculates B. Here, yA = xB = xyg, with the random numbers x and y taken from Z_q^*. Hence, our framework maintains the property of known key secrecy.

5 Performance Analysis In this section, we adopt Yu et al.'s [11] cloud computing model for the computation experiment. We compare the computation and communication costs of existing protocols, namely Yu et al. [11], Xue et al. [12], Amin et al. [13], and Zhou et al. [14]. Table 2 presents the comparison of the computation and communication costs of this work and the related work in the cloud environment.

6 Conclusion and Future Direction In this paper, we have proposed an efficient and secure authentication and key agreement protocol for cloud computing. The proposed scheme satisfies many attributes and features, such as message authentication, key freshness, known key secrecy, session key security, and resistance to the man-in-the-middle attack, the known session-specific attack, and the replay attack. Further, we have shown that our protocol is secure over a public channel.


It also has real-life applications: smartcard-based authentication protocols are useful in different areas with different applications in network systems.

References

1. Dinesha, H., Agrawal, V.K.: Multi-level authentication technique for accessing cloud services. In: 2012 International Conference on Computing, Communication and Applications (ICCCA), pp. 1–4. IEEE (2012)
2. Jing, X., Jian-Jun, Z.: A brief survey on the security model of cloud computing. In: 2010 Ninth International Symposium on Distributed Computing and Applications to Business Engineering and Science (DCABES), pp. 475–478. IEEE (2010)
3. Yang, J.-H., Chang, C.-C.: An id-based remote mutual authentication with key agreement scheme for mobile devices on elliptic curve cryptosystem. Comput. Secur. 28(3–4), 138–143 (2009)
4. Zhang, Q., Cheng, L., Boutaba, R.: Cloud computing: state-of-the-art and research challenges. J. Internet Serv. Appl. 1(1), 7–18 (2010)
5. Chang, H., Choi, E.: User authentication in cloud computing. In: International Conference on Ubiquitous Computing and Multimedia Applications, pp. 338–342. Springer (2011); Chen, T.-H., Yeh, H.-L., Shih, W.-K.: An advanced ECC dynamic ID-based remote mutual authentication scheme for cloud computing. In: 2011 5th FTRA International Conference on Multimedia and Ubiquitous Engineering (MUE), pp. 155–159. IEEE (2011)
6. Khan, A.A., Kumar, V., Ahmad, M.: An elliptic curve cryptography based mutual authentication scheme for smart grid communications using biometric approach. J. King Saud Univ.-Comput. Inf. Sci. (2019). https://doi.org/10.1016/j.jksuci.2019.04.013
7. Mishra, D., Kumar, V., Mukhopadhyay, S.: A pairing-free identity based authentication framework for cloud computing. In: International Conference on Network and System Security, pp. 721–727. Springer (2013)
8. Luo, M., Zhang, Y., Khan, M.K., He, D.: A secure and efficient identity-based mutual authentication scheme with smart card using elliptic curve cryptography. Int. J. Commun. Syst. 30(16) (2017)
9. Kumar, V., Jangirala, S., Ahmad, M.: An efficient mutual authentication framework for healthcare system in cloud computing. J. Med. Syst. 42(8), 142 (2018)
10. Kumar, V., Ahmad, M., Kumari, A.: A secure elliptic curve cryptography based mutual authentication protocol for cloud-assisted TMIS. Telematics Inform. 38, 100–117 (2019)
11. Yu, S.J., Park, K.S., Park, Y.H.: A secure lightweight three-factor authentication scheme for IoT in cloud computing environment. Sensors, 3598 (2019)
12. Xue, K., Hong, P., Ma, C.: A lightweight dynamic pseudonym identity based authentication and key agreement protocol without verification tables for multi-server architecture. J. Comput. Syst. Sci. 80(1), 195–206 (2014)
13. Amin, R., Kumar, N., Biswas, G.P., Iqbal, R., Chang, V.: A light weight authentication protocol for IoT-enabled devices in distributed cloud computing environment. Future Gener. Comput. Syst. 78, 1005–1019 (2018)
14. Zhou, L., Li, X., Yeh, K.H., Su, C., Chiu, W.: Lightweight IoT-based authentication scheme in cloud computing circumstance. Future Gener. Comput. Syst. 91, 244–251 (2019)
15. Kumar, V., Ahmad, M., Kumar, P.: An identity-based authentication framework for big data security. In: Proceedings of 2nd International Conference on Communication, Computing and Networking, pp. 63–71 (2019)
16. Hankerson, D., Menezes, A.J., Vanstone, S.: Guide to Elliptic Curve Cryptography. Springer-Verlag Professional Computing (2004)
17. Nayak, S.K., Mohapatra, S., Majhi, B.: An improved mutual authentication framework for cloud computing. Int. J. Comput. Appl. 52(5) (2012)

Web-Based Application for Freelance Tailor Diana Teresia Spits Warnars, Muhammad Lutfan Nugraha, and Harco Leslie Hendric Spits Warnars

Abstract The sewing business is sometimes profitable and sometimes not, because rivals are increasing and their marketing tactics are not known. This is very problematic, particularly for freelance tailors with limited capital, known as small–medium enterprises (SMEs), given their managerial and marketing deficiencies. Their sewing skills are certainly no less competitive but need continuous assistance. A marketplace can unite freelance tailors with the community to share their sewing expertise and increase their income. In addition, this web-based program can assist in arranging good-quality fashion for low-to-middle-budget consumers. Keywords Information systems · Web-based application · Modeling systems · Unified modeling language · Store application systems

1 Introduction

Technology can now be used anywhere and at any time. As of December 25, 2018, this observation led to the idea of building our Vesture D'A application, a company in the textile industry that facilitates the search, manufacture, transaction, and design of ready-made clothing [1]. In the system created, users must initially create an account through the register menu to become a member. Users can choose to be customers or to be our partners as tailors. A customer who wishes to use the Vesture D'A application service can access the two menus offered. On the clothes menu, customers can choose the clothes offered by partners (tailors affiliated with us), while on the clothing menu customers can choose and share their desired designs.

D. T. S. Warnars · M. L. Nugraha
Information Systems Department, School of Information Systems, Bina Nusantara University, Jakarta, Indonesia 11480

H. L. H. S. Warnars (B)
Computer Science Department, BINUS Graduate Program—Doctor of Computer Science, Bina Nusantara University, Jakarta, Indonesia 11480
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_47




With the design options, customers are given the choice of combining designs from head to toe. After a customer chooses, they must go through a down-payment process to prevent fake orders or hit-and-run behavior. Once payment is completed, one partner (tailor) who accepts the customer's order will come to the place specified by the customer for fitting and material selection, which is then followed by the customer's final payment [2]. An e-marketplace is an Internet-based (web-based) online medium for conducting business activities and transactions between buyers and sellers. With the desired guidance, purchasers can identify as many vendors as possible so that purchases match consumer rates. From this definition, it can be concluded that a marketplace is a place that brings together many sellers and buyers to transact with one another in an electronic marketing container [3].

2 Previous Research

In earlier work, a use case business analysis and an overview of the current system in the steel manufacturing industry described the existing situation; the data were collected from the system and from individuals using user requirement tools such as interviews and observation [4]. Data gathering is used to recognize the current problem as knowledge acquisition. A use case diagram and a class diagram are used in the method, where the use case diagram displays the proposed model design while the class diagram shows the relationships of all the data used in the proposed system [5, 6]. Information technology growth has become increasingly rapid and a fascinating subject to explore [7]. The use of information technology can provide convenience in processing and disseminating information in real time [8]. Indirectly, the development of information technology has also helped to raise various trends, one of which is the fashion trend in the community. The advancement of technology and the flow of information make people more open to global knowledge. It is evident in the growth of the fashion world that fashion trends in Indonesia are heavily influenced by Western culture (weebly.com) [9]. Based on data taken from gbgindonesia.com, according to data from BPS (2013), the number of companies engaged in the fashion sector reached 1,107,955 units; about 10% of them are large companies, 20% are medium-sized companies, and 70% are small companies (kompasiana.com) [10]. Substantially, modes of production are divided according to the type of process: industrial, traditional, made-to-order, or ready-to-wear [11]. The terms made-to-order and ready-to-wear apply only to clothing, accessories, and footwear products:

(a) Made-to-order is a type of process for making fashion products according to orders, either from individuals or groups, or in other words done for private clients [12, 13].
(b) Ready-to-wear is the process of making fashion products based on standard/general sizes, and their products are marketed as ready-to-use products [14, 15].

The activities in the process of selling fashion products wholesale or retail, supported by sales associations, constitute e-commerce [4]. E-commerce/online selling is one of the media for making sales by relying on information technology. E-commerce sales activities have become a new alternative for both sellers and buyers because this method facilitates purchase transactions that can be done anywhere [5]. The payment process is carried out using provisioning methods such as a transfer via ATM/e-banking. E-commerce has thus become one of the strongest avenues of sales for capturing the target market of tech-savvy young people [6]. The people of Indonesia, moreover, have built a wealth of fashion attractions, especially in Southeast Asia, characterized by the existence of both traditional and modern fashion shopping centers, such as the Tanah Abang market in Jakarta, the Pekalongan Batik market, and shopping centers spread across various big cities in Indonesia [16, 17]. Along with the times, products can be purchased in different ways, one of which is e-commerce [18].

3 The Proposed Model

In this project, three steps were implemented: analysis, design, and implementation. The first step, analysis, was performed using user requirement tools such as a literature review, interviews, and observation. The literature review covered research papers with ideas similar to freelancing, particularly for tailors, and the current government regulations regarding freelancing, tailoring, and marketplaces were examined. In addition, interviews were performed face-to-face with current freelance tailors about their job and income and about how technology could help boost their revenue. Observation was also carried out by looking at the current operations a tailor performs to handle transactions with clients. The second step was modeling the business process using a use case diagram as shown in Fig. 1, where the use case diagram is part of the Unified Modeling Language (UML), which has been used to model and develop the systems. The last step is implementation, where the application was built as a web-based program using PHP for server-side programming and MySQL for database transactions. Figure 1 shows the use case diagram of the proposed system, where the actors, user and freelance tailor, join the system through the design-your-own feature, which matches the user's expectations with the freelance tailor's skills. Both actors must register first, as shown in the user interface (UI) figures below, and then log in with the email and password created in the registration process. The user registration UI can be seen in Fig. 4.

Fig. 1 Use case diagram of the proposed systems (actors: User and Freelance Tailor; use cases: Registration, Login, Order, Pay order, Deliver order, Return order, Give Rating, Give Feedback, Design your own, Partner registration, Forum, and Withdraw)

Freelance tailor registration appears in the use case diagram as the partner registration use case, and its UI can be seen in Fig. 9; the login UI can be seen in Fig. 5. The customer can place an order, and after purchase the user can send a rating and/or give feedback to the order system. They must pay for their order and its delivery; as an alternative, the user can physically come to the office, pay for the product, and carry the product home on their own. In the meantime, the user can also order their own fashion style, which links them to a freelance tailor and is shown in Fig. 1 as the design-your-own use case. In addition, if the order does not fulfill the user's needs, they can return their order; the system accepts a return without any penalty for 7 days, and Fig. 8 shows the return order UI. A forum is also provided to create contact between the user and the freelance tailor, where the user can raise concerns about any tailoring issues and any details related to the making of clothes as a thread that begins a conversation in the forum; everyone can respond to this thread by giving ideas and comments related to the subject. The withdraw use case is invoked whenever the tailor wants to withdraw their income from the system. Figure 2 shows the class diagram of the proposed system; this class diagram represents the database model design, and its 14 classes correspond to 14 database tables: thread, design, tailor, withdraw, user, order, buy, product, reply, feedback, rating, retorder, payment, and delivery.



Fig. 2 Class diagram of the proposed systems

Based on the use case diagram in Fig. 1, when a user places an order, the tables user, order, buy, and product are involved. When the user invokes the design-your-own use case, the tables user, design, and tailor are involved. Whenever users want to return their order, the tables user, order, buy, product, and retorder are involved. When a user pays for their order, the tables user, order, buy, product, and payment are involved, and when the order is delivered to the user, the tables user, order, buy, product, and delivery are involved. After the product is delivered and the user receives it, they can give a rating, which is saved in the rating table, and their feedback, which is saved in the feedback table. Whenever the tailor's income is sufficient, on a monthly basis, they can withdraw their income from the system, which involves the tables user, design, tailor, withdraw, order, buy, and product. Moreover, the thread and reply tables support the forum in the system, where a user can raise a question regarding a product or anything in the fashion or clothing industry and others can give their replies. Figure 3 shows the main menu of the application; since this application runs in Indonesian territory and accepts only Indonesian customers, the presentation is delivered in Bahasa Indonesia. New users must first register, as shown in Fig. 4, where their data such as name, date of birth, gender, address, email address, home phone number, and mobile number must be entered; they can optionally add a profile picture.
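To make the class-to-table mapping concrete, the sketch below creates two of the fourteen tables in SQLite via Python. This is illustrative only: the paper does not list column names, so every column here is an assumption, and the actual system uses PHP and MySQL.

```python
import sqlite3

conn = sqlite3.connect("vesture_da.db")  # hypothetical database file
conn.executescript("""
CREATE TABLE IF NOT EXISTS user (
    user_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    email     TEXT UNIQUE NOT NULL,   -- also serves as the login id
    password  TEXT NOT NULL
);
-- "order" is quoted because it is a reserved SQL keyword
CREATE TABLE IF NOT EXISTS "order" (
    order_id  INTEGER PRIMARY KEY,
    user_id   INTEGER REFERENCES user(user_id),
    status    TEXT                    -- e.g. ordered, paid, delivered, returned
);
""")
conn.commit()
```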



Fig. 3 Main menu of the application

After filling in the required data, the user pushes the register button and the system checks the user table in the database to see whether the email has already been registered; if it has, registration is denied and an option is offered, in case the user has forgotten their password, to restore the password, which is sent to the registered email. If the registered email cannot be found in the user table during the registration process shown in Fig. 4, a confirmation is sent to the user's email address with a link; the user clicks the link to confirm that they are using that email address for registration in the web-based application. The link also asks the new user to enter their password and a password confirmation with a minimum length of six characters; if the user agrees, the password is delivered to the system, which looks up the user table by email address and saves the password for future login transactions.
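The registration check described above can be outlined as follows. This is a hypothetical Python sketch of the logic only (the actual implementation is in PHP and is not shown in the paper); the function names, the token scheme, and the e-mail step are our assumptions.

```python
import secrets
import sqlite3

def register(conn: sqlite3.Connection, email: str) -> str:
    """Deny registration for known e-mails, else issue a confirmation token."""
    row = conn.execute("SELECT 1 FROM user WHERE email = ?", (email,)).fetchone()
    if row is not None:
        # Registration denied; the system instead offers to restore the
        # password by sending it to the registered e-mail address.
        return "denied"
    token = secrets.token_urlsafe(16)   # payload of the confirmation link
    # ... e-mail the confirmation link containing this token to the user ...
    return token

def passwords_ok(password: str, confirmation: str) -> bool:
    """Enforce the six-character minimum and matching confirmation."""
    return len(password) >= 6 and password == confirmation
```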



Fig. 4 Registration menu

Figure 5 shows the login menu, where the user or freelance tailor enters the system by entering their email address as their user id and the password created in the registration process. The system checks the user id and password against the user table for users and the freelance table for freelance tailors. If the email address is registered, the password is checked against the one saved in the record. If the user id is not registered, a message confirms that the user id is not registered as an email address and offers an alternative: if they are a new user, they can register as one. In addition, if the password does not match the saved password in the record, a message asks whether the user has forgotten their password and needs it sent again to the registered email address recognized as their user id.



Fig. 5 Login menu

If the user has a recognized user id and password, they can continue to other transactions such as the order menu or designing their own fashion style. Figure 6 shows the order menu, where the user buys from the available fashion items; after choosing an item, the user selects a payment method such as bank transfer, cash on delivery, or credit card, and Fig. 6 also shows the summary of the order. Meanwhile, Fig. 7 shows how the user designs their fashion style based on two options: choosing an available design or creating their own design. In this case, the customized order is matched with an available freelance tailor, and the user can choose the freelance tailor most suitable for them.



Fig. 6 Order menu

For further communication, they can use another communication platform such as WhatsApp. Whenever the customer's order does not meet their wishes, the user has the right to return the order within 7 days, as provided by the store. Each delivery by the store is tracked and recorded using pictures for confirmation to the customer and to check whether the order was wrongly sent or damaged. Figure 8 shows the process of returning an order, where the user enters the product id and other information. Figure 9 shows the freelance tailor's partner registration menu, where they enter their data such as name, date of birth, gender, address, login email address, home phone number, mobile number, the type of fashion work they can do, and how long they have been a tailor.


Fig. 7 Design your fashion style menu

Fig. 8 Return menu




Fig. 9 Partner registration menu

Based on the freelance tailor registration, the owner holds a small meeting with the team to determine whether the registered freelance tailor is accepted, inviting them to an interview to assess their tailoring ability. Based on the interview process, the fashion store owner decides whether to accept the newly registered freelance tailor into the team. Meanwhile, Figs. 10, 11, and 12 are reports for the store owner to monitor the store's transactions. Figure 10 shows the order report, which shows how many transactions are running, how many customers they come from, and which are customized orders handled by freelance tailors, including the total payment and the fee for the store. The number of return orders from customers is shown in Fig. 11, which the store owner uses to track orders, identify order issues as soon as possible, and so reduce complaints about user orders.



Fig. 10 Order reporting menu

Figure 12 displays the orders in progress, used to ensure that each order satisfies the needs of the consumer and that outstanding orders are fulfilled, since satisfied users increase potential future orders, whether placed by the user themselves or by new customers the user has recommended.



Fig. 11 Return reporting menu

4 Conclusion

This start-up business plan seeks the right business model to bring consumers and partners together so that, inside Vesture D'A, they can connect with each other, not only using the website to interact through purchasing and selling, but also through an online system where the consumer can easily obtain a fashion design that fits their desires. With this, the admin and partners can increase the profit generated from every transaction that runs on the Vesture D'A website. The approach taken to meet market needs makes the website easier for customers to use, and it is kept usable through the maintenance provided by the Vesture D'A website.



Fig. 12 Order in progress reporting menu

It is also possible to port this program to a mobile application to make it more convenient for the user, or at least to run the application within a mobile app as well. Developing this application will help the tailoring community as a specific marketplace that brings together people who have sewing skills and people who are looking to have their outfits made. This application will increase tailors' income, particularly in the current pandemic situation, where many jobs have been hit by lockdowns or activity reductions during the COVID-19 pandemic.



References

1. Johnson, R., Kent, S.: Designing universal access: web-applications for the elderly and disabled. Cogn. Technol. Work 9(4), 209–218 (2007)
2. Soegoto, E.S., Purwandani, F.A.: Application of IT-based web on online store. IOP Conf. Ser.: Mater. Sci. Eng. 407(1), 012041 (2018)
3. Gnana Singh, D., Leavline, E.J., Kumar, R.P.: Mobile application for general stores. Int. J. Adv. Res. Comput. Sci. 8(5) (2017)
4. Othman, M., Shamsudin, S.N., Abdullah, M.H.A., Yusof, M.M., Mohamed, R.: iBid: a competitive bidding environment for multiscale tailor. JOIV: Int. J. Inform. Visual. 1(4–2), 256–259 (2017)
5. Otaduy, I., Díaz, O.: User acceptance testing for Agile-developed web-based applications: empowering customers through wikis and mind maps. J. Syst. Softw. 133, 212–229 (2017)
6. Athanasiadis, A., Andreopoulou, Z.: E-praxis: a web-based forest law decision support system for land characterization in Greece. Forest Policy Econ. 103, 157–166 (2019)
7. Herikson, R., Kurniati, P.S.: Web-based ordering information system on food store. IOP Conf. Ser.: Mater. Sci. Eng. 662(2), 022010 (2019)
8. Sandouka, A.M.: A hybrid mobile application for an e-commerce store. J. Comput. Signals (JCS) 1(1), 16–23 (2020)
9. Cintya, C., Siahaan, R.F.: Implementation of the client-server system for ordering food and beverages with the Android platform using the waterfall method (Case Study: Maxx Coffee Prima Ap Kualanamu Store). J. Intell. Decis. Support Syst. (IDSS) 3(3), 31–39 (2020)
10. Strzelecki, A.: Google Web and image search visibility data for the online store. Data 4(3), 125 (2019)
11. Roslan, S.N., Hamid, I.R.A., Shamala, P.: E-store management using Bell-LaPadula access control security model. JOIV: Int. J. Inform. Visual. 2(3–2), 194–198 (2018)
12. Phimpraphai, W., Tangkawattana, S., Kasemsuwan, S., Sripa, B.: Social influence in liver fluke transmission: application of social network analysis of food sharing in Thai Isaan culture. Adv. Parasitol. 101, 97–124 (2018)
13. Mofidi, S.S., Pazour, J.A.: When is it beneficial to provide freelance suppliers with choice? A hierarchical approach for peer-to-peer logistics platforms. Transp. Res. Part B: Methodol. 126, 1–23 (2019)
14. Manoharan, S.: A smart image processing algorithm for text recognition, information extraction and vocalization for the visually challenged. J. Innov. Image Process. (JIIP) 1(01), 31–38 (2019)
15. Dhaya, R., Kanthavel, R.: A wireless collision detection on transmission poles through IoT technology. J. Trends Comput. Sci. Smart Technol. (TCSST) 2(03), 165–172 (2020)
16. Krämer, M., Frese, S., Kuijper, A.: Implementing secure applications in smart city clouds using microservices. Future Gener. Comput. Syst. 99, 308–320 (2019)
17. Sultana, R., Im, I., Im, K.S.: Do IT freelancers increase their entrepreneurial behavior and performance by using IT self-efficacy and social capital? Evidence from Bangladesh. Inf. Manag. 56(6), 103133 (2019)
18. Do, Q.A., Bhowmik, T., Bradshaw, G.L.: Capturing creative requirements via requirements reuse: a machine learning-based approach. J. Syst. Softw. 170, 110730 (2020)

Image Retrieval Using Local Majority Intensity Patterns Suresh Kumar Kanaparthi and U. S. N. Raju

Abstract The rapidly growing use of huge image databases is becoming possible with the growth of multimedia technologies. Content-based image retrieval (CBIR) is regarded as an efficient method for carrying out their management and retrieval. This paper describes the benefits of an image retrieval system based on image content and key technologies. To address the shortcoming that conventional methods can use only one feature, this paper proposes a technique for image retrieval by introducing a robust feature descriptor named local majority intensity patterns (LMIP) for texture image retrieval. LMIP encodes the referenced pixel based on the behavior of the majority of its surrounding pixels in the image. The proposed LMIP treats the majorities for odd- and even-position pixels separately. Experimental results demonstrate that the proposed LMIP descriptor achieves better recognition accuracy than existing methods while consuming less computation time.

Keywords Content-based image retrieval · Color · Texture · Feature extraction · Local ternary pattern · Local binary pattern

1 Introduction

Content-based image retrieval (CBIR) systems are considered an integral tool for large image databases, offering high accuracy and quick search speed. The main goal is to retrieve the most similar images, ranked by relevance. Current CBIR systems have two main modules [1] to achieve the following objectives: (1) the image description module, which aims to describe the image contents with discriminative and descriptive features, and (2) the image retrieval module, which compares the query image descriptor to archived images with high precision and returns the most similar images in a computationally efficient manner.

S. K. Kanaparthi (B) · U. S. N. Raju
Department of Computer Science and Engineering, National Institute of Technology Warangal, Warangal 506004, India

U. S. N. Raju
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_48




Basically, there are two types of image retrieval systems: text-based and content-based. The text-based approach retrieves images based on keywords, also referred to as text, while the content-based method retrieves images based on attributes of the image content. Text-based systems are simple to design and very common, but content-based systems retrieve visually similar images and offer much better results. Moreover, annotating a large number of images with keywords is a manual job. In a CBIR system, there is no need for manual annotation, and it yields images that are visually similar according to the interest of the user [2]. The method of effectively retrieving similar images from the database is called CBIR. Besides the content of the image, i.e., color, shape, and texture, CBIR involves matching, querying, indexing, and searching. Since keyword-based retrieval depends on the effectiveness and completeness of the annotation, CBIR is important for today's applications. There are mainly two stages in CBIR: feature extraction and feature matching. First, image features are extracted, and the second step matches those features. CBIR produces organized information through visual features to efficiently browse, search, retrieve, and quickly find related images, thereby increasing the use of images for entertainment, education, and commercial purposes. The goals of CBIR are to mitigate the semantic gap between high-level and low-level features, thereby reducing computational time and achieving user satisfaction. In Fig. 1, a query image is compared with the image database based on a set of features such as color, shape, texture, and other image features. This process retrieves similar images from the database.

Fig. 1 CBIR architecture



Classification of images: train the model on a training set (dataset) and then test using a dataset that is disjoint from the training set (most important) [3]. Image retrieval: given the query image, get the "nearest" image from the database to the query image [4]. Here, "nearest" can be defined in terms of color, shape, texture, etc.; what decides "nearest" is the feature vector of the image, computed according to an algorithm designed to suit user needs. The main difference between classification and retrieval is that classification requires labels for training data but retrieval does not. Figure 1 shows the CBIR architecture. CBIR can be used in many real-life applications such as architectural and engineering design, art collections, medical diagnosis, defense, crime prevention, geographical and remote sensing systems, face recognition, photograph archives, retail catalogs, and the textile industry.
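In code, the retrieval step reduces to a nearest-neighbor search over precomputed feature vectors. The sketch below is illustrative (names are ours) and uses the Euclidean distance that this paper adopts later in Sect. 4.1:

```python
import numpy as np

def retrieve(query_vec, db_vecs, top_k=10):
    """Indices of the top_k database images nearest to the query.

    db_vecs is an (n_images, n_features) array of descriptors computed
    offline; the query descriptor is computed with the same algorithm.
    """
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)  # Euclidean distance
    return np.argsort(dists)[:top_k]                     # closest first
```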

2 Related Study

Local binary pattern (LBP) features have evolved as a cornerstone in the area of texture classification and retrieval. Ojala et al. proposed LBPs [5], which were later converted into rotation-invariant versions [6, 7] for texture classification. To handle varying lighting conditions, the local ternary pattern (LTP) [8] was introduced for face recognition. Building on the local ternary pattern (LTP) and using local majority intensity patterns for even and odd majority positions, we propose an algorithm for texture image retrieval.

2.1 Local Binary Pattern

The LBP operator was employed for texture classification by Ojala et al. [5]. Given a center pixel, the LBP value is calculated by comparing its gray value with those of its surrounding neighbors:

LBP_{P,R} = \sum_{p=1}^{P} 2^{(p-1)} \times f_1(g_p - g_c)   (1)

f_1(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{else} \end{cases}   (2)

An example thresholded 3 × 3 neighborhood (center omitted) is

1 1 1
1 · 0
0 0 0



The output of the LBP is 1 1 0 0 0 0 1 1 = 195, which is the feature descriptor value, where g_p is the gray value of a neighbor, g_c is the gray value of the center pixel, and P is the number of neighboring pixels (P = 8).
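As a concrete reading of Eqs. (1) and (2), the following minimal Python sketch computes the LBP code of the center pixel of a 3 × 3 patch. The clockwise neighbor ordering is an assumption on our part, since the paper does not state one:

```python
import numpy as np

def lbp_code(patch):
    """LBP of the center pixel of a 3x3 patch, per Eqs. (1)-(2)."""
    gc = patch[1, 1]
    # g1..g8: neighbors read clockwise from the top-left corner (assumed order)
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for p, gp in enumerate(neighbors, start=1):
        bit = 1 if gp >= gc else 0        # f1(gp - gc), Eq. (2)
        code += bit * 2 ** (p - 1)        # weight 2^(p-1), Eq. (1)
    return code
```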

2.2 Local Ternary Pattern

Local ternary patterns (LTPs) were introduced by Tan et al. [8]. In LTP, gray values in a zone of width ±t around g_c are quantized to zero, those above (g_c + t) are quantized to +1, and those below (g_c − t) are quantized to −1, as shown in Eq. (3). The LBP is thus extended to 3-valued codes: the indicator function f(x) is substituted with a 3-valued function, and the binary LBP code is substituted by a ternary LTP code.

f(x) = \begin{cases} +1, & x \ge g_c + t \\ 0, & |x - g_c| < t \\ -1, & x \le g_c - t \end{cases}, \quad x = g_p   (3)

Here, t (= 5 in our experiments) is a user-specified threshold; LTP codes, while not strictly invariant to gray-level transformations, are more noise resistant. As a worked example, take g_c = 54 and t = 5, so the zone is [54 − t, 54 + t] = [49, 59]:

g1 = 78 ≥ 59 → +1, g2 = 99 ≥ 59 → +1, g3: |50 − 54| ≤ 5 → 0, g4: |49 − 54| ≤ 5 → 0,
g5 = 13 ≤ 49 → −1, g6 = 12 ≤ 49 → −1, g7: |57 − 54| ≤ 5 → 0, g8: |54 − 54| ≤ 5 → 0.

The output of the LTP is therefore 1 1 0 0 −1 −1 0 0, which splits into

Upper binary pattern: 1 1 0 0 0 0 0 0 = 192
Lower binary pattern: 0 0 0 0 1 1 0 0 = 12

For convenience, the experiments use a coding mechanism that separates each ternary code into positive and negative halves, as shown above. These are thereafter viewed as two independent LBP descriptor channels, for which separate histograms and similarity metrics are computed, with the outcomes integrated only at the end of the computation.



Finally, the upper and lower binary patterns are each used to calculate a histogram, and together they yield the feature vector used to measure similarity between images.
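The split into two binary channels can be sketched in Python as below. The bit weighting (most significant bit first) and the treatment of values exactly t away from g_c are chosen so that the worked example above reproduces 192 and 12; both choices are our assumptions:

```python
def ltp_channels(neighbors, gc, t=5):
    """Upper/lower LBP channels of the LTP of Eq. (3)."""
    ternary = []
    for gp in neighbors:
        if abs(gp - gc) <= t:       # zone around gc -> 0 (boundary as in text)
            ternary.append(0)
        elif gp > gc:               # above gc + t -> +1
            ternary.append(+1)
        else:                       # below gc - t -> -1
            ternary.append(-1)
    n = len(ternary)
    upper = sum((v == +1) << (n - 1 - i) for i, v in enumerate(ternary))
    lower = sum((v == -1) << (n - 1 - i) for i, v in enumerate(ternary))
    return upper, lower

# Worked example from the text: gc = 54, t = 5 -> (192, 12)
print(ltp_channels([78, 99, 50, 49, 13, 12, 57, 54], 54))
```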

3 Proposed Approach

The conventional local binary pattern encodes the relation between the referenced pixel and its surrounding neighbors. In comparison, the proposed local majority intensity pattern approach encodes the correlation among the referenced pixel and all of its neighbors. Here, odd-position pixels have five comparison neighbors and even-position pixels have three, and we consider the majority among the sign differences with those neighbors. Where the existing method (LTP) uses threshold values, the proposed method uses majority sign values as binary values. In the local majority intensity pattern (LMIP), two different sets of binary values are constructed for the referenced pixel and the central pixel; the central pixel is not considered when building the pattern for the referenced pixel and vice versa. The proposed majority principle works on the majority of signs obtained by differencing each pixel with its neighbors: the bit is 0 if the majority sign is negative and 1 if the majority sign is positive. In both of our approaches, instead of two separate sets of patterns, we build only one set of binary patterns, where the central pixel is also considered a neighbor, and the final binary values are computed from the majority sign among the differences of the reference pixel with its neighbors.

s_i = \begin{cases} (g_{(i-1) \bmod P},\; g_{(i+1) \bmod P},\; g_c), & \text{if } i \bmod 2 = 0 \\ (g_{(i-2) \bmod P},\; g_{(i-1) \bmod P},\; g_{(i+1) \bmod P},\; g_{(i+2) \bmod P},\; g_c), & \text{else} \end{cases}   (4)

sign(x, y) = \begin{cases} +1, & \text{if } (x - y) \ge 0 \\ 0, & \text{if } (x - y) < 0 \end{cases}   (5)

B_i(k) = sign(s_i(k), g_i) \quad \forall k \in \{1, \ldots, M\}   (6)

B(i) = \begin{cases} 1, & \text{if } \sum_{j=1}^{M} B_i(j) \ge \lceil M/2 \rceil \\ 0, & \text{else} \end{cases}   (7)

Local Majority Intensity Pattern (LMIP) = \sum_{m=1}^{P} 2^{(m-1)} \times B(m)   (8)

As an example, consider the 3 × 3 neighborhood

0    0    23
0    63   90
1    100  123

Finally, the resulting binary pattern is 0 1 1 1 1 1 0 0 = 124. The referenced pixel's value depends on how its neighbors' pixel values behave with respect to it. To capture the behavior of the referenced pixel with more detailed information, we consider the non-diagonal neighbors of the referenced pixel. We then compute the differences between the intensities of the neighboring pixels and the intensity of the current pixel to find the binary patterns and obtain the feature descriptor value (124). This is applied to every pixel of the image. Figure 2 shows the block diagram of the proposed method.
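Our reading of Eqs. (4)–(8) is sketched below in Python. The neighbor indexing, the composition of the comparison sets, and the strict-majority rule are assumptions, since the extracted equations are ambiguous on these points:

```python
def lmip_code(patch):
    """Illustrative LMIP of the center pixel of a 3x3 patch (nested lists)."""
    gc = patch[1][1]
    # g1..g8 clockwise from the top-left corner (assumed ordering)
    g = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    P = 8
    bits = []
    for i in range(P):                                # neighbor g_(i+1)
        if (i + 1) % 2 == 0:                          # even position: Eq. (4)
            s = [g[(i - 1) % P], g[(i + 1) % P], gc]
        else:                                         # odd position: Eq. (4)
            s = [g[(i - 2) % P], g[(i - 1) % P],
                 g[(i + 1) % P], g[(i + 2) % P], gc]
        votes = sum(x - g[i] >= 0 for x in s)         # Eqs. (5)-(6)
        bits.append(1 if votes > len(s) // 2 else 0)  # majority rule, Eq. (7)
    return sum(b << m for m, b in enumerate(bits))    # Eq. (8), weight 2^(m-1)
```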


Fig. 2 Block diagram of the proposed method




3.1 Construct the Histogram

These binary patterns are converted into local patterns; after recognizing the local pattern PTN, the entire image is represented by creating a histogram using

H_S(l) = \frac{1}{N_1 \times N_2} \sum_{j=1}^{N_1} \sum_{k=1}^{N_2} f_2(PTN(j, k), l)   (9)

l \in [0,\; P(P - 1) + 2]   (10)

f_2(x, y) = \begin{cases} 1, & \text{if } x = y \\ 0, & \text{else} \end{cases}   (11)

where N_1 × N_2 represents the size of the input image. The pattern histograms are combined to yield the feature vector [9], and this feature vector is compared for similarity against the trained feature database to get relevant images (see the sketch after the algorithm below).

Algorithm: construct the feature vector of an image using LMIP.
Input: color or gray image. Output: feature vector.
1. Convert the input image into a grayscale image if it is colored.
2. Find the local differences for each referenced pixel based on its position (odd/even).
3. Construct binary patterns based on the majority principle (majority of +ve → 1, majority of −ve → 0).
4. Form a binary string for the referenced pixel and convert it into a decimal value.
5. Create the feature vector and compare the query image with the images in the database.
6. Retrieve the images based on the best matches.
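Equations (9)–(11) amount to a normalized histogram over the pattern codes; a minimal numpy sketch (ours, assuming 8-bit codes and hence 256 bins) is:

```python
import numpy as np

def lmip_histogram(pattern_map, n_bins=256):
    """Normalized histogram of a pattern-code image, per Eqs. (9)-(11)."""
    n1, n2 = pattern_map.shape
    hist = np.bincount(pattern_map.ravel(), minlength=n_bins)
    return hist / float(n1 * n2)       # the 1/(N1*N2) factor of Eq. (9)
```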

4 Results and Discussions

4.1 Precision and Recall

For the performance evaluation of our proposed method, seven benchmark color and gray datasets are employed, with precision, recall, and F-measure used for performance comparison. During evaluation, each query image yields a feature vector by following the steps of the proposed method. The comparison between the query image's feature vector and the dataset images' feature vectors is carried out using the Euclidean distance measure. A rank matrix of size N × N is obtained from the calculated distances, as shown in Fig. 3.

Fig. 3 Rank matrix representation for 1000 images dataset

Here N denotes the total number of images in the dataset, and cell Rank(k, i) of the rank matrix refers to the kth most similar image for the ith query image. Letting N_ic be the total number of images in class C_i and n the number of retrieved images for a given query image i, precision and recall are computed using Eqs. (12)–(14). Equations (15) and (16) compute the average precision and average recall for the jth class, respectively, and Eqs. (17) and (18) compute the average precision rate (APR) and average recall rate (ARR), where N_c represents the total number of classes in the dataset.

P(i, n) = \frac{1}{n} \sum_{k=1}^{n} f_3(Rank(k, i))   (12)

R(i, n) = \frac{1}{N_{ic}} \sum_{k=1}^{n} f_3(Rank(k, i))   (13)

f_3(Rank(k, i)) = \begin{cases} 1, & Rank(k, i) \in C_i \\ 0, & \text{else} \end{cases}   (14)

P_{avg}(j, n) = \frac{1}{N_{ic}} \sum_{i=1}^{N_{ic}} P(i, n)   (15)

R_{avg}(j, n) = \frac{1}{N_{ic}} \sum_{i=1}^{N_{ic}} R(i, n)   (16)

Average precision rate (APR) and average recall rate (ARR): precision is defined as the ratio of the number of relevant images retrieved to the total number of images retrieved for a given query. Recall is the ratio of the number of relevant images retrieved to the total number of images in the same class as the query image. The average precision over different step sizes m_1, m_2, …, m_k is the APR; similarly, the average recall over different step sizes is the ARR [10].

APR(n) = \frac{1}{N_c} \sum_{j=1}^{N_c} P_{avg}(j, n)   (17)

ARR(n) = \frac{1}{N_c} \sum_{j=1}^{N_c} R_{avg}(j, n)   (18)



Table 1 Different benchmark image datasets considered and their properties

Dataset                    Total no. of images   No. of images in each group   No. of groups   Step size
Color texture datasets
  VisTex                   640                   16                            40              4
  Brodatz                  1856                  16                            116             4
  Stex                     7616                  16                            476             4
  Color Brodatz            2800                  25                            112             5
Natural image datasets
  Corel 1 k                1000                  100                           10              10
  Corel 5 k                5000                  100                           50              10
  Corel 10 k               10,000                100                           100             10


4.2 F-Measure

The F-measure is a single value reflecting the relationship between precision and recall. It is obtained by assigning equal weight to both precision and recall in the harmonic mean, as given in Eq. (19):

F\text{-measure}(n) = \frac{2 \times APR(n) \times ARR(n)}{APR(n) + ARR(n)}   (19)
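The whole evaluation pipeline of Eqs. (12)–(19) can be summarized by the short sketch below (ours; it assumes a numpy label array of class ids and equal-sized classes, in which case averaging over all queries equals the per-class averaging of Eqs. (15)–(18)):

```python
import numpy as np

def apr_arr_f(rank, labels, n):
    """APR, ARR and F-measure for top-n retrieval, per Eqs. (12)-(19).

    rank[k, i] is the database index of the k-th most similar image to
    query i (the rank matrix of Fig. 3); labels holds class ids.
    """
    precisions, recalls = [], []
    for i in range(len(labels)):
        hits = sum(labels[rank[k, i]] == labels[i] for k in range(n))
        n_class = np.sum(labels == labels[i])         # Nic
        precisions.append(hits / n)                   # Eq. (12)
        recalls.append(hits / n_class)                # Eq. (13)
    apr, arr = np.mean(precisions), np.mean(recalls)  # Eqs. (15)-(18)
    f_measure = 2 * apr * arr / (apr + arr)           # Eq. (19)
    return apr, arr, f_measure
```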

The seven benchmark image datasets considered are given in Table 1, and the detailed results for each are given in the remaining part of this section.

Dataset-1 (VisTex): The first texture image dataset considered, the VisTex texture dataset [11], contains a total of 484 images, out of which 40 are considered for experimentation. The actual image dimension is 512 × 512. Each of these 40 images is divided into 16 non-overlapping subimages of dimension 128 × 128, which results in a total of 640 texture images. From these 640 images, images 1, 17, 33, 49, …, 625, which are the first subimage of each of the 40 actual texture images, are shown in Fig. 4. All performance results are obtained on this 640-texture-image dataset, and Table 2 shows the three performance measure values.

Dataset-2 (Brodatz): The Brodatz texture database has 116 different textures. We utilize 109 textures from the photographic album of Brodatz textures [12] and seven textures from the University of Southern California [13].



Fig. 4 Forty VisTex texture images considered

Table 2 Performance measures for VisTex

          LTP (VisTex_DB-640)            LMIP (VisTex_DB-640)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-4     93.79   23.45   37.52          97.3    24.33   38.92
Top-6     89.22   33.46   48.66          94.51   35.44   51.55
Top-8     85.72   42.86   57.15          91.56   45.78   61.04
Top-10    82.66   51.66   63.58          88.19   55.12   67.84
Top-12    79.26   59.44   67.94          84.95   63.71   72.81
Top-16    71.15   71.15   71.15          77.56   77.56   77.56

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate

Each texture is 512 × 512 in size, and these images are divided into 16 non-overlapping 128 × 128 subimages, constructing an 1856 (116 × 16) image database; sample images are shown in Fig. 5. Table 3 shows the three performance measure values.

Dataset-3 (STex): The other color texture dataset considered is the Salzburg texture image dataset (STex) [14], which contains a total of 476 texture images, all of which are used to test the performance of the different methods. Here also, each texture image is divided into 16 non-overlapping subimages of dimension 128 × 128, which results in a total of 7616 images. Figure 6 shows forty of these 7616, each taken from a different one of the 476 actual texture images. All three evaluation measures are given in Table 4.



Fig. 5 Sample images of Brodatz database (one image per category)

Table 3 Performance measures for Brodatz

          LTP (Brodatz_DB-1856)          LMIP (Brodatz_DB-1856)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-4     88.21   22.05   35.29          92.86   23.22   37.14
Top-6     83.23   31.21   45.4           89.68   33.63   48.92
Top-8     79.26   39.63   52.84          86.95   43.48   57.97
Top-10    76.24   47.65   58.65          84.21   52.63   64.78
Top-12    73.24   54.93   62.77          81.37   61.03   69.75
Top-16    66.87   66.87   66.87          75.6    75.6    75.6

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate

Dataset-4 (Color Brodatz): For the color Brodatz texture image dataset [15], we divided each image into 25 non-overlapping subimages, resulting in a total of 2800 images. The first subimage from each of the 112 textures is shown in Fig. 7, and the results are illustrated in Table 5.

Dataset-5 (Corel-1 K): This dataset [16] consists of a total of 1000 images in 10 different categories, where each category contains 100 images. The various categories are Africans, Beaches, …, Food. The size of each image in this dataset is 384 × 256 or 256 × 384. Three images from each group, a total of 30 images of this dataset, are shown in Fig. 8. The three performance measure values for this image dataset are shown in Table 6.

Dataset-6 (Corel-5 K): The Corel-5 K image dataset [17] contains 50 categories, where each category has 100 images, totaling 5000 images. Figure 9 shows a total of 50 images, one image from each of the 50 categories. As with the Corel-1 K image dataset, all three performance measures are evaluated and shown in Table 7.



Fig. 6 Forty of the STex texture images from 7616 images

Table 4 Performance measures for STex

          LTP (STex_DB-7616)             LMIP (STex_DB-7616)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-4     68      17      27.2           79.87   19.97   31.95
Top-6     58.92   22.1    32.14          72.12   27.04   39.34
Top-8     52.46   26.23   34.98          65.97   32.98   43.98
Top-10    47.56   29.72   36.58          60.79   37.99   46.76
Top-12    43.62   32.72   37.39          65.18   42.13   48.15
Top-16    37.19   37.19   37.19          48.25   48.25   48.25

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate

Dataset-7 (Corel-10 K): The third natural image dataset considered is the Corel-10 K image dataset [17]. This dataset consists of 100 categories with 100 images each, resulting in a total of 10,000 images. The dataset contains images of 53 different dimensions, e.g., 128 × 192, 192 × 128, etc. In Fig. 10, a total of 100 images is given, one from each of the 100 categories of this image dataset. Table 8 shows all three performance measure results for the Corel-10 K dataset.



Fig. 7 The 112 color Brodatz textures (one subimage each) from the 2800 texture images

Table 5 Performance measures for color Brodatz

          LTP (color Brodatz DB-2800)    LMIP (color Brodatz DB-2800)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-5     86.71   13.87   23.92          92.67   15.67   26.29
Top-6     81.55   19.57   31.57          88.42   21.22   34.23
Top-8     77.51   24.8    37.58          85.83   27.47   41.61
Top-10    74.35   29.74   42.48          83.35   33.34   47.63
Top-12    71.49   34.31   46.37          80.89   38.83   52.47
Top-16    66.49   42.55   51.89          76.43   48.91   59.65
Top-25    56.49   56.49   56.49          66.78   66.78   66.78

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate

5 Future Work and Conclusion

The local majority intensity patterns descriptor, applied to the texture databases Brodatz, VisTex, and STex and to natural databases such as Corel-1 K, Corel-5 K, and Corel-10 K, achieves good results. The recognition accuracy is compared across these various databases, and the outcomes show that the LMIP descriptor accomplishes better accuracy than local ternary pattern (LTP) feature descriptors. When testing and comparing the average precision, recall, and F-measure performance, the rates of the proposed model are found to increase. As future work, the proposed descriptor could be applied to cross-dataset expression recognition and to other facial challenges such as pose recognition. To design texture-based image retrieval algorithms using segmentation techniques, we are implementing the proposed algorithms in the MapReduce framework to achieve optimal load balancing for storage and efficient retrieval of the Corel, Brodatz, and biomedical image datasets under high load.



Fig. 8 Corel-1 K samples (three images per category)

Table 6 Performance measures for COREL-1 K

          LTP (Corel-1 k)                LMIP (Corel-1 k)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-10    53.45   5.34    9.72           59.58   5.96    10.83
Top-12    51      6.12    10.93          57.37   6.88    12.29

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate




Fig. 9 Corel-5 K samples (one image per category)

Table 7 Performance measures for COREL-5 K

          LTP (Corel-5 k)                LMIP (Corel-5 k)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-10    31.63   3.16    5.75           40.37   4.07    7.41
Top-12    29.08   3.49    6.23           38.23   4.59    8.19

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate



Fig. 10 Corel-10 K samples

Table 8 Performance measures for COREL-10 K

          LTP (Corel-10 k)               LMIP (Corel-10 k)
          APR     ARR     F-measure      APR     ARR     F-measure
Top-10    26.55   2.65    4.83           34.31   3.43    6.24
Top-12    24.18   2.9     5.18           31.81   3.82    6.82

LTP Local Ternary Pattern, LMIP Local Majority Intensity Pattern, APR Average Precision Rate, ARR Average Recall Rate

Table 9 Experimental setup in our system

Tools                 Specifications
Computer hardware     CPU: Intel Core i7-3770 @ 3.40 GHz; GPU: NVIDIA GeForce GTX 1060 6 GB; RAM: 16 GB
Operating system      Ubuntu 16.04, 64-bit
Developing tools      MATLAB R2019a



References

1. Dai, O.E., Demir, B., Sankur, B., Bruzzone, L.: A novel system for content-based retrieval of single and multi-label high-dimensional remote sensing images. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 11(7), 2473–2490 (2018)
2. Srivastava, P., Khare, A.: Content-based image retrieval using scale invariant feature transform and moments. In: 2016 IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics Engineering (UPCON), pp. 162–166. IEEE (2016)
3. Sungheetha, A., Rajesh Sharma, R.: A novel CapsNet based image reconstruction and regression analysis. J. Innov. Image Process. (JIIP) 2(03), 156–164 (2020)
4. Vijayakumar, T., Vinothkanna, R.: Retrieval of complex images using visual saliency guided cognitive classification. J. Innov. Image Process. (JIIP) 2(02), 102–109 (2020)
5. Ojala, T., Pietikainen, M., Harwood, D.: A comparative study of texture measures with classification based on feature distributions. Pattern Recognit. 29(1), 51–59 (1996)
6. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002)
7. Pietikainen, M., Ojala, T., Scruggs, T., Bowyer, K.W., Jin, C., Hoffman, K., Marques, J., Jacsik, M., Worek, W.: Rotational invariant texture classification using feature distributions. Pattern Recognit. 33(1), 43–52 (2000)
8. Tan, X., Triggs, B.: Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 19(6), 1635–1650 (2010)
9. Rivera, A.R., Castillo, J.R., Chae, O.O.: Local directional number pattern for face analysis: face and expression recognition. IEEE Trans. Image Process. 22(5), 1740–1752 (2012)
10. Kanaparthi, S.K., Raju, U.S.N., Shanmukhi, P., Khyathi Aneesha, G., Ehsan Ur Rahman, M.: Image retrieval by integrating global correlation of color and intensity histograms with local texture features. Multim. Tools Appl. 1–37 (2019)
11. Pentland, A., Adelson, T.: VisTex Dataset. https://vismod.media.mit.edu/pub/VisTex/
12. Brodatz, P.: Texture: A Photographic Album for Artists and Designers. Dover, New York (1966)
13. University of Southern California, Los Angeles: Signal and Image Processing Institute. https://sipi.usc.edu/database/database.php?volume=textures
14. Kwitt, R.: Salzburg Texture Image Dataset. https://www.wavelab.at/sources/STex/
15. Safia, A., He, D.-C.: Multiband Texture (MBT) dataset. https://multibandtexture.recherche.usherbrooke.ca/index.html
16. Wang, J.Z.: Modeling Objects, Concepts, Aesthetics and Emotions in Big Visual Data. https://wang.ist.psu.edu/docs/home.shtml
17. Liu, G.-H., et al.: Corel-10k dataset. https://www.ci.gxnu.edu.cn/cbir/Dataset.aspx

A Comprehensive Survey of NOMA-Based Cooperative Communication Studies for 5G Implementation Mario Ligwa and Vipin Balyan

Abstract The evolution of wireless communication has shifted the world towards modern smart networks that support IoT applications. The deployment of fifth-generation (5G) networks is ongoing, and the number of data users is predicted to explode, including massive connectivity for billions of applications that support the Internet of Things. This follows from the growing demand for high data rates and access to multimedia content. The non-orthogonal multiple access (NOMA) scheme is a multiple access technology that aims to increase spectral efficiency and allow more applications to be connected, as network operators seek to provide more effective, reliable, and faster service. Scientific evidence has shown that NOMA performs better when incorporated with recommended wireless technologies to improve network performance, fairness, reliability, diversity, and efficiency; these include cooperative relaying, massive multiple-input multiple-output (MIMO), beamforming to mitigate signal losses, space–time coding, and network coding. NOMA approaches come in different types, such as power-domain and code-domain NOMA. This paper focuses on power-domain NOMA, which uses superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver. The strategies and benefits of NOMA-based cooperative communication for 5G deployment are comprehensively surveyed in this paper.

Keywords Cooperative communication · Multicasting · 5G · NOMA · EMBB · URLLC

1 Introduction

Reliability and system performance are both key network challenges in wireless communication; to improve them, NOMA has received many recommendations in both academia and industry. Non-orthogonal multiple access (NOMA) has also been proposed as a sufficient multiple access scheme to replace the present traditional orthogonal multiple access (OMA) [1].

M. Ligwa · V. Balyan (B)
Department of Electrical, Electronics and Computer Engineering, Cape Peninsula University of Technology, Cape Town, South Africa

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_49




Through the radio connection, the key component that connects UEs with the core network is the radio access network (RAN). For the RAN to function optimally, it is desirable to develop a suitable multiple access technique to improve system capacity [2]. The implementation of 5G is underway to support multiple access for many applications and to enable full utilization of spectrum resources; owing to the insufficient capability of the existing multiple access schemes, NOMA has received many recommendations due to its ability to serve many users per resource block [3]. Wireless communication exploits radio resource schemes to improve system spectral efficiency and data throughput. Over the past four decades, mobile communication has advanced alongside an exponential rise in data applications such as smartphones and seamless smart connectivity, and that scenario has placed more challenges on the existing orthogonal multiple access (OMA) scheme [4–6]. In general, data traffic under low bandwidth and poor network connections leads to slow video downloads, which results in bad quality of service (QoS) as well as poor quality of experience (QoE), creating a huge demand for network operators to seek a new radio access scheme to handle such volumes. Although most of these problems and solutions share common aspects and concepts, their analytical algorithms differ in the context of wireless communications. As an effective and consistent means of improving network reliability, performance, and spectral efficiency, NOMA has been widely studied in [7–9]. In addition, the proposed cooperative network guarantees further reliability improvement, including better cell-edge network reception. By introducing high-reliability and high-spectral-efficiency mechanisms, these kinds of algorithms are well suited to 5G technology and beyond; as 5G is rolled out, further challenges have already been identified and researched globally, including smart handhelds, growing data consumption, smart cities, the Internet of Things, and so on. This research provides a comprehensive and up-to-date survey of existing research that applies NOMA technology to cooperative communication techniques for 5G applications. It further presents and classifies NOMA-based multicast studies in the context of 5G technology. The paper is structured as follows: Sect. 1 presents the system structure of the NOMA and OMA schemes, including uplink and coordinated structures; Sect. 2 presents an overview of cooperative communication and discusses some important design considerations; Sect. 3 lists existing NOMA-based cooperative communication techniques for improving system capacity and reliability; Sect. 4 outlines several observations regarding cooperative communication design processes, followed by identified open areas for possible future research; and Sect. 5 concludes this paper.

Figure 1 presents the two multiple access scenarios: the existing conventional OMA scheme and the NOMA scheme. The basic benefits of a NOMA system over OMA are as follows. One of the main benefits of the NOMA scheme is that two users can concurrently use 1 Hz of bandwidth (BW), whereas in the existing OMA technique user 1 uses α Hz of BW and the remaining (1 − α) Hz is allocated to user 2 [2]. In NOMA, SIC is performed by user 2 to decode the signal of user 1.



Fig. 1 System structure of NOMA and OMA [2]

It can also be noted that user 2 has a higher channel gain than user 1; the decoded user-1 signal is subtracted from user 2's received signal, and the result is used to decode the signal for user 2 [2]. At user 1, SIC is not performed, which simply means decoding happens immediately. Similarly, the NOMA technique can be utilized in uplink scenarios, as shown in Fig. 2, where SIC is performed at the base station [2]. With the aid of SIC, the base station decodes the signals of user 1 and user 2 in two phases. Initially, the signal of user 2 is decoded first, treating the user-1 signal as an unwanted signal. In the second stage, the receiver subtracts the decoded user-2 signal from the received signal and decodes the signal from user 1 [2]. It should be noted that user 2's signal can be the strongest at the BS since user 2 has the strongest channel advantage. There are fundamental differences in NOMA between uplink and downlink scenarios. The key differences concern the decoding methods: in the downlink, strong users successively decode and cancel the signals of poor users before decoding their own signals to increase throughput gains [2].
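The bandwidth-sharing benefit of Fig. 1 can be checked numerically with a simple rate comparison. The sketch below uses assumed channel gains, powers, and noise levels (none of these numbers come from the survey) and a common simplified OMA model:

```python
import numpy as np

g1, g2 = 0.5, 2.0     # channel gains: user 1 weak, user 2 strong (assumed)
p1, p2 = 0.8, 0.2     # NOMA power split, total power normalized to 1
alpha = 0.5           # OMA bandwidth fraction given to user 1
noise = 0.1

# NOMA over the full 1 Hz: user 1 treats user 2 as interference;
# user 2 applies SIC and sees no interference from user 1
r1_noma = np.log2(1 + p1 * g1 / (p2 * g1 + noise))
r2_noma = np.log2(1 + p2 * g2 / noise)

# OMA: each user gets its bandwidth fraction of an interference-free channel
r1_oma = alpha * np.log2(1 + g1 / noise)
r2_oma = (1 - alpha) * np.log2(1 + g2 / noise)

print(f"NOMA: R1 = {r1_noma:.2f}, R2 = {r2_noma:.2f} bit/s/Hz")
print(f"OMA : R1 = {r1_oma:.2f}, R2 = {r2_oma:.2f} bit/s/Hz")
```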

Fig. 2 System structure of uplink NOMA [2]



Fig. 3 System structure of NOMA in coordinated system [2]

In the uplink, by contrast, the BS successively decodes and cancels the strong users' signals before decoding the signals of weak users. Due to the currently restricted processing capabilities of mobile users and the absence of a centralized uplink processing unit, appropriate multi-user detection and interference cancellation techniques are more difficult than in downlink situations. Moreover, poor users are more likely to be impacted by interference from strong users' signals (Fig. 3).
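To make the downlink SIC procedure concrete, the following numpy sketch (ours; BPSK symbols and all numeric values are arbitrary illustrative choices) superposes two users' signals and lets the strong user cancel the weak user's component before decoding its own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.choice([-1.0, 1.0], n)      # weak user's (user 1) BPSK symbols
x2 = rng.choice([-1.0, 1.0], n)      # strong user's (user 2) BPSK symbols
p1, p2 = 0.8, 0.2                    # more power to the weak user
s = np.sqrt(p1) * x1 + np.sqrt(p2) * x2   # superposition coding at the BS

y2 = s + 0.05 * rng.standard_normal(n)    # strong user's received signal

# SIC at user 2: first decode user 1's (higher-power) component ...
x1_hat = np.sign(y2)
# ... subtract it, then decode user 2's own symbols from the residual
x2_hat = np.sign(y2 - np.sqrt(p1) * x1_hat)
print("user-2 bit error rate:", np.mean(x2_hat != x2))
```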



2 Overview of Cooperative Communication

2.1 Analysis of NOMA in the Context of 5G

The analysis of NOMA for 5G was surveyed by Yingmin Wang, Bin Ren, Shaohui Sun, Shaoli Kang, and Xinwei Yue in 2016. The authors provided a detailed, comprehensive survey of NOMA schemes, compared NOMA schemes in the context of 5G solution approaches and challenges [3], and outlined future research directions.

I. The early study related to NOMA was proposed and investigated in the pioneering work of [14]. The authors investigated the SIC receiver as the baseline receiver scheme for non-orthogonal multiple access; the study aimed to demonstrate that NOMA with SIC in downlink scenarios enhances both the system capacity and the cell-edge user throughput. The study also demonstrated improvements under high volumes of mobile traffic, as well as in system throughput performance, when compared with previous access schemes such as OFDMA, OMA, CDMA, FDMA, and TDMA.

II. System performance based on NOMA was first proposed in 2013 [15]. The authors provided a detailed literature review that is very similar to the work proposed by [16]. Although the findings are much alike, there are differences in the context proposed: the authors also conducted a feasibility study of NOMA for future radio access (FRA) from a practical standpoint, covering key sub-solutions such as signalling overhead, SIC error propagation, multi-user power allocation, and high-mobility scenarios. The findings are similar to those of the previous work proposed by [16], especially the comparison against OMA, OFDMA, FDMA, CDMA, and Long Term Evolution (LTE); through simulation evaluation, the study indicates that NOMA outperforms the previous access schemes, and the same study revealed even greater improvement when NOMA is combined with MIMO.

III. In 2017, Boya Di, Lingyang Song, Yonghui Li, and Geoffrey Ye Li addressed latency and system reliability in NOMA. They investigated how NOMA can be exploited to minimize latency while improving packet reception probability [17]. This research focuses on vehicular broadcast and how to broadcast safety information; the authors formulated key elements such as centralized scheduling and resource allocation, and the simulation results show that the proposed method outperforms traditional orthogonal multiple access in both reliability and latency.



Fig. 4 System model of a cooperative relay [19]

2.2 Cooperative Communication Structure

Figure 4 shows the structure of cooperative communication where relays are utilized. The efficiency of integrating cooperative communication with NOMA was recommended by [7]. Cooperative communication has received considerable attention for 5G implementation due to its ability to offer many advantages, such as spatial diversity, which minimizes fading while addressing the problem of implementing multiple antennas on small communication terminals [18]. Several relay nodes are allocated to assist a source in routing information to the respective destinations in cooperative communications [2]. Another key advantage of exploiting cooperative communications in NOMA is that it can enhance system performance, including efficiency and reliability, which are both key challenges in wireless communication.

2.3 Types of Cooperative Communication (CC)

In [20], a cooperative communication scheme is proposed in which users with strong channel conditions act as relays by decoding the signals of users with poor channel conditions. Some authors proposed cooperation among users, which is more suitable for short-range communication such as ultra-wideband and Bluetooth. To expand coverage, there is a need to exploit cooperative communication in which a dedicated relay is utilized; the authors in [21] proposed a coordinated transmission protocol to assist users communicating with the BS, while others relied on a relaying technique to obtain the information being transmitted. In related work, Kim et al. suggested device-to-device (D2D)-aided cooperative communication to enhance system efficiency [20].



3 Survey of Existing Studies on NOMA-Based Cooperative Communication Techniques

3.1 NOMA-Based Cooperative Communication Technique Based on V2X Multicasting

NOMA with cooperative relaying has been studied recently to improve reliability and system capacity [22]. The data is sent through dedicated relays, or through users that have successfully decoded it, in methods called relay-aided cooperative NOMA and user-aided cooperative NOMA, respectively. References [21, 23] have proposed relay-aided cooperative NOMA. References [4] and [21, 23] investigated the ergodic capacity of full/half-duplex relay-assisted cooperative NOMA to balance user fairness.

3.2 NOMA-Based Multicast for High Reliability in 5G Networks

To deliver high-resolution, high-quality multimedia content in 5G, multicasting is regarded as a suitable technique because it uses the available resources highly efficiently [5]. The main benefit of the multicasting technique is that the base station can deliver content over shared allocated resources simultaneously, especially when two users in the same cell request the same content [6]. This approach has the further advantage of utilizing limited spectral resources effectively, simply by decreasing the transmission power at the base station [24]. In 2017, the authors of [25] proposed cooperative multicast relay NOMA (CMR-NOMA) to improve the NOMA downlink. The study compared the proposed scheme with existing NOMA in terms of the users' average bit error rate and also investigated the utilization of a modified SIC. The findings and simulation results indicate that CMR-NOMA outperforms C-NOMA and NOMA in system performance, and the proposed modified SIC was shown to reduce the computational complexity at the receiver UEs [25]. Also in 2017, the authors of [26] proposed a dynamic cooperative MCR-NOMA scheme; their objective was to investigate the performance improvement obtained when multicast secondary users act as relays to enhance both the secondary and the primary network performance. The authors also proposed scheduling strategies to increase spectrum diversity [26]. Their findings demonstrate the large performance gain of MCR-NOMA, and numerical results validate the analysis.



Fig. 5 System model of cooperative relaying [21]

3.3 Cooperative Relaying in 5G Networks

Cooperative relaying has been proposed for 5G, as shown in Fig. 5, to improve system coverage and spectral efficiency [27]. A full-duplex (FD) technique has been proposed in which an FD relay can be shared among many source–destination pairs. A decode-and-forward cooperative relay technique was proposed by [28]. Reference [29] proposed a relay selection mechanism to enhance transmission reliability for users with weak channel conditions.

4 Design Observations and Identified Research Gaps

4.1 Key Design Observations from the Survey

Considering the above survey, several observations can be outlined concerning NOMA-based cooperative communication. Firstly, regarding system performance, the design goal is to develop a technique that best supports the application of NOMA-based cooperative communication in each cell so that it yields the desired optimal performance; the design process takes into consideration limitations such as the desired number of users per cell/cluster and the channel conditions of all the users. Secondly, regarding the power-allocation process, the power allocation algorithm aims to increase either the total system capacity or the system's user fairness; the design process considers parameters such as the number of users per cell, the type of channel state information (CSI) estimation, and whether it is short- or long-term [30, 31].



4.2 Identifying Key Research Areas from NOMA-Based Cooperative Communication

A few challenges that constitute open research areas, and therefore possible future research topics for NOMA-based cooperative networks, have been identified from the survey presented above. Note that a few previous papers have presented similar concepts with challenges, but the solutions proposed are not similar. The observations made can be listed as follows:

a. The field of study is still fairly new: Only a very few studies to date have looked at NOMA-based cooperative multicast for high reliability in 5G, which may imply that research in this domain is still at a very premature stage.

b. Most existing design works focus on system capacity and spectral efficiency: The identified works on the design of NOMA-based cooperative communication for 5G focus only on improving system coverage and capacity.

c. No work has been reported regarding the system's user fairness: This may imply that one of the most important elements of 5G networks, ensuring enhanced fair access to network resources for far and near users alike, is still not addressed in this domain.

d. The utilization of spectrum resources: Most of the identified works still operate within the microwave frequency band. This may imply that the available spectrum for the network is still relatively limited, and so is the achievable network capacity; this differs from the millimetre-wave spectrum, which guarantees the network considerably more spectrum and therefore higher achievable capacity.

5 Conclusion

This paper has presented a comprehensive and up-to-date review of NOMA-related work on cooperative communication and multicast techniques. Cooperative communication and multicasting are key enabling mechanisms in fifth-generation (5G) networks. From the work presented, it appears that the work done in this field is still at an early stage, and the area therefore remains quite open for future research. With the anticipated 5G rollout ongoing, integrated networks are desirable to support the proposed requirements, such as reliability, efficiency, and better system performance, in 5G and beyond.



References

1. Liu, Y., Qin, Z., Elkashlan, M.: Non-orthogonal multiple access for 5G and beyond, pp. 1–65 (2018)
2. Islam, S.M.R., Avazov, N., Dobre, O.A.: Power-domain non-orthogonal multiple access (NOMA) in 5G systems: potentials and challenges. IEEE Commun. Surv. Tutor. 19(2), 721–742 (2017)
3. Wang, Y., Ren, B., Sun, S., Kang, S., Yue, X.: Analysis of non-orthogonal multiple access for 5G. China Commun. 13(10), 52–66 (2016)
4. Balyan, V., Saini, D.S.: Integrating new calls and performance improvement in OVSF based CDMA networks. Int. J. Comput. Commun. 2(5), 35–42 (2011)
5. Balyan, V., Saini, D.S.: A same rate and pattern recognition search of OVSF code tree for WCDMA networks. IET Commun. 8(13), 2366–2374 (2014)
6. Balyan, V., Saini, D.S.: OVSF code slots sharing and reduction in call blocking for 3G and beyond WCDMA networks. WSEAS Trans. Commun. 11(4), 135–146 (2012)
7. Aldababsa, M., Toka, M., Gökçeli, S., Kurt, G.K., Kucur, O.: A tutorial on nonorthogonal multiple access for 5G and beyond. 2018, 1–25 (2018)
8. Yuan, Y., Yan, C.: NOMA study in 3GPP for 5G. In: 2018 IEEE 10th International Symposium on Turbo Codes & Iterative Information Processing (ISTC), pp. 1–5 (2018)
9. Manglayev, T.: Optimum power allocation for non-orthogonal multiple access (NOMA). In: 2016 IEEE 10th International Conference on Application of Information and Communication Technologies (AICT), pp. 1–4 (2016)
10. Do, T.N., et al.: Improving the performance of cell-edge users in NOMA systems using cooperative relaying. IEEE Trans. Commun. 66(5), 1883–1901 (2018)
11. Choi, J.: Non-orthogonal multiple access in downlink coordinated two-point systems. IEEE Commun. Lett. 18(2), 313–316 (2014)
12. Vanka, S., et al.: Superposition coding strategies: design and experimental evaluation. IEEE Trans. Wirel. Commun. 11(7), 2628–2639 (2012)
13. Alamouti, S.M.: A simple transmit diversity technique for wireless communications. IEEE J. Sel. Areas Commun. 16(8), 1451–1458 (1998)
14. Saito, Y., Kishiyama, Y., Benjebbour, A., Nakamura, T., Li, A., Higuchi, K.: Non-orthogonal multiple access (NOMA) for cellular future radio access. In: 2013 IEEE 77th Vehicular Technology Conference (VTC Spring), pp. 1–5 (2013)
15. Benjebbour, A., Saito, Y., Kishiyama, Y., Li, A., Harada, A., Nakamura, T.: Concept and practical considerations of non-orthogonal multiple access (NOMA) for future radio access. In: 2013 International Symposium on Intelligent Signal Processing and Communication Systems, pp. 770–774 (2013)
16. Saito, Y., Benjebbour, A., Kishiyama, Y., Nakamura, T.: System-level performance evaluation of downlink non-orthogonal multiple access (NOMA). In: 2013 IEEE 24th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), vol. 2, pp. 611–615 (2013)
17. Di, B., Song, L., Li, Y., Li, G.Y.: Broadcast communications for 5G V2X services. In: GLOBECOM 2017: 2017 IEEE Global Communications Conference, Singapore, pp. 1–6 (2017)
18. Ibrahim, A.S., Sadek, A.K., Su, W., Liu, K.J.R.: Cooperative communications with relay-selection: when to cooperate and whom to cooperate with? IEEE Trans. Wirel. Commun. 7(7), 2814–2827 (2008)
19. Garg, J., Mehta, P., Gupta, K.: A review on cooperative communication protocols in wireless world. Int. J. Wirel. Mob. Netw. 5(2), 107–126 (2015)
20. Ding, Z., Peng, M., Poor, H.V.: Cooperative non-orthogonal multiple access in 5G systems. IEEE Commun. Lett. 19(8), 1462–1465 (2015)
21. Kim, J., Lee, I.: Non-orthogonal multiple access in coordinated direct and relay transmission. IEEE Commun. Lett. 19(11), 2037–2040 (2015)
22. Sendonaris, A., Erkip, E., Aazhang, B.: User cooperation diversity. Part II: implementation aspects and performance analysis. IEEE Trans. Commun. 51(11), 1939–1948 (2003)



23. Xu, P., Yang, Z.: Optimal relay selection schemes for cooperative NOMA. IEEE Trans. Veh. Technol. 67(8), 7851–7855 (2018)
24. Mi, D., et al.: Demonstrating immersive media delivery on 5G broadcast and multicast testing networks. IEEE Trans. Broadcast. 66(2), 555–570 (2020)
25. Zhang, Y., Wang, X., Wang, D., Zhao, Q., Deng, Q.: NOMA-based cooperative opportunistic multicast transmission scheme for two multicast groups: relay selection and performance analysis. IEEE Access 6, 62793–62805 (2018)
26. Xiao, K., Wang, F., Rutagemwa, H., Michel, K., Rong, B.: High-performance multicast services in 5G Big Data network with massive MIMO. In: 2017 IEEE International Conference on Communications (ICC), Paris, pp. 1–6 (2017)
27. Gendia, A.H., Elsabrouty, M., Emran, A.A.: Cooperative multi-relay non-orthogonal multiple access for downlink transmission in 5G communication systems. In: 2017 Wireless Days, Porto, pp. 89–94 (2017)
28. Lv, L., Chen, J., Ni, Q., Ding, Z.: Design of cooperative non-orthogonal multicast cognitive multiple access for 5G systems: user scheduling and performance analysis. IEEE Trans. Commun. 65(6), 2641–2656 (2017)
29. Kader, F., Shin, S.Y., Leung, V.C.M.: Full-duplex non-orthogonal multiple access in cooperative relay sharing for 5G systems. IEEE Trans. Veh. Technol. 67(7), 5831–5840 (2018)
30. Wang, H., Ma, S., Ng, T.: On performance of cooperative communication systems with spatial random relays. IEEE Trans. Commun. 59(4), 1190–1199 (2011)
31. Ding, Z., Dai, H., Poor, H.V.: Relay selection for cooperative NOMA. IEEE Wirel. Commun. Lett. 5(4), 416–419 (2016)

Analytical Study on Load Balancing Algorithms in Cloud Computing

Manisha Pai, S. Rajarajeswari, D. P. Akarsha, and S. D. Ashwini

Abstract With advancements in the Internet field, cloud computing has become critical for providing services on a pay-per-use basis. It has become effortless to use due to attributes such as fault tolerance, high availability, and scalability. Because of these features and its increased popularity, load balancing becomes extremely important to ensure that tasks are appropriately distributed across servers. Many researchers have created and developed various algorithms to provide a good overview of, and architecture for, cloud services. This paper reviews different load balancing algorithms.

Keywords Cloud computing · Algorithms · Performance metrics · Load balancing

1 Introduction

With the explosion of information on the Internet, storing and computing data has become a herculean task. This situation paved the way for cloud computing, which helps in computing data and storing it across various devices such as tablets, laptops, and phones, as long as the user is on the same account. Cloud computing involves virtualization, distributed computing, load balancing, resource sharing, scheduling, QoS management, and many other attributes [1]. It allows many people to store vast amounts of scalable data that can be accessed at any given time on any device connected to the cloud. Cloud computing also finds application in day-to-day activities. A cloud computing model is effective if its services are used in the best possible way, and such efficient use can be accomplished by ensuring proper management of cloud resources. Resource management is achieved by implementing rigorous resource scheduling, allocation, and efficient resource scalability techniques [2]. In a public deployment, the cloud infrastructure is operated by a cloud provider and made available to the general public. In a private implementation, a single entity solely operates the infrastructure and all activities, which can run on-premises or

M. Pai (B) · S. Rajarajeswari · D. P. Akarsha · S. D. Ashwini Ramaiah Institute of Technology, Bangalore, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_50




off-premises. In a hybrid implementation, the cloud infrastructure combines public and private cloud features, while in a community deployment the infrastructure serves a specific group [3].

2 Cloud Service Models

Cloud computing uses three models to allow flexibility over the product users buy. Each model has a different architecture and service, enabling consumers to choose whatever suits them best.

2.1 Software as a Service (SaaS)

SaaS allows the user to use applications or software whose infrastructure is already provided. It reduces a user's work, as nothing needs to be installed or maintained by the user, saving time and resources. It is ready to use and fully developed, and it can be accessed from various devices connected to the cloud. As hardware is not required for this model, small businesses and individual consumers can use it without paying much. The main disadvantage of SaaS is that it has the lowest speed, as it relies on the Internet to function. It also does not give the user much control over the program.

2.2 Platform as a Service (PaaS)

PaaS helps users develop and run applications, for example for small businesses. It also does not require users to install any hardware or software, making it very cost-effective. It is incredibly scalable and ensures that applications are developed faster. The disadvantages of platform as a service are that it does not itself provide security and may be incompatible when switching between different types of devices.

2.3 Infrastructure as a Service (IaaS)

IaaS allows the provider to maintain the framework components traditionally found in a data center, while the user retains control over the storage and operating systems. It allows for flexibility and simple deployment. This model's disadvantage is that it is costly compared to the other two models and makes it very difficult to monitor a business or a company [4].



3 Load Balancing

Since the cloud holds a large amount of data, it may not work efficiently if the data is not adequately distributed across its virtual servers to ensure service utilization and provide good service to consumers. Here comes the concept of load balancing, which allows the cloud to distribute all the data and ensure that each node is given a fair amount of computation and resources to achieve better performance and response [5]. Many algorithms focus on providing proper amounts of data to each node to ensure speed and efficiency. Computation is divided equally among the virtual nodes so that response time is quick and no server becomes overloaded [6]. This paper provides an overview of the algorithms proposed by researchers for load balancing, addressing different problems and their solutions.

3.1 Load Balancing Types There are two categories of load balancing algorithms [1]:

3.1.1 Static

There is very low variation of load in this type of algorithm. All traffic is divided equally among all the servers. However, to use this algorithm, the network needs in-depth knowledge of the servers' resources to give good process performance. The primary drawback of this type of algorithm is that it can only be used after the network is created.

3.1.2 Dynamic

The algorithm checks the entire network, finds the server with the least work, and gives it a load balancing preference. It uses real-time communication, which helps it understand each server’s load. This algorithm’s specialty is to make decisions using the network’s current state.

3.2 Challenges

Even though cloud computing is a significant business sector these days, many issues remain unsolved. Some of the challenges faced are:

3.2.1 Energy Consumption

Since cloud computing uses a tremendous amount of energy, power saving becomes very important. A few algorithms, especially dynamic ones, can cause massive energy consumption.

3.2.2 The Complexity of the Algorithm

The more complex an algorithm becomes, the more challenging it is to execute. Complex algorithms need to be specified in more detail and sometimes cause confusion between the servers in the network.

3.2.3 Heterogeneous Data

Most users now use and store data that is challenging to segregate. Distributing such data among servers and applying load balancing algorithms to it can be extremely difficult.

3.2.4 Vendor Lock-In

The cloud service may often not be of a good standard, or it may be expensive. Entering an agreement with a cloud service provider may be easier than leaving it [1].

3.2.5 Downtime

No provider can give a platform without a possible downtime. It makes small business owners reliant on Internet connectivity, without which it becomes impossible to use the cloud services [5].

3.2.6 Data Privacy

Users must have an efficient plan to ensure that sensitive information is handled carefully.

3.3 Load Balancing Metrics

Different types of metrics can be tuned to improve the performance and efficiency of load balancing algorithms [7]:


3.3.1 Response Time

It is described as the time required by an algorithm to respond to a user's task request. It includes the service time and the queuing wait time. Performance can be improved by minimizing this metric.

3.3.2 Migration Time

It is described as the time required to move work from a heavily loaded node to a lightly loaded node.

3.3.3 Migration Cost

It is the cost of migrating tasks during the uniform distribution of load. An efficient algorithm should minimize the migration cost.

3.3.4 Waiting Time

It is the time spent by a process waiting in a queue before any resource is allocated to it.

3.3.5 Makespan

It is described as the total time required to finish all scheduled jobs. It includes processing time, shift time, waiting time, and the time needed to access and communicate with resources.

3.3.6 Fault Tolerance

Load balancing algorithms can shift the workload from one node to another working remote node in the event of node failure. The preferred algorithm should have high fault tolerance.

3.3.7 Scalability

It is the design’s intelligence to scale up or down the workloads depending on the number of nodes.

3.3.8 Reliability

It is the probability that a task allocated to a node completes successfully. An efficient algorithm should be highly reliable in case of failures.

3.3.9 Throughput

It is defined as the number of tasks that have been executed within a period. The aim of a load balancing algorithm should be to achieve maximum throughput.

3.4 Degree of Imbalance

It is the degree of imbalance of workload distribution across the nodes after load balancing. Conversely, the degree of balance is the measure of uniform load distribution.

3.4.1 Resource Utilization

It is the measure of currently used resources to the total available resources. The objective of the algorithm must be to utilize the resources efficiently.

3.4.2 Energy Consumption

It measures the energy used by each node after load balancing is done. This metric has to be minimized to avoid overheating the data centers.

3.4.3 Carbon Emission

It is used to calculate the carbon dioxide emitted by virtual machines in various data centers. An optimal load balancing algorithm minimizes this metric to reduce the impact on the environment.

3.4.4 Service-Level Agreement Violation

The SLA is violated when user tasks are not executed. This might occur if all the resources are utilized by the nodes, resulting in rejection of new incoming jobs. A smart algorithm will have fewer SLA violations.

3.4.5 Performance

If all the metrics have been efficiently satisfied, the system’s performance improves significantly.
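Several of the metrics above reduce to simple computations over per-VM loads and per-task finish times. The following minimal Python sketch, using invented numbers, computes makespan, a commonly used degree-of-imbalance measure ((max − min) / mean load), and throughput; the exact definitions used by individual papers may differ.

```python
import statistics

# Hypothetical per-VM loads and per-task finish times after one scheduling
# round; these values are invented for illustration.
vm_loads = [42.0, 55.0, 38.0, 61.0]
task_finish_times = [12.4, 30.1, 22.8, 29.5]

makespan = max(task_finish_times)                 # time when the last task finishes
avg_load = statistics.mean(vm_loads)
degree_of_imbalance = (max(vm_loads) - min(vm_loads)) / avg_load
throughput = len(task_finish_times) / makespan    # tasks completed per time unit

print(f"makespan={makespan}, DI={degree_of_imbalance:.3f}, "
      f"throughput={throughput:.3f}")
```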

4 Existing Load Balancing Algorithms

4.1 An Adaptive Firefly Load Balancing Algorithm

Working: Kaur and Kaur in [8] proposed a modified firefly algorithm that is more advantageous than the original. The fireflies in the adaptive firefly algorithm use a variable-sized step for their movement. The algorithm is based on observations of the insects' behavior: they generate flashes with different intensities, which help other insects trace them. As the distance increases, the intensity reduces, and fireflies become less attracted to far-away fireflies. The algorithm mainly considers two parameters, the intensity of the light and the attractiveness; in a cloud environment, these parameters are mapped to quantities that depict virtual machine utilization.
Implementation: On top of the basic CloudSim, the authors used Cloud Analyst as an interface for creating and integrating the components. Cloud Analyst contains several data centers placed worldwide, with virtual machines that need to be configured.
Results: As the number of data centers established worldwide increases, the response time and the client's processing time are reduced. Hence, this algorithm outperforms the ant colony optimization (ACO) algorithm. As the number of data centers escalates, the load also increases; the virtual machines' scheduling mechanism handles this increased load by sharing burdens between the data centers.
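A bare-bones sketch of the firefly movement rule, applied to a toy one-dimensional minimization problem: each firefly moves toward any brighter (lower-cost) firefly, with attractiveness decaying with distance. The parameters beta0, gamma, alpha, and the objective are illustrative choices, not the tuned values from [8].

```python
import math
import random

def objective(x):
    return (x - 3.0) ** 2   # toy cost: brighter firefly = lower cost

beta0, gamma, alpha = 1.0, 0.5, 0.1          # assumed attractiveness/step parameters
fireflies = [random.uniform(-5, 5) for _ in range(6)]

for _ in range(50):
    for i in range(len(fireflies)):
        for j in range(len(fireflies)):
            # Move firefly i toward any brighter firefly j.
            if objective(fireflies[j]) < objective(fireflies[i]):
                r = abs(fireflies[i] - fireflies[j])
                beta = beta0 * math.exp(-gamma * r * r)   # attractiveness decays with distance
                fireflies[i] += (beta * (fireflies[j] - fireflies[i])
                                 + alpha * (random.random() - 0.5))  # small random step

print("best position found:", min(fireflies, key=objective))
```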

4.2 Bio-inspired Load Balancing Algorithm

Working: M. Gamal et al. proposed an osmotic hybrid artificial bee and ant colony optimization (OH_BAC) approach in [9] for balancing the load. In a cloud environment, the hosts that remain active are categorized into underloaded, overloaded, and balanced hosts; the system contains no flat hosts. To monitor the state of the system balance, the OH_BAC algorithm is applied: virtual machines (VMs) are moved from overloaded hosts to underloaded hosts to obtain load balance. The algorithm adopts the characteristics of ACO for identifying quick solutions for remote systems, and some attributes of the artificial bee colony (ABC) approach, which spreads information by waggle dancing and thus forms a knowledge base. The authors have



combined both these techniques to sort the physical machines (PMs); the combined information is spread to all the PMs depending on their energy consumption. This method improves on H_BAC, which selects PMs randomly. OH_BAC applies the threshold value dynamically.
Implementation: The CloudSim API was used to implement this approach.
Results: Comparative results for different categories are shown below:

4.2.1 Comparison with the Algorithm that Detects Host Overloading

The proposed method is compared with hybrid artificial bee and ant colony optimization (H_BAC), ABC, ACO, and others in terms of energy consumption, SLA violation (SLAV), SLA violation time per active host (SLATAH), performance degradation due to migration (PDM), number of host shutdowns, and number of VM migrations. The parameters in these experiments are fixed. H_BAC is shown to consume a high amount of energy when compared against ACO and ABC, whereas OH_BAC consumes less energy. As noted in [3], OH_BAC improves on H_BAC by using the osmosis technique, which considers the power consumption of each PM and helps in selecting the one with the least power consumption. The ACO and ABC approaches have comparatively high SLAV values. Also, OH_BAC improves the PDM by combining ACO and ABC to generate a better VM to move to the most convenient PM. The SLATAH value of this approach is higher than that of the other algorithms. OH_BAC reduces the count of active hosts that were occupied and later went to the shutdown state. Thus, this algorithm achieves better results than the other algorithms. OH_BAC activates the best osmotic host to reduce power consumption and considers the energy consumption of each PM among the active hosts. The algorithm improves on H_BAC because the combined ACO/ABC approach prefers the best VM to migrate with the help of a convenient, heavily loaded host. Even though the SLATAH value is higher, it does not affect the overall performance.

4.2.2 Comparison with Bio-inspired Algorithms

Here the number of tasks assigned is not fixed, as the number of tasks increases dynamically. All the previous parameters are considered. In this case also, H_BAC consumes a higher amount of energy than ACO and ABC and has a larger SLAV when compared with the OH_BAC approach. Initially, H_BAC has a lower PDM compared to ACO and ABC. As the load in the cloud increases, its PDM also increases, similar to the ACO algorithm, because of the random selection; in contrast, OH_BAC, ABC, and ACO select the most convenient VM to be moved. Also, OH_BAC has a higher SLATAH value compared with the H_BAC, ABC, and ACO algorithms. The time complexity of OH_BAC is proved to be O(n²).



4.3 Dynamic Load Balancing Algorithm

Working: Pooja B. Mhaske proposed a method in [10] to balance the workload so that all virtual machines (VMs) are treated equally where workload is concerned. The workload on the VMs is governed by the load balancer through elastic materialization and de-materialization of resources. The makespan time is minimized while the number of tasks meeting their deadline is increased. The components are:
Job request handler: Receives the user's request and forwards it to the controller node.
Controller node: Forwards only valid requests for further processing.
Matchmaker: Maintains the information mapping users' job requests to the VMs.
Dynamic task scheduler and load balancer: The dynamic task scheduler receives the matchmaker's mapping list and assigns tasks to the VMs following the scheduling algorithm. These VMs are then monitored by the cloud load balancer, which also calculates the load on each VM.
Virtual instance monitor: Each VM's load is monitored continuously and then dispatched to the load balancer, which passes it on for use by the controller node.
The dynamic load balancing approach follows three algorithms (a sketch follows below):
Algorithm 1: Sorts the tasks considering their deadlines. The author uses two arrays, one holding each task's length and the other its deadline, and applies a sorting algorithm to order the tasks for execution; the goal is to execute the maximum number of jobs, so the task with the earliest deadline is executed first.
Algorithm 2: An efficient task scheduling principle that allocates tasks to VMs and reduces the tasks' makespan time; the matchmaker and dynamic task scheduler cooperate to distribute the tasks effectively to the VMs.
Algorithm 3: Represents the threshold and the overloaded or underloaded condition. If any VM is in an overloaded or underloaded condition, the VMs are sorted in decreasing or increasing order, respectively, before tasks are assigned.
Implementation: The simulation is done on CloudSim.
Results: The algorithm minimizes the makespan time while the deadline-meeting ratio of tasks is increased.
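The following minimal Python sketch illustrates the flavor of Algorithms 1 and 2: deadline-first ordering followed by dispatch to the currently least-loaded VM. The task lengths, deadlines, VM names, and MIPS ratings are invented for illustration and are not taken from [10].

```python
# Hypothetical task records: (name, length in million instructions, deadline).
tasks = [("t1", 4000, 25.0), ("t2", 1500, 10.0), ("t3", 2600, 18.0)]

# Algorithm 1 flavor: earliest deadline first.
by_deadline = sorted(tasks, key=lambda t: t[2])

# Algorithm 2 flavor: assign each task to the least-loaded VM.
vm_finish = {"vm0": 0.0, "vm1": 0.0}     # accumulated busy time per VM
vm_mips = {"vm0": 500.0, "vm1": 1000.0}  # assumed processing speeds

for name, length, deadline in by_deadline:
    vm = min(vm_finish, key=vm_finish.get)      # least-loaded VM right now
    vm_finish[vm] += length / vm_mips[vm]       # execution time on that VM
    met = "meets" if vm_finish[vm] <= deadline else "misses"
    print(f"{name} -> {vm}, finishes at {vm_finish[vm]:.1f}, {met} deadline")
```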

4.4 Throttled Load Balancing Algorithm

Working: Durgesh Patel and Anand S. Rajawat in [4] describe a throttled algorithm in which the virtual machines (VMs) and their states are maintained by the load balancer in an index table. A suitable VM is then identified to perform each assigned job: the client or server request is made to the data center, which consults the load balancer to identify an appropriate VM. The load balancer scans the index table from top to bottom until the first feasible VM is found or the scan completes. The data center does not load the VM until it is found; it then sends its request to that VM. If no suitable VM is available, the load balancer returns −1 to the data center. There are three phases in the execution process. During the first phase, the VMs are formed; they are idle, with no jobs running, waiting for the jobs in the queue to be scheduled by the scheduler. In the second phase, jobs waiting in the queue are processed once assigned to the VMs. In the third and final phase, the cleanup process begins, which is the destruction of the VMs. The throughput of the proposed model is thus the total number of jobs executed within a period, excluding the VM formation and cleanup time. The algorithm fulfills the demand for resources as and when required, which increases job executions and reduces job rejections. The proposed model includes three algorithms, i.e., round-robin, ESCE, and the throttled algorithm. It contains the following components:
1. A hash map table is maintained, containing the current state and expected response times of all the virtual machines.
2. The efficient throttled load balancer checks the received requests forwarded by the data center controller (DCC) and is responsible for allocating resources to the VMs.
3. The hash map table is scanned using the efficient throttled algorithm, checking the status of every VM:
(a) If a VM with minimum load and least response time is encountered, then:
• The VM's id is sent to the DCC.
• The VM receives a request from the DCC.
• The DCC notifies the throttled algorithm of the new allocation.
• The hash map index is updated by the efficient throttled algorithm.
(b) The DCC receives −1 from the throttled algorithm if no VM is found.
4. Once the submitted request is completed:
• The data center controller sends a message to the throttled algorithm regarding the completion of the VM's task.
• The throttled approach updates the hash map table according to the updates received.
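A minimal Python sketch of the index-table behavior described above: scan for the first available VM, return −1 when none is free, and mark the VM available again on completion. The table contents and VM names are invented for illustration.

```python
# States of the VMs as kept by the throttled load balancer's index table.
index_table = {"vm0": "BUSY", "vm1": "AVAILABLE", "vm2": "AVAILABLE"}

def allocate(table):
    # Scan the index table from top to bottom until the first available
    # VM is found; return -1 when no VM is free (as the DCC expects).
    for vm_id, state in table.items():
        if state == "AVAILABLE":
            table[vm_id] = "BUSY"
            return vm_id
    return -1

def release(table, vm_id):
    # Called when the DCC reports that the VM finished its task.
    table[vm_id] = "AVAILABLE"

vm = allocate(index_table)     # -> "vm1", the first available entry
print("allocated:", vm)
release(index_table, vm)       # hash map updated after task completion
```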

Implementation: Cloud Analyst is a GUI tool developed on the CloudSim architecture. Simulation parameters include region, users, data center controller, Internet characteristics, VM load balancer, and cloud app service broker.
Results: Average response time, data center service time, and the overall cost of the various data centers are considered for analyzing the performance of this approach.



4.5 Hybridization of the Meta-heuristic Load Balancing Algorithm

Working: U. Jena et al. in [5] have proposed a hybridization of modified particle swarm optimization (MPSO) and an improved Q-learning algorithm, named QMPSO. It proceeds as follows:
Q-learning algorithm: This algorithm is a type of reinforcement learning from the machine learning field. The performance of the cloud and the quality of service (QoS) are measured by the standard deviation of load (which reflects the balance achieved), throughput, number of migrated tasks, degree of imbalance, idle time, and the processing time of jobs based on the computing power of the VMs. An improved PSO version (IPSO) helps obtain a balanced allocation for many tasks. This is attained by grouping the submitted tasks into individual batches using a dynamic approach; the usage of the allocated resource depends on the batches. The algorithm's efficiency has been evaluated in terms of the degree of imbalance, the standard deviation of load, and the makespan.
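For readers unfamiliar with the Q-learning half of QMPSO, the following tiny tabular Q-learning loop shows the core update rule. The states, actions, rewards, and hyperparameters here are invented stand-ins, not the QMPSO formulation from [5].

```python
import random

alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration
states, actions = range(3), range(2)      # e.g., load levels x candidate VMs
Q = {(s, a): 0.0 for s in states for a in actions}

def step(s, a):
    # Hypothetical environment: a made-up reward plus a random next state.
    r = 1.0 if (s + a) % 2 == 0 else -0.1
    return r, random.choice(states)

s = 0
for _ in range(500):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda a_: Q[(s, a_)])
    r, s_next = step(s, a)
    best_next = max(Q[(s_next, a_)] for a_ in actions)
    # Bellman update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s_next
```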

4.6 Improved Dynamic Load Balancing Algorithm Based on Least-Connection Scheduling

Working: L. Zhu et al. in [6] have proposed an improved approach for dynamic load balancing, described as the enhancement_conn algorithm, which builds on the original least_conn algorithm. Every server's weight is split into two parts: the first describes the server's processing capacity, and the other reflects its memory capacity. The authors calculate a composite load from the number of connections and the response time; the optimal node is the one with the smallest ratio of composite load to weight (a sketch of this selection rule follows below). Calculations of related indicators, such as a node's carrying capacity, load conditions, node weights, and composite load, together with identification of the optimal node, help analyze the algorithm's performance. As the number of concurrent connections escalates, the average response time also rises.
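A minimal Python sketch of the node-selection rule: compute a composite load per server and pick the node with the smallest composite-load-to-weight ratio. The node records and the mixing coefficients w_conn/w_resp are assumptions for illustration, not the paper's exact formula.

```python
# Hypothetical node records: active connections, recent response time,
# and a static weight reflecting the server's capacity.
nodes = {
    "s1": {"conns": 120, "resp_ms": 80.0, "weight": 4.0},
    "s2": {"conns": 45,  "resp_ms": 95.0, "weight": 2.0},
    "s3": {"conns": 60,  "resp_ms": 60.0, "weight": 3.0},
}

def composite_load(n, w_conn=0.7, w_resp=0.3):
    # Composite load mixes connection count and response time; the mixing
    # coefficients are assumed values, not taken from [6].
    return w_conn * n["conns"] + w_resp * n["resp_ms"]

# Optimal node = smallest ratio of composite load to node weight.
best = min(nodes, key=lambda k: composite_load(nodes[k]) / nodes[k]["weight"])
print("dispatch next request to:", best)
```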

4.7 Load Balancing Algorithm for Allocation of Resources

Working: S. Mousavi et al. in [11] proposed a hybrid algorithm for load balancing that combines teaching–learning-based optimization (TLBO) and the gray wolf optimization (GWO) algorithm. This combination contributes to an increase in throughput by using well-maintained VMs, and the problem of particles getting trapped in a local minimum is solved. The hybrid algorithm's results are verified against particle swarm optimization (PSO), biogeography-based optimization (BBO), and GWO. This proposed approach is a



combination of TLBO and GWO. Since they trade off time against the amount of resources and capacity, these algorithms are known as approximation methods. The hybrid algorithm thus works to increase the process's speed, improve local optimization, and improve accuracy. The proposed algorithm starts from an initial state containing a uniformly distributed initial population and a basic solution. Each wolf represents a candidate solution. The wolves are divided into three divisions: α, β, and γ. Based on the fitness function, every category contributes a leading solution, and a meaningful fitness value is computed. Every equation present in GWO updates the wolf locations; based on the updated values, new positions are obtained, and the values of the β and γ classes are updated as well. Thus, a new fitness function is generated for the wolves, and three new categories of wolf groups are computed, which can lead to a better solution and can be used to improve the proposed algorithm. Initially, the teaching and learning phase of the algorithm obtains its solution from the wolves' leading solutions.
Implementation: The GWO is modeled using an analytical design, and performance is compared against the round-robin, least_conn, and enhancement_conn algorithms. The average response time of the enhancement_conn algorithm is lower than that of the other approaches as the number of concurrent connections increases. MATLAB and the CloudSim tool are used for simulating resource allocation.
Results: The hybrid algorithm achieves the best results compared to all the other methods on unimodal and multimodal functions. The computed outputs shown here concern the unimodal functions. The proposed algorithm can solve tasks that do not contain any local optima.

4.8 Water Flow-Like Load Balancing Algorithm

Working: H. Alazzam et al. in [12] have proposed an algorithm for a powerful load balancer that mimics the behavior of water. The proposed approach is a metaheuristic algorithm for solving optimization problems, resembling water flowing from high terrain to low terrain. Additionally, water flows can branch out to form subflows, which can later merge as the water moves down the landscape. The approach has been used for the traveling salesman problem, bin packing, and other population-based search problems. The algorithm begins with a single solution agent and proceeds to explore neighboring solutions to obtain better flow agents. The search procedure moves from superior to inferior solutions. When many agents gather at a particular location where the subflows merge, the WFA approach reduces the number of repeated searches. The solution search space widens as a result of evaporation; the evaporated water accumulates and causes rainfall. Similarly, the WFA algorithm increases the number of solution agents to initialize and improve the solution space. The algorithm increases the load balancer's efficiency by identifying improved



solution agents that result in a better allocation of cloud computing resources. An important parameter, the fitness function, is considered.
Fitness function: The algorithm is best suited to reducing the cost function; this minimization of the cost function is done while allocating cloud resources to the users' jobs.
Implementation: Cloud Analyst is used as a toolkit for modeling this approach. The proposed algorithm is compared to the genetic algorithm, round-robin, and min–min algorithms in terms of migration time, response time, and throughput. The proposed method achieved the minimum response time, whereas the round-robin algorithm had the maximum. The GA and the min–min algorithm were noted to achieve the same average response time.

4.9 First Fit Decreasing Method

Working: P. H. Raj et al. in [13] have proposed an algorithm in which all incoming jobs have equal importance. The heap sort method is used to arrange the user jobs in decreasing order. The queue manager, which retains all the resources required by the cloud, allocates the resources requested by the jobs and maintains a record of the currently running jobs. In this approach, the load across the resources is balanced with the help of bin packing's first fit decreasing (FFD) arrangement. Bin packing is an NP-hard combinatorial optimization problem that focuses primarily on fitting a set of items into the least number of bins. Best-fit and worst-fit methods can also be used, but some amount of space may still be wasted. In the first-fit approach, a new bin is used only when an object does not fit into any open bin; space wastage is kept low since, in a good packing, only suitable items are placed into the bins.
Results: This approach uses the offline first fit decreasing method to handle the load efficiently by distributing it equally across the available processors. The server's utilization and throughput are improved such that its vacant space is reduced considerably. The algorithm can be useful for variable-sized processors in the server; a sketch follows below.
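A compact Python implementation of first fit decreasing for illustration; the job sizes and bin capacity are invented, and Python's built-in sort stands in for the heap sort mentioned in [13].

```python
def first_fit_decreasing(jobs, capacity):
    """Pack job sizes into as few bins of the given capacity as possible.

    Jobs are sorted in decreasing order, and each job goes into the first
    bin with enough room; a new bin is opened only when no open bin fits.
    """
    bins = []        # remaining free space per bin
    assignment = []  # (job, bin index) pairs
    for job in sorted(jobs, reverse=True):
        for i, free in enumerate(bins):
            if job <= free:
                bins[i] -= job
                assignment.append((job, i))
                break
        else:
            bins.append(capacity - job)   # open a new bin for this job
            assignment.append((job, len(bins) - 1))
    return bins, assignment

free_space, placed = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10)
print(len(free_space), "bins used; leftover space per bin:", free_space)
```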

4.9.1 Based on a Virtual Machine's (VM) Capacity

Working: R. Kaur et al. in [14] described that when tasks are submitted, a load balancer checks the processing power currently being used by each VM. Once the load balancer identifies the VM with the greater processing capacity, the arriving task is allocated to that particular VM; as also noted by Y. Jiang, the VM with the greater processing capacity is the one with the lower power usage rate. The arriving task is then dispatched to the chosen VM. Once the assigned task is fulfilled, the load balancer updates the VM's power. The arrangement of the tasks is made based



on first come, first served. The VM continues processing the tasks until all the jobs are performed successfully.
Implementation: CloudSim simulation with the NetBeans IDE was used to implement this approach.
Results: When this approach was applied to different sets of tasks, the variation in response time, its average, and the turnaround time between the round-robin and heuristic approaches were noted to analyze the performance of each task.
Disadvantage: This work does not consider the VMs' MIPS.

4.9.2 Improvised Bat Load Balancing Algorithm

Working: The work done in [15] improvises on the bat method by managing the min_min and max_min algorithms such that a progressively constructive population of bats is generated. These algorithms work as follows (a sketch follows below):
Min–Min algorithm: The completion time of all the tasks is determined, and the tasks with the minimum completion time are preferred. This selection helps arrange the available resources according to their minimum execution time. A graph is plotted to show the makespan time of the virtual machines.
Max–Min algorithm: The maximum completion time of the jobs is calculated; the task with the maximum completion time is then preferred and assigned to the resource with the reduced completion time. A graph is plotted to show the makespan time of the virtual machines.
The approach generates its population using these scheduling approaches: when the two algorithms are applied, a more desirable population is developed, yielding a better optimal result. For the algorithm to work, the pulse frequency, pulse rate, and loudness are initialized, and an optimal result is obtained when the proposed algorithm is applied repeatedly. The algorithm uses echolocation and the frequency of the ultrasonic waves produced by bats. Bats locate their targets and avoid obstacles while flying; for guidance, they generate emissions and waves. The frequency of the sound they generate reduces, and the emission of waves temporarily stops, as they reach their prey.
Results: The proposed heuristic approach can generate an optimal solution as it optimizes the problems and finds multiple solution paths.
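The following Python sketch shows the Min–Min and Max–Min selection rules over an expected-completion-time matrix; the matrix values and VM names are invented for illustration and are not the population-seeding procedure of [15].

```python
# etc[t][v] = expected execution time of task t on VM v (assumed values).
etc = {
    "t1": {"vm0": 4.0, "vm1": 8.0},
    "t2": {"vm0": 6.0, "vm1": 3.0},
    "t3": {"vm0": 9.0, "vm1": 7.0},
}

def schedule(etc, pick_max=False):
    ready = {vm: 0.0 for vm in next(iter(etc.values()))}  # VM ready times
    tasks, plan = dict(etc), []
    while tasks:
        # For each remaining task, its best (minimum) completion time and VM.
        best = {t: min((ready[v] + c, v) for v, c in row.items())
                for t, row in tasks.items()}
        # Min-Min picks the task with the smallest such time; Max-Min the largest.
        chooser = max if pick_max else min
        t = chooser(best, key=lambda k: best[k][0])
        finish, vm = best[t]
        ready[vm] = finish
        plan.append((t, vm, finish))
        del tasks[t]
    return plan, max(ready.values())   # assignment plan and makespan

print("Min-Min:", schedule(etc))
print("Max-Min:", schedule(etc, pick_max=True))
```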

5 Conclusion

One of the vexing problems within the cloud platform is load balancing. In this computing environment, it ensures reliability and availability, and it improves the system's productivity by dividing the workload evenly between competing processes. The primary objective of load balancing is to minimize time and expense and to maximize throughput.



Uniform distribution of the load among all the nodes is one of the most critical challenges faced by cloud environments, and the last few decades have been dedicated to finding an optimal approach to balance the workload. This paper contains a comparative study of load balancing schemes proposed by various researchers. The algorithms' performance has been evaluated based on multiple parameters, such as throughput, resource utilization, and waiting time, and the outcome, advantages, and disadvantages of each algorithm have been brought out. We found that the majority of the algorithms considered in this study work on reducing makespan and increasing throughput and scalability while utilizing resources efficiently. However, fewer studies have focused on reducing energy consumption and carbon emission. This survey will help researchers develop new ideas to create eco-friendly load balancing algorithms that can achieve better performance than the current approaches.

References

1. Jiang, Y.: A survey of task allocation and load balancing in distributed systems. IEEE Trans. Parallel Distrib. Syst. 27(2), 585–599 (2016)
2. Afzal, S., Kavitha, G.: Load balancing in cloud computing—a hierarchical taxonomical classification. J. Cloud Comput. 8(22) (2019)
3. Ala'Anzy, M., Othman, M.: Load balancing and server consolidation in cloud computing environments: a meta-study. IEEE Access 7, 141868–141887 (2019)
4. Patel, D., Rajawat, A.S.: Efficient throttled load balancing algorithm in cloud environment. Int. J. Mod. Trends Eng. Res. 2(3) (2015)
5. Jena, U., Das, P., Kabat, M.: Hybridization of a meta-heuristic algorithm for load balancing in the cloud computing environment. J. King Saud Univ. Comput. Inf. Sci. (2020)
6. Zhu, L., Cui, J., Xiong, G.: Improved dynamic load balancing algorithm based on least-connection scheduling. In: IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China (2018)
7. Hota, A., Mohapatra, S., Mohanty, S.: Survey of different load balancing approach-based algorithms in cloud computing: a comprehensive review. In: Behera, H.S., Nayak, J., Naik, B., Abraham, A. (eds.) Computational Intelligence in Data Mining. AISC, Singapore (2019)
8. Kaur, G., Kaur, K.: An adaptive firefly algorithm for load balancing in cloud computing. In: Deep, K., et al. (eds.) Proceedings of Sixth International Conference on Soft Computing for Problem Solving. Advances in Intelligent Systems and Computing, Singapore (2017)
9. Gamal, M., Rizk, R., Mahdi, H., Elnaghi, B.E.: Osmotic bio-inspired load balancing algorithm in cloud computing. IEEE Access 7, 42735–42744 (2019)
10. Mhaske, P.B.: Dynamic load balancing algorithm in cloud computing environment. Int. J. Mod. Trends Eng. Res. 7(4) (2020)
11. Mousavi, S., Mosavi, A., Varkonyi-Koczy, A.R.: A load balancing algorithm for resource allocation in cloud computing. In: Luca, D., Sirghi, L., Costin, C. (eds.) Recent Advances in Technology Research and Education. INTER-ACADEMIA 2017. Advances in Intelligent Systems and Computing (2018)
12. Alazzam, H., Alsmady, A., Mardini, W., Enizat, A.: Load balancing in cloud computing using water flow-like algorithm. In: Proceedings of the Second International Conference on Data Science, E-Learning and Information Systems (DATA '19), Article 29. Association for Computing Machinery, New York, NY, USA (2019)
13. Raj, P.H., Kumar, P.R., Jelciana, P.: Load balancing in mobile cloud computing using bin packing's first fit decreasing method. In: Omar, S., Haji Suhaili, W., Phon-Amnuaisuk, S. (eds.) Computational Intelligence in Information Systems. CIIS 2018. Advances in Intelligent Systems and Computing (2018)
14. Kaur, R., Ghumman, N.S.: A load balancing algorithm based on processing capacities of VMs in cloud computing. In: Aggarwal, V., Bhatnagar, V., Mishra, D. (eds.) Big Data Analytics. Advances in Intelligent Systems and Computing, Singapore (2018)
15. Raj, B., Ranjan, P., Rizvi, N., Pranav, P., Paul, S.: Improvised bat algorithm for load balancing-based task scheduling. In: Progress in Intelligent Computing Techniques: Theory, Practice, and Applications. Advances in Intelligent Systems and Computing, Singapore (2018)

Smart Driving Assistance Using Arduino and Proteus Design Tool

N. Shwetha, L. Niranjan, V. Chidanandan, and N. Sangeetha

Abstract In the modern era, the automobile industry has been enhanced considerably by adding more safety features to protect the driver and the vehicle on the road. Accidents occur mainly due to faults in the system or ignorance of the driver. This paper demonstrates a digital framework wherein the sensors are connected to the centralized system through the CAN bus, with the main controller delivering proper alert information to the driver. The primary goal of the proposed system is to make driving more comfortable by providing real-time data such as the status of the traffic signal, vehicle headlight control, leakage of gas in the vehicle, temperature variation, door safety sensors, and vehicle distance sensing. The system is designed to monitor and control gas leakage using control valves, headlight control for high beam and low beam, door safety with an automatic locking mechanism, vehicle-to-vehicle distance maintenance via ultrasonic sensors, and door control via infrared sensors.

Keywords Arduino UNO · CAN · LM35 · LDR · MQ6 · MCP2551 · SRAM · PWM · Ultrasonic sensor · Infrared sensor

N. Shwetha (B) · N. Sangeetha Department of Electronics and Communication Engineering, Dr. Ambedkar Institute of Technology, Bangalore, Karnataka, India N. Sangeetha e-mail: [email protected] L. Niranjan Department of Computer Science and Engineering, HKBK, College of Engineering, Bangalore, Karnataka, India V. Chidanandan Department of Computer Science and Engineering, Dr. Ambedkar Institute of Technology, Bangalore, Karnataka, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_51




1 Introduction

The world is changing very rapidly, and in the same way, various technologies are also improving user comfort, making life easier and fail-proof. The current focus is on vehicle safety and driver comfort [1], and systems are improved continuously to eliminate any faults. This paper concentrates on a framework wherein the driver feels more comfortable during driving, with added control over the vehicle. The system comprises a transceiver unit controlled via the microcontroller, which in turn is connected to the CAN bus [2]. The bus is interlinked with sensors such as an LDR, a temperature sensor, and a gas sensor, together with a buzzer and a light dimmer circuit. The receiver side incorporates an RF module to receive the traffic signal status [1, 3]. Most accidents occur at regular intervals and harm people; this is due to the inattention of the driver, reduced consciousness, not abiding by the traffic signals, or losing control of the vehicle [3]. This paper presents a digital system to caution the driver with information about the traffic signals ahead of the vehicle, along with safety features such as detecting any leakage of gas in gas-powered vehicles [4]. The designed system not only displays messages associated with the traffic signal and any gas leakage; it also switches the vehicle's headlights between high beam and low beam automatically without the driver's intervention [5]. This is an added advantage for the user while driving without any hassle. The control system is placed in the vehicle, where the driver has more communication with the vehicle for safer driving.

1.1 CAN Protocol

The controller area network (CAN) was put forward by the German company Robert Bosch in the 1980s. It became popular for serial communication in automobile applications because it is highly stable and shielded; as an added advantage, it reduced the wiring harness and overall weight, which in turn increased the fuel efficiency of vehicles. Since its beginning, the CAN protocol has gained acceptance in industrial automation and automotive applications, and apart from the automobile industry, it has also been applied in medical equipment, test equipment, and mobile machines. The aim of this application is to utilize the benefits of the CAN bus in industrial applications. As seen in automobiles, most of the communication takes place using the CAN protocol, as shown in Fig. 1, which is used for communication between the systems. The main advantage of this protocol is its very high-speed data transfer between the system nodes in duplex mode [2]. The nodes are connected to the microcontroller in a serial fashion, and the serial bus transmission protocol used here prevents data corruption. The protocol was designed especially for the automobile industry to reduce the complexity of the circuitry connecting all the control units. During transmission, CAN


Fig. 1 CAN bus

During transmission, CAN reduces the noise and electrical interference introduced on the bus. In the proposed system, the CAN bus is responsible for transmitting data, with all relevant indications, from the transmitter section to the receiver section. Figure 1 shows the CAN bus protocol, wherein data from sensors such as the light sensor, gas sensor, and temperature sensor flow through the CAN nodes to the centralized system. The centralized system is the Arduino controller, where the information is compared with threshold values and the resulting decision is carried out by the controller through the actuators.
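For illustration, a minimal sketch of such a CAN node is given below. It assumes the common MCP2515 controller plus MCP2551 transceiver pairing and the Seeed Studio mcp_can Arduino library; the message ID, payload layout, and chip-select pin are our assumptions, not details from the paper.

#include <SPI.h>
#include <mcp_can.h>

const int SPI_CS_PIN = 10;     // chip-select for the MCP2515 (assumed wiring)
MCP_CAN CAN(SPI_CS_PIN);

void setup() {
  Serial.begin(115200);
  // Initialize the CAN controller at 500 kbps and retry until it is ready.
  while (CAN.begin(CAN_500KBPS) != CAN_OK) {
    Serial.println("CAN init failed, retrying...");
    delay(100);
  }
}

void loop() {
  // Pack three sensor readings (scaled to one byte each) into a CAN frame.
  unsigned char frame[3];
  frame[0] = analogRead(A0) >> 2;      // temperature channel
  frame[1] = analogRead(A1) >> 2;      // light channel
  frame[2] = analogRead(A2) >> 2;      // gas channel
  CAN.sendMsgBuf(0x100, 0, 3, frame);  // standard frame, hypothetical ID 0x100
  delay(100);
}

On the receiving node, CAN.checkReceive() and CAN.readMsgBuf() from the same library would unpack the frame before threshold comparison.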

2 Proposed System
In the proposed system, an Arduino Uno controller is used at both the transmitter and the receiver, as shown in Fig. 2a, b. The traffic signal status is transmitted by an Arduino Uno through the RF module [5]. The first transmitter section transmits the status of the traffic light through the RF module to the receiver section, which compares the received information with the relevant traffic updates. The information is presented to the user via the LCD module [6]. The second transmitter is interfaced with a gas sensor to detect gas leakage in the enclosed environment, and a light sensor is used to detect the glaring effect of light from incoming vehicles in order to control the high beam and low beam of the vehicle. The ultrasonic sensor indicates and helps control the vehicle spacing for proper vehicle-to-vehicle distance maintenance. The infrared sensor indicates the status of the vehicle doors as an add-on feature of the system. At the receiver side, an alert system cautions the driver about gas leakage, door status, and other sensor information [8]. Both the transmitter and the receiver are connected via the bus to exchange information at high speed. The proposed system uses RF as a medium to convey the information to the user conveniently, to avoid accidents caused by gas leakage, the glaring effect of vehicles, and rising cabin temperatures of which the user may be unaware.
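A minimal sketch of the traffic-light status broadcast is given below, assuming the RadioHead RH_ASK driver for low-cost 433 MHz ASK modules; the paper does not name its RF library, and the payload format is hypothetical.

#include <RH_ASK.h>
#include <SPI.h>       // RadioHead expects SPI.h on some cores

RH_ASK rf;             // AVR defaults: 2000 bps, RX pin 11, TX pin 12

void setup() {
  rf.init();           // start the ASK driver
}

void loop() {
  // Hypothetical payload encoding the current signal phase.
  const char* status = "SIGNAL:RED";
  rf.send((uint8_t*)status, strlen(status));
  rf.waitPacketSent();
  delay(1000);
}

On the receiver, rf.recv(buf, &len) from the same driver returns the latest status string, which is then shown on the LCD.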


Fig. 2 Proposed system: a receiver section; b transmitter section

3 Hardware Components
3.1 Arduino UNO
The Arduino UNO is based on the 8-bit ATmega328P microcontroller, which operates at 5 V. The recommended input voltage is 7–12 V, with absolute limits of 6–20 V. It has 14 digital I/O pins, of which 6 deliver PWM output. It has 6 analog input pins, a DC current of 40 mA per I/O pin, and 50 mA available from the 3.3 V supply. The controller provides 32 KB of flash memory, 2 KB of SRAM, 1 KB of EEPROM, and a 16 MHz clock frequency. The Uno has many features for communicating with other Arduinos, personal computers, and other microcontrollers. The ATmega328P offers UART communication, which converts TTL to the preferred voltage levels for communicating with other devices through the Tx and Rx pins. The polyfuse on the Uno protects the USB ports of computers during short circuits and overcurrent surges, forming an extra layer of protection for the system: when the current exceeds 500 mA, it opens the connection until the overcurrent or short is removed.


Fig. 3 LM35 temperature sensor

3.2 LM 35
Figure 3 shows the LM35 temperature sensor, a precision integrated-circuit temperature sensor. Its output varies with the temperature around it, measuring temperatures from −55 to 150 °C. The unit has 3 pins and is easily integrated with microcontrollers through the ADC built into the system. The sensor output is connected directly to the controller through an analog pin, as it gives a linear temperature response over its full range. It draws only 50–60 µA and exhibits minimal self-heating. Applications of the LM35 temperature sensor include battery temperature monitoring, temperature measurement for HVAC applications, and thermal shutdown for a circuit.
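Because the LM35 outputs 10 mV/°C linearly, reading it on the Uno is a one-line conversion. A minimal sketch, assuming the sensor is wired to A0 and the default 5 V ADC reference:

const int LM35_PIN = A0;   // assumed analog pin

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(LM35_PIN);              // 10-bit result, 0..1023
  float millivolts = raw * (5000.0 / 1023.0);  // ADC counts -> millivolts
  float tempC = millivolts / 10.0;             // LM35: 10 mV per deg C
  Serial.println(tempC);
  delay(500);
}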

3.3 LDR Photoresistor
The LDR sensor shown in Fig. 4 responds to the intensity of light falling on the device, converting it into a corresponding electrical signal. It works on photoconductivity: the resistance of the sensor decreases as the light intensity rises and increases as the light intensity falls. This unit is incorporated so that it can switch the high beam and low beam of the vehicle headlights automatically, making night driving more comfortable for the driver (Fig. 5).
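A sketch of the automatic beam dipping this enables is given below; the divider wiring, pin numbers, and glare threshold are assumptions to be tuned on the actual vehicle:

const int LDR_PIN = A1;          // LDR in a voltage divider (assumed)
const int HIGH_BEAM_PIN = 7;     // drives the high-beam relay (assumed)
const int GLARE_THRESHOLD = 600; // tuned experimentally

void setup() {
  pinMode(HIGH_BEAM_PIN, OUTPUT);
}

void loop() {
  int light = analogRead(LDR_PIN);
  // Oncoming headlights raise the reading; dip to low beam to avoid glare.
  digitalWrite(HIGH_BEAM_PIN, light > GLARE_THRESHOLD ? LOW : HIGH);
  delay(50);
}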


Fig. 4 Light-dependent resistor

Fig. 5 MQ6 gas sensor

3.4 Gas Sensor
Figure 5 shows the MQ-6 gas sensor, used to detect butane gas leakage inside the vehicle compartment. It is designed to detect flammable gases. Detection starts by initializing the sensor with 5 V, which heats the coil acting as a load resistance connected to the ADC. The change in resistance caused by the gas present is determined by the controller, compared with threshold values, and the decision is displayed to the user via the LCD. The sensor information is handled within the first controller, which helps the controller to
Fig. 6 MCP2551 transceiver


respond much faster than the secondary controller. The controller regulates the flow of gas to the vehicle through an electronic valve; if any anomaly is detected, the controller shuts off the gas supply via this valve (a minimal detection sketch follows the feature list). The features of the MQ-6 gas sensor are:
• The operating voltage is +5 V.
• It is specially suited to detecting LPG or butane gas.
• The analog output voltage ranges from 0 to 5 V.
• The digital output is TTL-logic compatible.
• The preheat duration is around 20 s.
• It can be used as a digital or an analog sensor.
• A potentiometer is used to vary the sensitivity of the digital pin.
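The sketch below illustrates the threshold-and-shutoff logic described above. The ADC threshold, pin assignments, and valve driver are assumptions, and the MQ-6 needs its roughly 20 s preheat before readings are meaningful:

const int MQ6_PIN = A2;         // analog output of the MQ-6 (assumed)
const int VALVE_PIN = 8;        // solenoid-valve driver (assumed)
const int BUZZER_PIN = 9;       // audible alert
const int GAS_THRESHOLD = 400;  // assumed ADC threshold, calibrate on site

void setup() {
  pinMode(VALVE_PIN, OUTPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  digitalWrite(VALVE_PIN, HIGH);  // valve open by default
  delay(20000);                   // MQ-6 preheat period
}

void loop() {
  if (analogRead(MQ6_PIN) > GAS_THRESHOLD) {
    digitalWrite(VALVE_PIN, LOW);    // shut off the gas supply
    digitalWrite(BUZZER_PIN, HIGH);  // caution the driver
  } else {
    digitalWrite(BUZZER_PIN, LOW);
  }
  delay(200);
}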

3.5 MCP2551 Transceiver
Figure 6 shows the MCP2551 transceiver module, a high-speed CAN device. It is a fault-tolerant device that serves as the interface between a CAN protocol controller and the physical bus. The unit provides differential transmit and receive capability for the CAN controller and is compatible with the ISO 11898 standard. It can transmit and receive at rates of up to 1 Mb/s, which is more than enough for the controller to take decisions in emergency situations. It serves as a buffer unit between the CAN bus and the microcontroller.

3.6 Beeper
Figure 7 shows the beeper, which is used to give an audible signal to the user in critical situations. The beeper is an electromechanical device: when a voltage is applied, it starts vibrating, producing the buzzing sound. An audible alert makes it easier for the user to respond to a situation than visual information alone, and most users prefer an audible signal over visual information while the vehicle is running. The buzzer therefore alerts the user in real time for a better driving experience.
Fig. 7 Beeper


Fig. 8 16 × 2 liquid crystal display

3.7 LCD
Figure 8 shows the liquid crystal display, which is interfaced with the controller to display information about the surrounding sensor status. It supports up to 16 characters per row and presents the status of the surrounding environment to the driver. The system makes the user more comfortable by displaying information about the gas condition and the cabin temperature, and it also indicates the status of the headlights during the daytime.
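Driving such a 16 × 2 module from the Uno uses the standard Arduino LiquidCrystal library; the wiring below (rs, en, d4–d7) is an assumed example:

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // rs, en, d4..d7 (assumed wiring)

void setup() {
  lcd.begin(16, 2);               // 16 columns x 2 rows
  lcd.print("Drive Assist OK");
}

void loop() {
  float tempC = analogRead(A0) * (5000.0 / 1023.0) / 10.0;  // LM35 on A0
  lcd.setCursor(0, 1);            // second row
  lcd.print("Cabin: ");
  lcd.print(tempC, 1);            // one decimal place
  lcd.print((char)223);           // degree symbol on HD44780 displays
  lcd.print("C ");
  delay(1000);
}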

3.8 Ultrasonic Sensor
Figure 9 shows the ultrasonic sensor. The sensor directs ultrasonic waves from an emitter toward a sensing object and receives the reflected waves through its detector. After reception, the measurement is compared with a threshold value and the nearby object is identified. The distance is measured by determining the time elapsed between the transmitted signal and the received echo, which varies with the distance of the object. The ultrasonic sensor measures the vehicle-to-vehicle distance in real time and cautions the driver about approaching and leading vehicles, helping users keep a safe distance and thereby avoid accidents.
Fig. 9 Ultrasonic sensor
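The time-of-flight calculation translates to a short sketch, assuming an HC-SR04-style module and a speed of sound of about 343 m/s; the pins and safe-distance threshold are illustrative:

const int TRIG_PIN = 6;                 // assumed wiring
const int ECHO_PIN = 5;
const float SAFE_DISTANCE_CM = 100.0;   // assumed following distance

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(TRIG_PIN, LOW);
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // round-trip time
  return us * 0.0343f / 2.0f;  // halve: sound travels out and back
}

void loop() {
  if (readDistanceCm() < SAFE_DISTANCE_CM) {
    // Too close: caution the driver, e.g. sound the beeper.
  }
  delay(100);
}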


Fig. 10 Infrared sensor

3.9 Infrared Sensor
Figure 10 shows the infrared sensor. The IR sensor uses a pair of transmitter and receiver units. The transmitter emits a signal at a fixed frequency, and if an object is present nearby, the reflected signal is received at the receiver side. Here, the transmitter is a Tx tube and the receiver is an Rx tube. The receiver tube picks up the reflected signal and produces a digital output at a level set via a potentiometer, and this logic output is fed to a transistor to drive the load. A green LED indicates the output, going high when an object is detected, and the detection range of the sensor can be adjusted with the potentiometer. The sensors are placed in the vehicle doors to detect whether a door is closed properly, and this information is conveyed to the user in case the door-locking mechanism fails.

4 Flow Diagram of the System
Figures 11 and 12 show the flow diagrams of the Tx (transmitter) section and the Rx (receiver) section. The transmitter section is initialized from the controller along with the other peripherals such as the ADC, LCD, and GPIO, and the CAN bus for both transmitter and receiver is initialized. At the receiver side, the LDR and gas parameters are read and compared with their threshold values; if any parameter deviates, the output is displayed on the LCD with an audible signal. The values from the gas, temperature, ultrasonic, and infrared sensors are calibrated before initialization. The threshold values are set and monitored in real time, and if any sensor value crosses its preset value, the user is informed via the LCD.
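A schematic receiver main loop mirroring Fig. 12 is sketched below; the thresholds and pins are placeholder assumptions, and in the full system the readings would arrive over the CAN frame shown earlier rather than from local analog pins:

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // assumed wiring
const int BUZZER_PIN = 9;
const float TEMP_LIMIT_C = 45.0;  // assumed cabin limit
const int GAS_LIMIT = 400;        // assumed ADC threshold

void setup() {
  lcd.begin(16, 2);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  float tempC = analogRead(A0) * (5000.0 / 1023.0) / 10.0;  // LM35
  int gas = analogRead(A2);                                 // MQ-6

  bool alert = (tempC > TEMP_LIMIT_C) || (gas > GAS_LIMIT);
  digitalWrite(BUZZER_PIN, alert ? HIGH : LOW);  // audible caution

  lcd.setCursor(0, 0);
  lcd.print(alert ? "ALERT: check gas" : "Status: normal  ");
  delay(200);
}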

Fig. 11 Flowchart of transmitter section



Fig. 12 Flowchart of receiver section

5 Experimental Results
The results were obtained using the Arduino serial port interface. The values of all the sensors were compared with real-time and theoretical values; the comparison was extended to different sensors with different threshold values, and the results were recorded. Figure 13 shows the variation of the resistance of light-dependent resistor LDR1 in real time.


Fig. 13 Resistance variation of LDR1

Figure 14 shows the variation of the resistance of light-dependent resistor LDR2 in real time. Figure 15 compares the resistance variations of light-dependent resistors LDR1 and LDR2 in real time. Figure 16 shows the variation of the coil resistance of gas sensor MQ6 in real time, and Fig. 17 shows the same for gas sensor MQ9. Figure 18 compares the coil resistance variations of the MQ6 and MQ9 gas sensors in real time. Figure 19 shows the vehicle front-end distance readings in real time from the ultrasonic sensor (US), front center sensor (FC). Figure 20 shows the vehicle rear-end distance readings in real time from the ultrasonic sensor (US), rear center sensor (RC).

Fig. 14 Resistance variation of LDR2


Fig. 15 Comparison of LDR1 and LDR2

Fig. 16 Coil resistance variation of gas sensor MQ6

Fig. 17 Coil resistance variation of gas sensor MQ9



Fig. 18 Comparison of coil resistance of gas sensor MQ6 and MQ9

Fig. 19 Front-end ultrasonic sensor readings

Fig. 20 Rear-end ultrasonic sensor readings



Figure 21 shows the vehicle front left door (FDL) infrared sensor readings in the real-time setup. Figure 22 shows the front right door (FDR) sensor readings, Fig. 23 the rear left door (RDL) readings, and Fig. 24 the rear right door (RDR) readings.

Fig. 21 Front left door infrared sensor readings

Fig. 22 Front right door infrared sensor readings


Fig. 23 Rear left door infrared sensor readings

Fig. 24 Rear right door infrared sensor readings

6 Conclusion
In the proposed system, the CAN bus is used for communication to advance the automobile control system. It is used to control the vehicle and alert the driver in emergency situations. The status of the sensors is displayed on the LCD, and a buzzer sounds whenever a real-time emergency occurs. Status information related to temperature, gas leakage, and glare reduction is displayed on the LCD. Throughout the experimental process, the main concentration was on the comfort of the driver and on controlling the vehicle safely by continuously observing all the parameters. The distance between the


vehicles is maintained by using the ultrasonic sensors, which also inform the driver in real time. The infrared sensors detect any fault in the door-locking mechanism while the vehicle is running. It was also observed that sensor information delivered over the CAN protocol reaches the user much faster than over conventional point-to-point wiring. This is a contribution to the automobile industry for the safety of the passenger and the driver, minimizing accidents due to faults by the driver or faults from the vehicle. A disadvantage is that many transmitters are needed to convey the traffic signal from one end to another. Nevertheless, considering the safety of drivers and vehicles, the system is a worthwhile addition for reducing accidents that occur knowingly or unknowingly in the near future.

7 Future Scope
The proposed system is designed not only for gasoline vehicles; it can be implemented on electric vehicles without much modification of the existing protocol. A further advantage is that it can be incorporated in any vehicle, from small cars to heavy trucks, in the near future with minor changes to the number of sensors.

References
1. Niranjan, L., Sreekanth, B., Suhas, A.R., Mohan Kumar, B.N.: Advanced system for driving assistance in multi-zones using RF and GSM technology. Int. J. Eng. Res. Technol. 3(6) (2014)
2. Yin Mar Win Kyaw, Myo Maung Maung: Implementation of CAN-based intelligent driver alert system. Int. J. Sci. Technol. Res. 5(6) (2016)
3. Shwetha, N., Niranjan, L., Sangeetha, N., Anshu Deepak: Performance analysis of smart traffic control system using image processing. Int. J. Res. Anal. Rev. (IJRAR) 7(3), 71–77 (2020). E-ISSN 2348-1269, P-ISSN 2349-5138. http://www.ijrar.org/viewfull.php?&p_id=IJRAR19L1890, http://www.ijrar.org/papers/IJRAR19L1890.pdf
4. Vijayalakshmi, S.: Vehicle control system implementation using CAN protocol. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2(6) (2013)
5. Niranjan, L., Suhas, A.R., Chandrakumar, H.S.: Design and implementation of self-balanced robot using Proteus design tool and Arduino-Uno. Indian J. Sci. Res. 17(2), 556–563 (2018). ISSN 2250-0138 (Online). https://doi.org/10.13140/RG.2.2.22205.69604
6. Niranjan, L.: WSN based advanced irrigation vehicle operated using smartphone. Int. J. Eng. Res. Electron. Commun. Eng. 4(6) (2017). ISSN 2394-6849 (Online)
7. Vikas Kumar Singh: Implementation of 'CAN' protocol in automobiles using advance embedded system. Int. J. Eng. Trends Technol. (IJETT) 4(10) (2013)
8. Niranjan, L.: Home automation and SCADA. Int. J. Res. Electron. Commun. Eng. 4(6) (2017). ISSN 2394-6849 (Online). https://doi.org/10.13140/RG.2.2.31433.16483

Fog Computing—Characteristics, Challenges and Job Scheduling Survey K. Nagashri, S. Rajarajeswari, Iqra Maryam Imran, and Nanda Devi Shetty

Abstract Fog computing has taken some of the burden off cloud computing because it is located at the edge devices. It consists of independent wireless devices communicating with the network for various applications. It helps to improve performance and resource utilization, since the time taken to send all data to the cloud can be reduced and data can be stored partly in fog nodes at the users' end devices. With rapid technology growth, especially in the Internet of things, this new paradigm is becoming extremely attractive and useful in decreasing response time. The communication between cloud and fog is gaining a lot of attention. This survey paper first gives a brief introduction to fog computing; later, some of the characteristics, challenges, and applications of fog computing are discussed, along with the state of the art and, finally, different algorithms for scheduling jobs in fog computing. Keywords Cloud computing · Fog computing · Job scheduling · Internet of things · Cloud data center · Micro data center

1 Introduction
The world of computing has witnessed a makeover from traditional to the latest computing technology in recent times. Every organization depends on computers and smart phones for its day-to-day tasks. Owing to the demand in the field of the Internet of things (IoT), the applications and the data produced by sensors are increasing extensively. In 2012, researchers at Cisco introduced a paradigm called fog computing, which is an extension of cloud computing [1]. The decentralized computing concept applies to fog computing since, unlike cloud computing, it does not rely exclusively on central components. The high-latency problem of the cloud can easily be overcome by using the resources of idle devices present near the users. Fog computing depends heavily on the cloud for performing complex

K. Nagashri (B) · S. Rajarajeswari · I. M. Imran · N. D. Shetty
Computer Science and Engineering, M S Ramaiah Institute of Technology, Bengaluru, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_52


Fig. 1 Fog computing model

processing [2]. Access control is vital in fog computing for ensuring security; it allows or restricts users' access to the system [3]. Figure 1 shows a simple fog computing model [1]. The essential computational elements of the fog computing paradigm are the edge devices, the fog nodes, and the cloud. Any device with storage, computing, and networking capabilities can act as a fog node. The edge devices include switches, routers, base stations, and mobiles in the end users' vicinity. Between the cloud and the edge devices, the fog nodes, which are responsible for service translation, are managed by the cloud servers.

2 Related Works
In this section, some works done on fog computing are described. Elavarasi and Silas conducted a study on scheduling jobs in fog computing [1]. The paper gives an overview of fog computing, its applications, the issues faced, and research related to fog computing. Naha et al. provide a survey paper which


gives a description of fog computing, trends in research, differences between cloud computing and fog computing, various architectures, research gaps in scheduling, resource allocation, fault tolerance, and drawbacks of existing works, which lay a pathway for future work [2]. Pham and Huh proposed an algorithm to efficiently execute large-scale offloading applications of the user by collaborating with fog devices and cloud nodes [4]. A heuristic-based approach was suggested that maintains a balance between completion time and cost, with tasks for computation allocated strategically at every node of every layer. Mukherjee et al. provide a survey on the basics of fog computing architecture [5]. They present methods to deal with fog computing issues such as services and resource allocation, and they detail the ongoing research trends in the field and the state of the art. Rahman and Chuah summarized the fog computing model, its features, other related paradigms, applications, and the security challenges of fog computing [6]. Kumari et al. gave a survey on related fog computing topics, covering the features of fog and many issues persisting in the system such as privacy and security, resource management, quality of service, and interfacing [7]. Prakash et al. discussed the concepts, applications, and features of fog computing in their paper [8]. Zhang et al. surveyed access control in fog computing along with its security issues, architecture, and characteristics [3]. Harish et al. provided a review of fog computing which highlights many applications such as smart traffic, connected automotive, smart home, and so on [9]. Bonomi et al. put forth fog applications such as the connected vehicle, smart grid, and wireless sensor and actuator networks [10]. Singh et al. proposed a job scheduling algorithm for real-time security-aware scheduling in fog networks [11]. Similarly, Liu et al. proposed a job scheduling algorithm for multiple jobs and lightpath provisioning [12]. Fizza et al. proposed a privacy-aware job scheduling algorithm for a heterogeneous fog environment [13]. Rahbari et al. put forward a security-aware hyper-heuristic job scheduling algorithm for fog [14]. Benblidia et al. introduced a fuzzy logic algorithm for job scheduling in fog by ranking the nodes [15]. In our paper, we discuss the concept of fog, its characteristics, the challenges and applications of fog computing, and the state of the art. Job scheduling in fog computing has not been discussed much in any of these papers; we cover this important concept, which is essential for managing the resources in the system efficiently, by bringing out the different scheduling approaches presented by various researchers.

3 Fog Computing Characteristics In this section, some of the fog computing features are discussed.


3.1 Heterogeneity Support
The fog computing system is composed of heterogeneous nodes [6]: end nodes, set-top boxes, access points, edge routers, and also high-end servers. These nodes are configured for specific duties in specific environments. They include high-performance servers, access points, edge routers, and gateways run by various operating systems with distinct computing power and storage capacities. Fog computing also provides virtualized platforms, where a fog node can be a virtual node such as a virtual network node or a computing node.

3.2 Geographical Distribution
Internet of things (IoT) applications and services in fog computing require widespread deployment, in contrast with centralized cloud computing [3]. Fog nodes are distributed geographically [6]. This plays an important role in providing high-quality data streaming to fast-moving vehicles through access points located along highways and tracks.

3.3 Decentralization
Decentralization is another important feature of fog computing [6]. The fog nodes are self-organizing and deliver real-time IoT applications to end users, since fog computing has no central server for managing computing services and resources.

3.4 Mobility Support
A huge number of mobile devices such as vehicles and smart phones are connected to the network owing to the broad geographical distribution. This makes it essential for fog computing applications to be able to communicate directly with mobile devices [3]. With direct connection of mobile devices using protocols such as the locator/ID separation protocol, mobility support can be ensured [6].


3.5 Proximity to Users
The service demands of users in fog computing are predictable on the basis of the users' location [3]. Consider the example of a user in a mall, who is often interested in restaurants, discount shops, etc. By arranging fog devices in the shopping mall to offer high-rate services, fog computing solves this problem by pre-caching the local contents.

3.6 Real-Time Interaction
Real-time interaction is another characteristic, since fog computing supports it instead of batch processing [6]. Augmented reality, real-time stream processing, and gaming are examples of real-time processing. Because it is close to the edge, fog computing offers information on traffic and status as well as on the condition of the local network.

3.7 Edge Location
Location awareness is implemented in fog computing since the deployment is at the network edge [1]. It provides end devices with good network service, including applications that need low latency, such as gaming. Placing fog devices at the edge makes it more suitable for applications to avoid network traffic and long transfers.

3.8 Low Latency
Data produced by sensors in cloud and IoT environments are usually transmitted to remotely located cloud data centers (cdc) [6]. This results in delays in transmission, in data analysis, and in the response from the cloud to the user; cloud computing therefore incurs a large delay. In a fog environment, the IoT device and the fog computing node are near each other, so the node provides computing services and draws conclusions on the basis of local data without the services of a cloud. Hence, the response delay in a fog environment is smaller than in cloud computing. Table 1 lists some of the differences between cloud and fog computing characteristics [3, 6].

Table 1 Characteristics of cloud versus fog computing

Characteristics        Cloud computing           Fog computing
Deployment             Network core              Network edge
Architecture           Centralized               Distributed
Target user            General Internet users    Mobile users
Node devices           Large servers             Servers running in base stations
Latency and jitter     High                      Low
Scalability            Average                   High

4 Challenges of Fog Computing In this section, some issues faced by fog computing are discussed.

4.1 Models for Programming
Mobile edge computing moves tasks to edge devices, enabling the development of elastic, scalable edge-based mobile applications [1]. When the application is partitioned, the decision to offload is easy. Offloading pursues these objectives: (1) to increase the total computing speed, (2) to conserve energy, (3) to preserve bandwidth, and (4) to ensure low latency.

4.2 Reliability and Security
Authentication of fog nodes at different layers is a security threat [1]. Sandboxed application execution faces a new challenge: privacy and trust. Through fog, the application requires third-party hardware/software to process user data.

4.3 Management of Resources
Resource management determines the end system's usage of resources and assigns resources based on user actions and on the usage frequency during resource consumption [1]. Fog computing focuses on a flexible system, since it does not provide computing and storage capacity as abundant as cloud computing does. Efficient allocation of resources is, therefore, an important subject of study in fog computing.


4.4 Scheduling the Jobs
Job scheduling plays a crucial role in enhancing the fog system's flexibility and reliability [1]. The basic reason to schedule a job within a certain time limit is to find the best solution for completing and executing multiple jobs in the best possible sequence. Job scheduling in fog computing means choosing the best available resources for executing tasks or allocating computing resources.

4.5 Heterogeneity
Since the lowest layer of the fog computing environment comprises numerous end devices such as smart phones, smart watches, virtual sensor nodes, and intelligent devices, in addition to autonomous vehicles and smart home devices, a heterogeneity problem arises in data collection, data formats, and data processing capability [5]. Furthermore, the fog cluster contains gateways, switches, routers, and other devices with varying computing and networking facilities. Heterogeneity is a significant design aspect of the fog computing architecture. Managing various data formats and complex communication protocols to deal with semi-structured or unstructured data is a major challenge, as are managing and coordinating the networks.

4.6 Incentives
Achieving better quality of service (QoS), such as high bandwidth, ample storage capacity, low network delay, and quick computing, is often expensive [5]. Yet pay-as-you-go appears to be an important feature for users sharing their computing and storage resources. In fog computing, suitable monetary control and correct pricing of storage and computing costs for trustworthy collaboration are worth researching.

4.7 Standardization
No standard mechanism exists today by which each network member, such as edge points or terminals, can declare its presence in order to host others' software elements [7]. This is another challenge of fog.


4.8 Fog Servers
Correct positioning of fog servers is necessary for them to provide full service. Evaluating each fog node's demand and function, and analyzing the tasks to be carried out on every server node, before positioning it helps decrease the maintenance costs [8].

4.9 Consumption of Energy
In fog computing, energy consumption may grow large if the number of fog nodes in the fog environment is huge, and since the nodes are distributed, energy efficiency decreases. Reducing the fog nodes' energy demand so that they are more energy-efficient and cost-saving is therefore a challenge [8, 9].

5 Applications This section deals with different applications of fog computing in some fields.

5.1 Smart Traffic Control System
Street lights can be turned on automatically, and lanes can be opened to clear the way for an ambulance to drive across traffic, because a video camera detects the blinking lights of the ambulance [6].

5.2 Video Streaming Systems
Video streaming applications in fog computing allow mobile users to view the latest videos on their screens [6]. Fog computing contributes significantly to effective processing and fast decision making. For instance, rather than transferring a live video stream to a cloud application, several drone video streams can be sent to the closest fog node, where any mobile device, such as a laptop or mobile phone, may act as a fog server.


5.3 The Pressure of Water in Dams The data sent to the cloud through sensors that are mounted in dams are interpreted to alert the officials in case of any anomalies that arise [8]. The issue here is the information delay. Fog is used because the information is easier to send, evaluate, and it also offers direct feedback because it is close to the end systems.

5.4 Health Care
For the transmission of data among hospitals, high data confidentiality is necessary. Fog can provide this, as the data are transmitted locally. Laboratories use these fog nodes to update patients' lab records, which are conveniently accessible from nearby hospitals [8].

5.5 Mobile Big Data Analytics
IoT generates data in such large amounts that saving all of it in the cloud is quite impractical [8]. Therefore, fog computing can be used instead of cloud computing in such cases, because the fog nodes are in close proximity to the end systems. Additional concerns such as lag, congestion, processing speed, data transmission, data processing, delivery time, and response time are thereby avoided.

5.6 Connected Car
Automatic vehicles are the latest development on the road. An automatic steering system allows actual hands-free operation of the vehicle. From testing to launching self-parking features that do not involve a human behind the wheel, fog computing can be the safest option for IoT-enabled vehicles since it communicates in real time. Cars, access points, and traffic lights can communicate with one another, making the road safer for all [9, 10, 16].

6 State of the Art
Fog computing is still a growing technology with a lot of scope for research. It is developing quickly and reaching great heights owing to its edge-level processing


and heterogeneous character. A significant differentiation between cloud and fog computing is that the former follows centralization while the latter is decentralized. A number of research studies carried out in recent years, surveyed in [2], are presented in this section.

6.1 Allocation of Resources and Scheduling in Fog Computing
Some works on resource allocation and scheduling are described below. Aazam et al. suggested a dynamic resource prediction algorithm that incorporates the historical cloud-service customer (CSC) record in fog, based on relinquish likelihood (probability) [17]. The minimum relinquish likelihood is 0.1, which is refined on the basis of the user's history, while a relinquish likelihood of 0.3 is used for new customers for fair resource estimation. The attributes of current and returning consumers are known beforehand; therefore, the likelihood value can be measured effortlessly. This mitigates the underutilization of resources and reduces the risk of business loss. To satisfy QoS and software and hardware specifications, Brogi et al. proposed the FogTorchπ tool, applied before a composite application is deployed in the fog network [18]. The prototype tool uses Monte Carlo simulations and takes only the communication-link QoS into consideration. The presented algorithm is based on preprocessing and backtracking-search phases: the preprocessing phase feeds the backtracking search algorithm, which finds the eligible deployments. Compared with resource consumption and communication links, accessibility and bandwidth are more significant in the fog environment. Taneja and Davy suggested a module mapping algorithm for efficiently deploying IoT applications in the combined fog–cloud environment, aiming at effective utilization of the network infrastructure and resources [19]. Lower-bound searching was used, and comparison functions were applied to find a suitable network node in the fog–cloud. The module mapping algorithm produces a map of nodes suitable for the computation process. The application is deployed near the source device if the processing requires higher speed; for this, parameters such as response time and resource availability have to be considered. Pooranian et al. suggested an algorithm for finding the optimal solution for allocating resources [20]. They modeled the problem as a penalty-aware bin packing problem where the VMs represent packs and the servers represent bins. Each server is penalized or rewarded depending on idle energy, peak frequency, and peak energy, and the VMs are handled depending on time constraints and frequency. A penalty results in the server being disallowed for a few iterations; the server returns to the stream to resume computation once the


iteration freeze has passed. The penalty and reward strategies are introduced to decrease energy consumption, which otherwise increases exponentially.

6.2 Fault Tolerance in Fog Computing
The device failure rate in a fog system is high because of its distributed heterogeneous architecture [2]. Some works on fault tolerance are described below. Abdulhamid and Latiff suggested a cloud scheduling scheme built on a check-pointed league championship algorithm [21]. They used a migration system to handle the execution failure of an independent task: because the system state is saved from time to time, a failed job need not begin from the start but can resume from where it failed. When a task fails, an underloaded VM is allocated, and the league championship algorithm is used to schedule the failed tasks. Jiang and Hsu suggested a two-level standby architecture for cloud server failure management [22]. The system has cold and warm standbys. If a server crashes, the failed server is removed and sent to the repair room, and a warm standby system replaces it; after repair, the system is placed in the cold standby category. The research proposed a model that decides how many warm and cold standby systems are required in the cloud. This method of hardware failure management, however, is not ideal for fog, since the devices usually do not fall within the fog provider's domain. Therefore, task migration seems to be the best option for hardware failure, and in most cases this ought to be reactive, barring situations where the fog system is owned by the provider.

6.3 Mobile Computing Based on Fog
The number of users of smart phones and mobile apps has surged in urban and rural areas alike [2]. As a consequence, smart phone users have been collectively requesting high-volume content, and the large number of simultaneous requests only worsens the situation. One solution is to offload content near the consumers, thereby ensuring good service; mobile fog computing can support this content offloading. Khan et al. investigated content offloading in the mobile fog [23]. The mobile fog is described as co-located, self-organizing mobile nodes offering distributed edge resources. The purpose of the study was to have nodes collaborate on data caching to increase data availability and reduce operational charges. Roman et al. provided a solution to the privacy and security issues of all edge-level computing [24]. It involves a regular thread in a mobile fog network system with additional steps such as authentication, trust, access control, network security, and protocol.


6.4 Tools for Simulating Fog Computing
Simulation and modeling are still at an early stage in fog computing [2]. One work on fog computing simulation is described below. Gupta et al. created iFogSim, the first fog simulation toolkit [25]. The toolkit is used for simulation and modeling of IoT resource management techniques in fog and edge computing paradigms. Designing resource management techniques is the most complex issue; it defines the analytical distribution of applications among edge devices, which increases performance and reduces latency.

7 Job Scheduling Approaches
If only the cloud were used, the latency of applications in this environment would be high. For real-time applications with strict task deadlines, scheduling at the fog devices plays a vital role. This section deals with different job scheduling techniques in fog computing. Singh et al. proposed a real-time security-aware scheduling on the network edge (RT-SANE) algorithm [11]. The algorithm takes into account the deadlines of interactive and batch applications and also considers security constraints. Compared with similar algorithms such as cdc-only and iFogstor, evaluated on real workloads from the Czech CERIT Scientific Cloud system, RT-SANE performs better because it provides a greater success ratio (the fraction of jobs completed successfully). It builds on a distributed orchestration protocol and architecture. RT-SANE selects the cdc and micro data center (mdc) close to the user by considering security tags and network delay. Jobs that a user submits are tagged public, semi-private, or private, and the mdcs and cdcs are divided into untrusted, semi-trusted, and trusted. RT-SANE executes a user's interactive private applications on the local mdc, whereas private batch applications are restricted to run on a private cdc. Public jobs run on a remotely placed mdc or cdc, and semi-private applications run on the local cdc, which can be either public or private. Private jobs thus execute on the local mdc and cdc, which is how job security is ensured. The algorithm addresses both security and performance; straggler jobs, which needlessly hinder the execution of other jobs in the system, are also taken into consideration in evaluating performance. Liu et al. proposed a multiple jobs scheduling and lightpath provisioning (MJSLP) optimization algorithm [12]. It minimizes the average finishing time in the fog computing micro data center (FC-mdc) in a distance-adaptive-modulation elastic optical network environment. It has two main sub-methods. First, mathematical and heuristic models are given to minimize the finish time of a job, which corresponds to the slowest time taken


by a task to complete the job. Then, a linear programming formulation and a heuristic algorithm for routing, modulation level, and frequency slots are suggested on the basis of the single-job completion time. When compared on frequency slots and average completion time with routing-only and scheduling-only algorithms, MJSLP performs better. The mathematical model of single-job completion time is cast as a multi-commodity flow problem, and all the tasks constituting a job are ensured to reach the destination FC-mdc concurrently. The algorithm also takes care of the data storage burden on the FC-mdc. Simulations showed that considerable job completion time was saved, which showed clearly in the results, and distance-adaptive modulation proved to be a good trade-off between power consumption and average completion time. Fizza et al. proposed a privacy-aware scheduling in a heterogeneous fog environment (PASHE) algorithm [13]. The algorithm schedules real-time applications with privacy restrictions in a heterogeneous environment consisting of mdcs and a cdc. Here also, three task classes are considered: public, semi-private, and private. The algorithm identifies the most appropriate location, the local or a remote mdc or the centralized cdc, for executing a job by taking into account job requirements such as the completion deadline and security. The local mdc executes the user's private tasks with tight deadlines; a preferred remote mdc executes semi-private tasks with tight deadlines; and the cdc executes public tasks with loose deadlines. The algorithm makes provision for bandwidth reservation on remote mdcs. User mobility across mdcs is also taken into account: if the user's mobility pattern is predictable, computation resources are reserved on remote mdcs for job execution. Compared with other scheduling algorithms with respect to mdc heterogeneity, user mobility, and application security in a fog computing environment, PASHE exhibited excellent performance in simulations. The evaluations were done using iFogSim, a widely used fog simulator, and the results can be used to support capacity planning of mdcs at the network edge. Rahbari et al. proposed a security-aware scheduling algorithm using a hyper-heuristic approach in fog computing [14]. Scheduling in heterogeneous fog conditions is subject to risks caused by intruders; to prevent this, the algorithm considers security features such as authenticating the user, maintaining integrity, and ensuring confidentiality on fog devices. Implementing these features on a device, however, introduces overhead in time and computation. In their approach, scheduling uses data mining techniques to select heuristic algorithms to run on fog devices. The evaluation parameters are the overheads in CPU utilization, bandwidth, and security. Their proposed algorithm is compared with many heuristic approaches: against particle swarm optimization, the ant colony algorithm, and simulated annealing, the average energy consumption of the


proposed algorithm improves by 62.81, 70.28, and 61.72%, respectively, while the average cost improves by 54.28, 53.84, and 53.92%, respectively. They argue that the execution cost increases when the security risk is reduced, although the simulation time and energy consumption decrease and the resource allocation decision-making ability improves in accordance with the workflow types. Benblidia et al. proposed a scheduling algorithm that considers not only fog–cloud requirements, mainly job completion time and energy consumption, but also user preferences [15]. In this algorithm, the fog nodes are ranked from the highest to the lowest satisfaction level using a linguistic, fuzzy-quantified approach that combines fog requirements and user preferences in polynomial time. The least satisfactory proportion and the greatest satisfactory proportion are the parameters used to differentiate the similarities. The results satisfy the users by honoring their preferences in addition to reducing delay and energy consumption. The authors note that the system could be improved by scheduling distributed tasks among fog nodes while keeping delay and energy consumption to a minimum. Casquero et al. proposed a customized scheduler for the Kubernetes orchestrator, wherein task scheduling is distributed among the processing nodes using a multi-agent system (MAS) platform [26]. Compared with the centralized approach of the default K8s scheduler, this algorithm is shown to be faster, but the evaluation covers only single-pod scheduling; if two or more pods are scheduled, the result may change, so the authors caution against generalizing. It is used in fog-in-the-loop applications. The filtering and ranking of nodes are transferred to the MAS, thereby achieving distributed task scheduling. The algorithm could prove more beneficial when the number of nodes increases and nodes are chosen based on certain priorities to validate the agentified scheduler.
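To make the placement logic concrete, the sketch below distills the RT-SANE/PASHE placement rules described above into a single decision function. The enum names, the Job structure, and the tight-deadline test are our illustrative assumptions, not the papers' actual implementations.

// Schematic privacy- and deadline-aware placement (assumed names/types).
enum class Privacy { Public, SemiPrivate, Private };
enum class Site { LocalMdc, RemoteMdc, LocalCdc, RemoteCdcOrMdc, Cdc };

struct Job {
  Privacy privacy;
  double deadlineMs;   // completion deadline
  bool interactive;    // interactive vs batch
};

Site place(const Job& j, double tightDeadlineMs /* assumed threshold */) {
  bool tight = j.deadlineMs <= tightDeadlineMs;
  switch (j.privacy) {
    case Privacy::Private:
      // Private interactive jobs stay on the user's local mdc;
      // private batch jobs are restricted to a private cdc.
      return j.interactive ? Site::LocalMdc : Site::Cdc;
    case Privacy::SemiPrivate:
      // Tight deadlines go to a preferred remote mdc, otherwise local cdc.
      return tight ? Site::RemoteMdc : Site::LocalCdc;
    case Privacy::Public:
    default:
      // Public jobs may run on any remote mdc or cdc.
      return tight ? Site::RemoteCdcOrMdc : Site::Cdc;
  }
}

For instance, place({Privacy::Private, 50.0, true}, 100.0) returns Site::LocalMdc, matching the rule that interactive private jobs never leave the user's local micro data center.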

8 Conclusion
As discussed, fog computing is gaining significant prominence because it shifts data from the cloud to the edge devices, which decreases response time. In this paper, we explained the meaning of fog computing, provided an outline, and discussed some of its characteristics. Since it is still a booming paradigm, some issues and challenges of fog remain, which were put forth here; researchers need to provide appropriate solutions to resolve them. Further, a few applications of fog computing were explored, the state of the art was studied, and, finally, different algorithms for scheduling jobs in the fog environment were analyzed.


References
1. Elavarasi, R., Silas, S.: Survey on job scheduling in fog computing. In: 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, pp. 580–583 (2019). https://doi.org/10.1109/icoei.2019.8862651
2. Naha, R.K., Garg, S., Georgakopoulos, D., Jayaraman, P.P., Gao, L., Xiang, Y., Ranjan, R.: Fog computing: survey of trends, architectures, requirements, and research directions. IEEE Access 6, 47980–48009 (2018)
3. Zhang, P., Liu, J.K., Yu, F.R., Sookhak, M., Au, M.H., Luo, X.: A survey on access control in fog computing. IEEE Commun. Mag. 56, 144–149 (2018)
4. Pham, X., Huh, E.: Towards task scheduling in a cloud-fog computing system. In: 18th Asia-Pacific Network Operations and Management Symposium (APNOMS), Kanazawa, Japan, pp. 1–4 (2016). https://doi.org/10.1109/apnoms.2016.7737240
5. Mukherjee, M., Shu, L., Wang, D.: Survey of fog computing: fundamental, network applications, and research challenges. IEEE Commun. Surv. Tutor. 20, 1826–1857 (2018)
6. Rahman, G., Chuah, C.W.: Fog computing, applications, security and challenges, review. Int. J. Eng. Technol. 7, 1615 (2018)
7. Kumari, S., Singh, S., Radha: Fog computing: characteristics and challenges. Int. J. Emerg. Trends Technol. Comput. Sci. 6, 113 (2017)
8. Prakash, P., Darshaun, K.G., Yaazhlene, P., Ganesh, M., Vasudha, B.: Fog computing: issues, challenges and future directions. Int. J. Electr. Comput. Eng. 7, 3669–3673 (2017)
9. Harish, G., Nagaraju, S., Harish, B., Shaik, M.: A review on fog computing and its applications. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8 (2019)
10. Bonomi, F., Milito, R., Addepalli, S.: Fog computing and its role in the internet of things. In: Proceedings of the MCC Workshop on Mobile Cloud Computing (2012). https://doi.org/10.1145/2342509.2342513
11. Auluck, N., Rana, O., Nepal, S., Jones, A., Singh, A.: Scheduling real time security aware tasks in fog networks. IEEE Trans. Serv. Comput. (2019)
12. Liu, Z., Zhang, J., Li, Y., Bai, L., Ji, Y.: Joint jobs scheduling and lightpath provisioning in fog computing micro datacenter networks. IEEE/OSA J. Opt. Commun. Network. 7, 152–163 (2018)
13. Fizza, K., Auluck, N., Rana, O., Bittencourt, L.: PASHE: privacy aware scheduling in a heterogeneous fog environment. In: 6th International Conference on Future Internet of Things and Cloud (FiCloud), pp. 333–340. IEEE, Barcelona, Spain (2018). https://doi.org/10.1109/FiCloud.2018.00055
14. Rahbari, D., Kabirzadeh, S., Nickray, M.: A security aware scheduling in fog computing by hyper heuristic algorithm. In: 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS), pp. 87–92, Shahrood, Iran (2017). https://doi.org/10.1109/ICSPIS.2017.8311595
15. Benblidia, M.A., Brik, B., Merghem-Boulahia, L., Esseghir, M.: Ranking fog nodes for tasks scheduling in fog-cloud environments: a fuzzy logic approach. In: 15th International Wireless Communications & Mobile Computing Conference (IWCMC), pp. 1451–1457. IEEE, Tangier, Morocco (2019). https://doi.org/10.1109/IWCMC.2019.8766437
16. Fog Computing and Real World Applications. https://www.techiexpert.com/fog-computing-and-real-world-applications. Accessed 10 June 2020
17. Aazam, M., St-Hilaire, M., Lung, C.-H., Lambadaris, I., Huh, E.-N.: IoT resource estimation challenges and modeling in fog. In: Fog Computing in the Internet of Things, pp. 17–31. Springer, Cham (2018)
18. Brogi, A., Forti, S., Ibrahim, A.: How to best deploy your fog applications, probably. In: 1st International Conference on Fog and Edge Computing (ICFEC), IEEE, Madrid, Spain, pp. 105–114 (2017). https://doi.org/10.1109/ICFEC.2017.8
19. Taneja, M., Davy, A.: Resource aware placement of IoT application modules in fog-cloud computing paradigm. In: IFIP/IEEE Symposium on Integrated Network and Service Management (IM), IEEE, Lisbon, Portugal, pp. 1222–1228 (2017). https://doi.org/10.23919/INM.2017.7987464


20. Pooranian, Z., Shojafar, M., Naranjo, P.G.V., Chiaraviglio, L., Conti, M.: A novel distributed fog-based networked architecture to preserve energy in fog data centers. In: 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pp. 22–25. IEEE, Orlando, USA (2017). https://doi.org/10.1109/MASS.2017.33
21. Abdulhamid, S.M., Latiff, M.S.A.: A checkpointed league championship algorithm based cloud scheduling scheme with secure fault tolerance responsiveness. Appl. Soft Comput. J. 61, 670–680 (2017)
22. Jiang, F.-C., Hsu, C.-H.: Fault-tolerant system design on cloud logistics by greener standbys deployment with Petri net model. Neurocomputing 256, 90–100 (2017)
23. Khan, J.A., Westphal, C., Ghamri-Doudane, Y.: Offloading content with self-organizing mobile fogs. In: 29th International Teletraffic Congress (ITC), Genoa, Italy, pp. 223–231. IEEE (2017). https://doi.org/10.23919/ITC.2017.8064359
24. Roman, R., Lopez, J., Mambo, M.: Mobile edge computing, Fog et al.: a survey and analysis of security threats and challenges. Fut. Gener. Comput. Syst. 78, 680–698 (2018)
25. Gupta, H., Dastjerdi, A.V., Ghosh, S.K., Buyya, R.: iFogSim: a toolkit for modeling and simulation of resource management techniques in Internet of Things, edge and fog computing environments. Softw. Pract. Exp. 47, 1275–1296 (2017)
26. Casquero, O., Armentia, A., Sarachaga, I., Pérez, F., Orive, D., Marcos, M.: Distributed scheduling in Kubernetes based on MAS for fog-in-the-loop applications. In: 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1213–1217. IEEE, Zaragoza, Spain (2019). https://doi.org/10.1109/ETFA.2019.8869219

A Review on Techniques of Radiation Dose Reduction in Radiography B. N. Shama and H. M. Savitha

Abstract This document gives an insight into the methods used to decrease the radiation dose in radiography. The risk of developing cancer increases with frequent exposure to imaging diagnostic technologies such as X-ray and CT scans, and the risk is twice as high for pregnant women and children. Reducing the radiation dose thus plays a vital role in protecting public health. The radiation dose can be controlled through hardware or software techniques; here, different software algorithms are analyzed. The focus is on reducing noise and enhancing image clarity by using reconstruction algorithms and filters, which in turn help doctors diagnose patients with a lower radiation dosage. Keywords Radiation · X-ray · CT scan · Image reconstruction

1 Introduction
Diagnostic imaging technologies such as X-ray and computed tomography (CT) scans are very helpful for doctors in identifying their patients' problems, but frequent exposure to these radiations can have many side effects. Research shows that the probability of developing cancer increases with the radiation dosage, and the risk is higher for children and pregnant women; several types of cancer are attributed to diagnostic scans. The main source of noise in medical imaging is secondary radiation, caused by radiation scattered from the X-ray machine and from the object before reaching the film. Noise also depends on the number of photons detected; low-light conditions and limited exposures increase noise in medical images. Reducing the radiation dose increases the noise in the image, and if the noise increases, it becomes difficult for the radiologist to identify the issues in the patient's X-ray. The aim of this paper is to analyze different radiation-reduction methods and reconstruction techniques for good image clarity; noise filtering and reduction techniques are also incorporated.
B. N. Shama (B) · H. M. Savitha
Department of Electronics and Communication Engineering, St Joseph Engineering College, Vamanjoor, Mangaluru, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
I. Jeena Jacob et al. (eds.), Expert Clouds and Applications, Lecture Notes in Networks and Systems 209, https://doi.org/10.1007/978-981-16-2126-0_53


2 Literature Survey

The literature survey covered papers from Springer, Elsevier, the IEEE digital library, and NCBI. The most significant papers are discussed below.

2.1 Model-Based Deep Medical Imaging

In the paper titled "Model-based Deep Medical Imaging: the roadmap of generalizing iterative reconstruction models using deep learning," Jing Chen et al. focus on novel deep learning methods for reconstruction. The authors use deep learning (DL) to accelerate magnetic resonance imaging (MRI) and to reduce the radiation dose in CT and positron emission tomography (PET) [1]. It is demonstrated that DL can accelerate magnetic resonance (MR) reconstruction and lower the radiation dose; the limitation of the approach is that it requires a large set of training data.

Starting from a traditional iterative reconstruction model, the method proceeds in three steps. Step I relaxes the conditions in the model and unrolls the iterations of the reconstruction to discover the main structure of the network. Step II relaxes the data-fidelity constraints of the model, allowing the network to learn the data consistency directly. Lastly, in Step III, the rigid structure of the variables in the model is broken up, and the update sequence itself is learned by the network. Deep learning is introduced through the alternating direction method of multipliers (ADMM) algorithm, yielding three learning stages: ADMM Net-I, ADMM Net-II, and ADMM Net-III. In ADMM Net-I, only the sparse regularization is learned, because data fidelity is enforced by the L2 norm (the difference between the predicted and the captured data at the sampling locations). When the data fidelity in k-space is also learned by the network from the training data, the model becomes more specific.

Figure 1 shows a visual comparison on simulated CT data. The filtered back projection (FBP) reconstruction shows clearly marked artifacts, the ADMM Net-I reconstruction exhibits noticeably fewer artifacts, and the ADMM Net-II and ADMM Net-III reconstructions show no visible artifacts, with the latter preserving finer details. (Artifacts are distortions or dark regions in an image caused during acquisition or processing.) As the size of the training data is increased, performance improves. The remaining challenge is resolving the ambiguity in the reconstruction setup: choosing the sparse transforms, the sparse regularization in the transform domain, the regularization framework, and the structure of the improved model.
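To make the unrolling idea concrete, the following is a minimal sketch, not the authors' implementation, of a fixed number of ADMM stages in which the shrinkage threshold of each stage stands in for a learned weight; the forward operator `A`, the toy measurements, and the soft-threshold `learned_denoiser` are illustrative assumptions.

```python
import numpy as np

def learned_denoiser(x, threshold):
    # Placeholder for a trained network stage; a soft-threshold whose
    # threshold plays the role of a learned per-stage parameter.
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

def unrolled_admm(y, A, n_stages=3, rho=1.0, thresholds=None):
    """Reconstruct x from measurements y ~ A @ x with a fixed number of
    unrolled ADMM stages, in the spirit of ADMM-Net-style models."""
    if thresholds is None:
        thresholds = [0.1] * n_stages            # stand-ins for learned weights
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    H = np.linalg.inv(AtA + rho * np.eye(n))     # data-fidelity solve (small n)
    for k in range(n_stages):
        x = H @ (Aty + rho * (z - u))            # x-update: enforce data fidelity
        z = learned_denoiser(x + u, thresholds[k])  # z-update: learned prior
        u = u + x - z                            # dual-variable update
    return x

# Toy usage on a sparse signal
rng = np.random.default_rng(0)
A = rng.normal(size=(32, 16))
x_true = np.zeros(16); x_true[[2, 7]] = [1.0, -0.5]
y = A @ x_true + 0.01 * rng.normal(size=32)
print(np.round(unrolled_admm(y, A), 2))
```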


Fig. 1 Visual comparison of reconstructions on simulated CT data

In this paper, the authors combine iterative reconstruction with deep neural networks, which increases the potential of both DL and model-based restoration methods for medical imaging. The paper also illustrates image quality enhancement with DL built on iterative reconstruction models.

2.2 Model-Based Iterative CT Image Reconstruction on GPUs

In the paper titled "Model-based Iterative CT Image Reconstruction on GPUs," Amit Sabne et al. target CT image reconstruction on graphics processing units (GPUs). Model-based iterative reconstruction (MBIR) using iterative coordinate descent (ICD) is a CT reconstruction method that produces state-of-the-art image quality [2]. Super-voxels are a recent concept in image processing and computer vision; a super-voxel carries more information than a single voxel. The authors present the first GPU-based design for ICD-based MBIR. The design builds on the recently proposed concept of super-voxels and carefully exploits the three levels of parallelism available in MBIR to improve the utilization of GPU hardware resources.

The MBIR technique is based on the ICD algorithm, which repeatedly updates voxels (3D pixels) in the reconstructed volume. A voxel update requires data that is shared among different voxels, so the ICD algorithm does not trivially admit parallel execution. The iterative nature of the algorithm makes it error resilient, however, and a carefully selected set of voxels allows


them to be updated in parallel. Super-voxels are formed by grouping nearby voxels into large blocks. With these optimizations, the GPU implementation attains a geometric-mean speedup of 4.43x over a modern parallel implementation on a 16-core central processing unit (CPU) [3]. Additional optimizations are analyzed to further increase the performance of GPU-ICD. ICD and its variants had been shown to be robust and fast-converging for MBIR reconstruction, but there was a perception that ICD could not be mapped effectively onto massively parallel GPU hardware; the paper shows that rapidly converging ICD variants can be realized efficiently on GPUs. ICD-based MBIR produces high-quality CT reconstructions but has the limitation of high computational cost.

The authors present GPU-ICD, the first GPU-based, highly parallel algorithm for MBIR. The design exploits the different levels of parallelism in MBIR, namely intra-voxel parallelism, intra-SV parallelism, and inter-SV parallelism. The work relies on a data layout transformation that yields coalesced memory accesses on GPUs, and three levels of GPU-specific optimization were performed to improve image quality and speed.

2.3 Feasibility of Low-Dose Computed Tomography in Patients with Testicular Cancer

In the paper titled "Feasibility of low-dose CT with model-based iterative image reconstruction in follow-up of patients with testicular cancer," Kevin P. Murphy et al. apply low-dose CT to testicular cancer patients. Testicular cancer is a cancer of the male organs that produce male hormones and sperm. Model-based iterative reconstruction combined with low-dose CT has received appreciable investigation and is widely used, and recent work has explored dose reductions in excess of 65%. A prospective study was carried out to assess the performance of MBIR in low-dose CT surveillance of patients with stage-I or stage-II testicular cancer. The low-dose (LD) protocol was designed, in consultation with the manufacturer and using phantom data, to deliver a radiation dose of 20-30% of a conventional CT examination; a second, conventional-dose (CD) protocol was designed to deliver an effective dose (ED) of 70-80% of a conventional CT scan. The LD CT data were reconstructed using a deep iterative reconstruction model. The reconstructions belonging to a given patient were loaded together, and each region of interest (ROI) placed on one dataset was automatically propagated to the corresponding location in the other reconstructions. Mean attenuation and standard deviation (SD) were recorded for all datasets, and the signal-to-noise ratio (SNR) was determined.


Advantages observed for the iterative reconstruction method include reduced image noise and increased spatial resolution [4]. The study analyzed deep model-based iterative reconstruction with reduced-dose CT for patients with early-stage testicular cancer: MBIR enabled a 67% decrease in radiation dose while producing images comparable or superior to conventional-dose studies, without loss of the required diagnostic information.

2.4 High-Performance Model-Based Image Reconstruction

Xiao Wang et al. focus on algorithms that produce high-quality images for general CT scanner geometries, including more recently developed approaches [5]. The limitations of MBIR remain its high computational cost and long running time, which make practical implementations difficult. This paper outlines a new MBIR implementation that greatly decreases the computational cost of MBIR while retaining its benefits. It defines a novel grouping of the scanner data into super-voxels that, combined with a super-voxel buffer, improves prefetching and enables parallelism over voxels, thereby increasing speed.

Image reconstruction methods fall into two groups: direct methods, such as filtered back projection, and iterative methods, such as MBIR. Iterative methods are used to produce high-clarity images in explosive detection systems (EDS), medical imaging, scientific imaging, and materials imaging, and MBIR works well across a wide range of scanner geometries. There are two approaches to MBIR: simultaneous approaches and the iterative coordinate descent approach. Simultaneous approaches work by iteratively projecting the entire volume to be reconstructed into the sinogram space, so called because the trace of a single voxel follows a sinusoidal path in memory. Simultaneous approaches have a list of limitations: to improve convergence, these algorithms must be implemented with preconditioning and custom-tuned for each CT imaging system and geometry.

Work on high-performance CT image reconstruction can be framed as addressing two concerns, raising the degree of parallelism and increasing the locality of reference, and the ICD algorithm poses significant challenges in meeting these goals. Initial efforts to accelerate ICD concentrated on parallelizing the algorithm by relaxing the dependencies between voxel updates; such schemes detect "loosely coupled" voxels that share little or no data. A good solution must therefore provide excellent cache locality while enabling favorable parallel allocations. The authors address the performance challenges in MBIR, propose and explain the concepts of the super-voxel and super-voxel buffer to boost locality and prefetching, and analyze the parallelism feasible in MBIR. They introduce a parallel super-voxel ICD design (PSV-ICD) that simultaneously addresses


the limitations of parallelism and locality, and demonstrates that processing speed can be increased substantially. The performance of the three algorithms mentioned above is studied, the performance issues of MBIR are identified, and parallelism in MBIR is proposed, with the super-voxel and super-voxel buffer introduced to boost locality and prefetching.
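The following is a hedged sketch of the coordinate-descent core that such designs parallelize, assuming a simple least-squares data term; the super-voxel grouping here only illustrates the locality idea, and the GPU scheduling, prior model, and buffering of the actual papers are omitted.

```python
import numpy as np

def icd_reconstruct(y, A, n_sweeps=20, sv_size=4):
    """Iterative coordinate descent for min ||y - A x||^2, visiting voxels
    in super-voxel order (contiguous groups) to improve memory locality."""
    m, n = A.shape
    x = np.zeros(n)
    r = y - A @ x                        # running residual (the sinogram error)
    col_sq = (A ** 2).sum(axis=0)
    super_voxels = [range(s, min(s + sv_size, n)) for s in range(0, n, sv_size)]
    for _ in range(n_sweeps):
        for sv in super_voxels:          # in PSV-ICD these groups run in parallel
            for j in sv:
                step = (A[:, j] @ r) / col_sq[j]   # exact 1-D minimizer
                x[j] += step
                r -= step * A[:, j]      # keep the residual consistent
    return x
```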

2.5 Model-Based Iterative Image Reconstruction

In the paper titled "Model-based Iterative Reconstruction (IR): A Promising Algorithm for Today's Computed Tomography Imaging," Lu Liu reviews radiation reduction using model-based iterative reconstruction. This is a progressive algorithm that incorporates additional model variables to reduce computation and increase scan speed. The paper focuses on studies in the clinical CT field, the fundamentals of MBIR, its dose and noise reduction possibilities, its imaging attributes, and its challenges. Berrington de Gonzalez et al. [6] projected that 29,000 future cancers in the USA would be caused by the CT scans performed in the year 2007, and Brenner and Hall [7] estimated that 1-2% of all cancers in America are caused by CT examinations. In pediatric CT, Pearce et al. [8] reported that the risk of leukemia and of brain cancer tripled in children after cumulative doses of about 50 and 60 mGy, respectively.

Some of the well-known dose reduction methods are dual-source CT scanners, adaptive noise reduction filters, tube current modulation, and prospective cardiac electrocardiography (ECG) modulation.

Dual-source CT scanners use two X-ray sources at right angles to each other, with two detectors. The voltage of one tube is set at 80 kV and the other at 140 kV. This helps doctors characterize different materials such as tissue, bone, and implants with higher precision, and it doubles the speed of the examination, delivering less radiation and producing sharper images. Applications include analyzing tumors and investigating blood vessels.

Adaptive noise reduction filters: adaptive filters are linear filters whose transfer function is controlled by parameters that are adjusted by an algorithm. Noise cancellation is a variant of optimal filtering that produces an estimate of the noise by filtering a reference input, and then subtracts this noise estimate from the primary input containing both signal and noise.

Tube current modulation: dose modulation and reduction methods differ by scanner manufacturer, model, and software version. Automated tube current modulation is an innovative approach to dose reduction; its purpose is to maintain the same image quality regardless of the patient's attenuation characteristics, thereby reducing the radiation dose to patients.
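A minimal sketch of the adaptive noise cancellation idea described above, using the least-mean-squares (LMS) update rule; the tap count, step size, and toy signals are illustrative assumptions rather than values from any scanner.

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=8, mu=0.01):
    """Adaptive noise cancellation: filter the noise *reference* to predict
    the interference in *primary* (signal + noise), then subtract it."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary, dtype=float)
    for i in range(n_taps, len(primary)):
        u = reference[i - n_taps:i][::-1]   # most recent reference samples
        noise_est = w @ u                   # predicted interference
        e = primary[i] - noise_est          # error = cleaned signal estimate
        w += 2 * mu * e * u                 # LMS weight update
        out[i] = e
    return out

# Toy usage: a sine wave corrupted by filtered reference noise
t = np.arange(2000) / 200.0
ref = np.random.default_rng(1).normal(size=t.size)
primary = np.sin(2 * np.pi * t) + 0.5 * np.convolve(ref, [0.5, 0.3], "same")
clean = lms_noise_canceller(primary, ref)
```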


The most recent class of algorithms for radiation reduction is iterative reconstruction, which aims to overcome the limitations of earlier methods. Filtered back projection idealizes the geometry: the X-ray source and detector are treated as points, and each element of the regular three-dimensional grid (voxel) is treated as having no shape or size. Statistical iterative reconstruction operates on the two-dimensional array of raw CT projection data and models the statistical fluctuations in the sinogram. Statistical and model-based methods fold data acquisition, correlation, and update cycles into the reconstruction, which helps doctors diagnose more accurately from the output CT images. An IR algorithm includes three primary components, with an acceptable range defined on the basis of a threshold value; the workflow of the IR algorithm is shown in Fig. 2.

The CT dose index (CTDI) was measured using phantoms. Measurement was based on both objective and subjective image quality: objective image quality is obtained precisely from the CT number and the image noise, while subjective image quality is assessed by radiologists in terms of characteristic features. The chest and abdominal areas were considered as regions of interest. The dose reduction effect decreased in patients with a high body mass index (BMI) compared with patients with a lower BMI.

Different algorithms were analyzed; the observations for the chest area are given in Table 1, those for the abdominal area in Table 2, and those for cardiac CT scanning in Table 3.

Advantages of MBIR include reduced radiation dose, decreased image noise, improved spatial and contrast resolution, and reduced noise compared with both the ASIR and FBP designs. It is, however, computationally more expensive, and it still lacks the level of optimization needed to reach its intended performance.

2.6 Nonlinear Diffusion Method

The frequent use of computerized X-ray imaging in clinical practice has led to overdose cases, particularly in children [9], and the number of examinations has grown exponentially. Dose reduction must also account for patient weight and the number of examinations previously undergone [10]. The authors focus on lowering radiation doses for pediatric applications without degrading the characteristic image quality. Dose reduction is achieved by lowering the tube current in milliamperes (mA), and noise reduction is performed with nonlinear diffusion filters. As the radiation dose is decreased, quantum noise in the image increases, and the acceptability of digital images depends directly on their noise content.

Algorithm


Fig. 2 Flowchart of iterative reconstruction technique

1. Consider an image A acquired with exposure current in milliampere-seconds (mAs). mAs is a measure of the radiation produced (milliamperage) over a set time (seconds) along an X-ray tube.
2. Pass image A through a nonlinear diffusion filter and call the output image A*.
3. Add Gaussian noise to an image B acquired at B mAs to obtain a new image B*, such that B > A.


Table 1 Analysis of different algorithms and observations on chest area

Sl. No. | MBIR | ASIR | FBP
1 | Low streak observation | High streak observation | No comments
2 | High objective noise | Low objective noise | No comments
3 | Graded by radiologist as "diagnostically unacceptable" | Graded by radiologist as "fully unacceptable or probably acceptable" | No comments
4 | Image noise is reduced | No comments | Image noise is not reduced

Table 2 Analysis of different algorithms and observations on abdomen area

Sl. No. | MBIR | ASIR | FBP
1 | Low image noise | High image noise | High image noise
2 | Low objective noise | High objective noise | High objective noise
 | Major subjective image quality improvement | Minor subjective image quality improvement | Minor subjective image quality improvement
3 | Superior at detecting damaged organ | Inferior at detecting damaged organ | Inferior at detecting damaged organ
4 | Graded by radiologist as slightly lower overall performance | No comments | Graded by radiologist as slightly higher overall performance

Table 3 Analysis and observations on cardiac CT scanning

Sl. No. | MBIR | ASIR | FBP
1 | Overall image quality is markedly finer | Overall image quality is not markedly finer | Overall image quality is not markedly finer
2 | Lower subjective noise | Higher subjective noise | Higher subjective noise
3 | Sharpness was improved | Sharpness was not improved | Sharpness was not improved
4 | Contrast-to-noise ratio is increased and image noise is reduced | Contrast-to-noise ratio is decreased and image noise is increased | Contrast-to-noise ratio is decreased and image noise is increased
5 | Signal-to-noise ratio is markedly better | No comments | Signal-to-noise ratio is not good

4. Compare the noise of the image acquired at A mAs with that of B*.
5. Compare the noise of the image acquired at B mAs with that of A*.
6. The nonlinear diffusion functional is implemented as follows:

$$I(u, \beta, \mu, \varepsilon) = \int_{\Omega} \left( \sqrt{\beta^2 + \|\nabla u\|^2} + \frac{\mu}{2}\,(u - I_0)^2 + \frac{\varepsilon}{2}\,\|\nabla u\|^2 \right) dx \quad (1)$$

Here, I₀ is the observed image (with noise), u is the filtered image, μ and ε are constants, and Ω is a convex region of R² constituting the support of the surface u(x, y) that characterizes the image. The first term of the functional (for β = 1) measures the area of the surface representing the image, the second term accounts for the distance between the observed image and the desired result u, and the third term controls the smoothness of the result.

7. Radiation exposure levels were tested with a real human skeleton cast in soft-tissue-simulating material, which has an absorption rate similar to human tissue.
8. Five images of a chest phantom were analyzed at 0.4, 0.5, 0.6, 0.8, and 1 mAs exposure current. The results imply that as the exposure current increases, the noise in the image decreases.
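For illustration, a minimal Perona-Malik-style nonlinear diffusion filter of the kind the authors apply; the conductance function and the values of kappa, lambda, and the iteration count are assumptions chosen for the toy example, not the paper's settings.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Nonlinear diffusion: smooth noise while preserving edges by scaling
    the diffusion with an edge-stopping conductance g(|grad|)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # Perona-Malik conductance
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")         # replicate borders
        dn = p[:-2, 1:-1] - u; ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u;  dw = p[1:-1, :-2] - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy usage on a noisy square phantom
noisy = np.random.default_rng(2).normal(0, 0.05, (64, 64)) + \
        np.pad(np.ones((32, 32)), 16)
smooth = perona_malik(noisy, n_iter=30, kappa=0.15)
```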

Radiation doses were thus reduced for X-ray computed radiography images by means of a nonlinear diffusion filter. The paper reports a positive result for pediatric applications: reduce the dose (mAs) first, and then filter the image with the nonlinear diffusion approach. As future work, ball-scale filtering can be used to obtain a smoother image, and the authors suggest that image quality evaluation for diagnostic acceptance be performed by a radiologist.

2.7 Dose Index and Dose Length Product Level

From 1980 to 2013, the number of scans in the UK increased 20-fold, while in the USA it increased 43-fold [9]. Absorbed-dose rates to body tissue above the minimum dose level increase the probability of cancer; a survey performed in the USA in 2007 implies that 29,000 new cases of cancer were generated by radiology tests [10]. The computed tomography dose index (CTDI) is a radiation exposure index measured in gray (Gy), and the estimated dose depends on patient size. The dose length product (DLP) accounts for the extent of the radiation output along the z-axis, the long axis of the patient.

Algorithm

1. CTDI was obtained as follows:

$$\mathrm{CTDI} = \frac{1}{N \cdot T} \int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\, dz \quad (2)$$

where D(z) is the radiation dose at position z along the z-direction, N is the number of active detectors in each 360° rotation of the X-ray source, and T is the slice thickness.

2. The weighted CT dose index CTDIw was obtained as

$$\mathrm{CTDI_w} = \frac{1}{3}\,\mathrm{CTDI_c} + \frac{2}{3}\,\mathrm{CTDI_p} \quad (3)$$

where CTDIc is the CTDI at the central bore of the head and body phantom, and CTDIp is the mean CTDI measured at the 3, 6, 9, and 12 o'clock positions of the head and body phantom.

3. DLP indicates the patient's overall dose received during an entire CT scan. It is calculated as follows:

(a) For an axial scan:

$$\mathrm{DLP} = \sum \mathrm{nCTDI} \cdot T \cdot N \cdot C \quad (\mathrm{mGy\,cm}) \quad (4)$$

(b) For a helical scan:

$$\mathrm{DLP} = \sum \mathrm{nCTDI} \cdot T \cdot A \cdot t \quad (\mathrm{mGy\,cm}) \quad (5)$$

where nCTDI = CTDIw/mAs, T is the slice thickness in cm, N is the number of slices of the individual protocol, and C is the X-ray tube current in mA.

Parameters considered. Mode: axial or helical. Protocol: head, paranasal sinuses (PNS), pelvis, chest, and abdomen. Scan settings: milliampere-seconds (mAs), slice thickness (T), and length of scan (L). The results indicate that the PNS protocol has marginal mAs, slice thickness, and scan length.

4. The average values of the dose length product and the weighted CT dose index for various scanner models were analyzed in axial and helical modes. The results imply that CTDIw for the head and PNS phantom is higher than for the body phantom in axial mode: the head has a smaller diameter, so the radiation is distributed over a smaller area. In helical mode, CTDIw and DLP were measured for the chest, pelvis, and abdomen; the pelvis shows the lowest CTDIw and DLP, whereas the chest shows the highest CTDIw and the abdomen the highest DLP.
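A small sketch showing how Eqs. (3) and (4) compose in code; all numeric inputs are hypothetical and only illustrate the arithmetic, not measured phantom values.

```python
def ctdi_w(ctdi_center, ctdi_periphery_mean):
    """Weighted CT dose index, Eq. (3)."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery_mean / 3.0

def dlp_axial(n_ctdi, slice_thickness_cm, n_slices, tube_current_ma):
    """DLP for a single axial protocol term, Eq. (4): nCTDI * T * N * C."""
    return n_ctdi * slice_thickness_cm * n_slices * tube_current_ma

# Hypothetical head-protocol numbers, for illustration only
ctdiw = ctdi_w(ctdi_center=40.0, ctdi_periphery_mean=45.0)      # mGy
print(ctdiw, dlp_axial(n_ctdi=ctdiw / 200.0, slice_thickness_cm=0.5,
                       n_slices=40, tube_current_ma=200))
```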

The use of reduced mAs and higher kVp helps reduce the patient's dose while preserving image quality and noise level. Restricting the scan area, rather than scanning the whole skeletal region, and scaling down the number of slices are suggested. Helical mode is preferred over axial mode, and shortening the total scan time also contributes to a lower patient dose [11].


3 Discussion

Different methods for radiation dose reduction were analyzed. Reconstruction algorithms are implemented to provide good image quality, and enhancing image quality with a lower radiation exposure current remains a challenge. The helical scan mode is preferred; if the scan area is minimized, less radiation dose is absorbed, and a shorter scan duration likewise reduces exposure. Image quality evaluation for diagnostic purposes can be done by a radiologist. MBIR is an advanced algorithm that incorporates additional model variables to reduce computation and increase scan speed. Phantoms are used for dosage analysis, and the dosage varies across the head, pelvis, chest, and abdominal areas. Reconstruction using filtered back projection minimizes the blurring effect. Reducing the radiation in dental radiography plays a vital role in pediatric applications and also reduces the radiation absorbed by the thyroid glands. The MBIR literature further suggests that DLP obtained through direct CTDIvol measurements on phantoms can be converted into effective dose using Monte Carlo simulations, as an estimate of the radiation to the patient's whole body. Reconstruction models using deep learning also play a vital role in medical imaging [12]. In an article titled "Using Deep Learning to Reduce Radiation Exposure Risk in CT Imaging," Ryohei Nakayama designed a software technique based on convolutional neural network (CNN) regression that takes ultra-low-dose CT scans as input but creates images similar to a normal-dose CT scan. The method provides the radiologist with an acceptable level of diagnostic information while decreasing the patient's radiation dose by as much as 95% [13].

The surveyed literature is summarized in Table 4.

Table 4 Summary of the surveyed literature

Sl. No. | Paper description | Author | Proposed year | Approach
1 | Generalizing iterative reconstruction models using deep learning | Jing Chen et al. | 2019 | Deep learning: alternating direction method of multipliers design
2 | CT image reconstruction on graphics processing units | Amit Sabne et al. | 2017 | ICD-based MBIR
3 | Low-dose computed tomography with model-based iterative image reconstruction in patients with testicular cancer | Kevin P. Murphy et al. | 2016 | Low-dose protocol (LD) using phantom data
4 | High-performance image reconstruction (model based) | Xiao Wang et al. | 2016 | Parallel super-voxel ICD design (PSV-ICD)
5 | Model-based iterative reconstruction algorithm for CT imaging | Lu Liu | 2014 | Adaptive noise reduction filters and tube current modulation
6 | Deep learning to reduce radiation exposure risk | Ryohei Nakayama | 2021 | Convolutional neural network (CNN) regression
7 | A survey of computed tomography dose index and dose length product level in usual CT | Akbar Aliasgharzadeh et al. | 2018 | Based on CTDI and DLP

4 Conclusion and Future Scope

Reducing the radiation dose results in lower-quality images, making it difficult for radiologists and doctors to diagnose patients; the objective of this preliminary survey is therefore to reduce the radiation dose while maintaining image quality. Noise in the image can be reduced using deep neural networks, and image quality can be enhanced using model-based iterative reconstruction methods. ICD-based MBIR produces high-quality CT reconstructions but carries a heavy computational cost. The first GPU-based, highly parallel MBIR algorithm exploits the different layers of parallelism in MBIR and relies on a data layout transformation that yields coalesced accesses on GPUs, with three levels of GPU-specific optimization performed to improve image quality. Deep model-based iterative reconstruction with reduced-dose CT was analyzed for patients with early-stage testicular cancer: MBIR enabled a 67% dose decrease while generating images comparable or superior to conventional-dose studies without loss of diagnostic adequacy. The concepts of the super-voxel (SV) and super-voxel buffer (SVB) were introduced to raise locality and prefetching, and the parallel super-voxel ICD design addresses the limitations of parallelism and locality together; the experimental results demonstrate that PSV-ICD gives an average 14x speedup on a single core and an average 187x speedup on two Intel Xeon E5 processors with 20 cores in total. Advantages of MBIR include reduced radiation dose, decreased image noise, improved spatial and contrast resolution, and reduced noise compared with both the ASIR and FBP designs; it is, however, computationally more expensive and not yet optimized enough to deliver its full planned performance. Radiation doses were reduced for X-ray computed radiography images by a nonlinear diffusion filter, with positive results for pediatric applications obtained by first reducing the


exposure (mAs) and then filtering the image with the nonlinear diffusion approach. As future work, ball-scale filtering can be used to obtain a smoother image, and image quality evaluation for diagnostic acceptance can be done by a radiologist. The use of reduced mAs and higher kVp helps reduce the patient's dose while preserving image quality and noise level; restricting the scan area relative to the whole skeletal region and scaling down the number of slices also help reduce the radiation dose. Helical mode is preferred over axial mode, and shortening the total scan time contributes further to a lower patient dose. The research gap identified is that hardware techniques for reducing the radiation dose are yet to be analyzed, and software techniques still need to deliver good-quality images at a reduced radiation dose.

References

1. Chen, J., Wan, H., Zhu, Y., et al.: Model-based deep medical imaging: the roadmap of generalizing iterative reconstruction model using deep learning. In: Preliminary work presented at MICCAI, International Conference on Medical Image Computing & Computer Assisted Intervention (2019)
2. Sabne, A., Kisner, S., et al.: Model-based iterative CT image reconstruction on GPUs. ACM J. (2017). 978-1-4503-4493-7/17/02
3. Radiology imaging, Imaging Technology News [Online]. Available: https://www.itnonline.com/channel/radiology-imaging
4. Murphy, K.P., Crush, L., O'Neill, S.B., et al.: Feasibility of low-dose CT with model-based iterative image reconstruction in follow-up of patients with testicular cancer. Eur. J. Radiol. Open 3, 38–45 (2016)
5. Wang, X., Sabne, A., Kisner, S., et al.: High performance model based image reconstruction. ACM J. (2016). 978-1-4503-4092-2/16/03
6. Berrington de Gonzalez, A., Mahesh, M., Kim, K.P., et al.: Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch. Intern. Med. 169, 2071–2077 (2009)
7. Brenner, D.J., Hall, E.J.: Computed tomography: an increasing source of radiation exposure. N. Engl. J. Med. 357, 2277–2284 (2007)
8. Pearce, M.S., Salotti, J.A., Little, M.P., et al.: Radiation exposure from CT scans in childhood and subsequent risk of leukaemia and brain tumours: a retrospective cohort study. Lancet 380, 499–505 (2012)
9. Chandy, A.: A review on IoT based medical imaging technology for healthcare applications. J. Innov. Image Process. (JIIP) 1(01), 51–60 (2019)
10. Manoharan, S.: Improved version of graph-cut algorithm for CT images of lung cancer with clinical property condition. J. Artif. Intell. 2(04), 201–206 (2020)
11. Aliasgharzadeh, A., Mihandoost, E., et al.: A survey of computed tomography dose index and dose length product level in usual computed tomography protocol. J. Cancer Res. Ther. 14(3), 549–552 (2018)
12. Berrington de González, A., Mahesh, M., Kim, K.P., Bhargavan, M., Lewis, R., Mettler, F., et al.: Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch. Intern. Med. 169, 2071–2077 (2009)
13. Using Deep Learning to Reduce Radiation Exposure Risk in CT Imaging. Mathworks.com (2021). [Online]. Available: https://www.mathworks.com/company/newsletters/articles/using-deep-learning-to-reduce-radiation-exposure-risk-in-ct-imaging.html. Accessed 05 Feb 2021

Application of NLP for Information Extraction from Unstructured Documents

Shushanta Pudasaini, Subarna Shakya, Sagar Lamichhane, Sajjan Adhikari, Aakash Tamang, and Sujan Adhikari

Abstract The world is intrigued by data. In fact, huge capital is invested to devise means of applying statistics and extracting analytics from these sources. However, when we examine the studies performed on applicant tracking systems that retrieve valuable information from candidates' CVs and job descriptions, they are mostly rule-based and hardly manage to employ contemporary techniques. Even though these documents vary in content, their structure is almost identical. Accordingly, in this paper we implement an NLP pipeline for the extraction of such structured information from a wide variety of textual documents. As a reference, the textual documents used in applicant tracking systems, CVs (Curricula Vitae) and job vacancy information, have been considered. The proposed NLP pipeline is built with several NLP techniques like document classification, document segmentation and text extraction. Initially, for the classification of textual documents, support vector machine (SVM) and XGBoost algorithms have been implemented. Different segments of the identified document are categorized using NLP techniques such as chunking, regex matching and POS tagging. Relevant information from every segment is further extracted using techniques like Named Entity Recognition (NER), regex matching and pool parsing. Extraction of such structured information from textual documents can help to gain insights and use those insights in document maintenance, document scoring, matching and auto-filling forms.

Keywords NLP · Information extraction · Segmentation · Named Entity Recognition (NER) · GaussianNB · SVM · spaCy

1 Introduction

A document consists of multiple pieces of information, some of which can be very important and some less so. Previously, data was collected in unstructured repositories of text, but with the boom of the Internet, most current data is collected from online platforms [1]. Although data can be found everywhere on the Internet, it is humanly impossible to read all of it and extract the required information. To overcome this limitation, information extraction has received great attention among NLP developers. Information extraction is the process of identifying appropriate information in an input document and converting it into a representation suitable for storage, processing and retrieval via computational methods. The input is a collection of documents (news articles, research papers, reports, emails), and the extracted information is a representation of the relevant content of the source document according to specified criteria [2].

Extracting information from a given textual document is a frequently performed NLP task, and new methods and techniques keep being developed to extract only the valuable information from a given corpus. However, documents never have a fixed format: they can be structured or unstructured, and extracting information from an unstructured format is more challenging because it can take any shape. It is also important to know in advance which types of information are to be extracted, since these vary with the type of information the document contains. Because identifying the important information in all sorts of documents is a demanding task, we decided to target specific ones: CVs and job vacancy information. To be precise, only CVs and job vacancy details related to the IT (Information Technology) field have been used for this research. As both of these document types are widely used for recruitment, our results could help ease the recruitment process and lead the way toward automating the manual selection of the right candidate for a job.

To achieve this goal, we have devised a set of methods for extracting the required information from a given document. These methods extract the major points, such as personal information, educational


background and previous work experience, from a given CV. Similarly, from a job vacancy we extract the job position, skills, responsibilities, education and work experience demanded. The methods applied to achieve this are discussed in the sections below.

2 Literature Review

The recent advances in Natural Language Processing have made it possible to understand difficult patterns in textual corpora. This has led to state-of-the-art results in different NLP tasks such as Named Entity Recognition (NER), text classification, Part-of-Speech (POS) tagging and information extraction. As information extraction deals with extracting informative text from a given document, it has been used for many commercial purposes too, one of which is CV parsing. Several commercial products related to HR automation, such as Sovren [3], Daxtra [4], Akken [5] and many others, perform CV parsing. Several methods have been implemented for CV parsing, namely entity-based, rule-based, statistical and learning-based information extraction methods. In earlier implementations, CV parsing was performed by building a knowledge base from a huge amount of data and retrieving keywords from a new CV by looking them up in the knowledge base [6]. Big data tools combined with text analytics and Named Entity Recognition have also been applied to CV parsing [7], achieving an F-measure of around 95%.

In the paper titled "Application of Machine Learning Algorithms to an online Recruitment System," similar research was done with the vision of making online recruitment effective and efficient. The author tried multiple approaches, including linear regression, the M5 model tree, REPTree, SVM with a polynomial kernel and SVM with the PUK universal kernel, and states that SVM with the PUK universal kernel gave the best result [8].

Canary, an application that leverages NLP to extract medical-related information from documents, applies user-defined grammars and lexicons for information extraction. The major steps include normalization of the text and mapping of acronyms and synonyms, defining a vocabulary of user-specified words, creating grammar rules that combine the words into target phrases, and setting specific conditions for extracting the information. The extracted results can be used for biomedical research and clinical decision support [9].

In the paper "FoodIE: A Rule-based Named-entity Recognition Method for Food Information Extraction" [10], the authors propose a rule-based NER approach, based on computational linguistics and semantic information, for extracting food-related information. Their steps include cleaning the text and using coreNLP and the USAS semantic tagger for POS tagging, which is further


processed to tag the food-related tokens using the USAS semantic tagger and a custom-defined rule. They state that since food tokens are mostly nouns or adjectives, they used this observation to improve the false positive rate by focusing on these POS tags, achieving 97% precision, 94% recall and a 96% F1 score on a test set of 200 documents.

The paper "Using Stanford NER and Illinois NER to Detect Malay Named Entity Recognition" [11] provides a comparison of two NER tagging models, tested on the Malay language spoken in Brunei, Indonesia, Malaysia and Singapore. Altogether, four tags (PERSON, MISC, LOC and ORG) were used, and the authors found that Stanford NER gave the better tagging result, with higher precision and F1 score. In the journal paper "Named Entity Recognition Approaches and Their Comparison for Custom NER Model" [12], Shelar, Kaur, Heda and Agrawal compared NER with spaCy, Apache OpenNLP and TensorFlow, and concluded that the NER model trained with spaCy provided better results.

3 Methods

3.1 Custom spaCy Pipeline

All the tasks performed for parsing a document have been implemented as spaCy pipeline components. spaCy is an open-source Python library widely used for Natural Language Processing (NLP) tasks. We created custom spaCy pipeline components as per our needs and added them to spaCy's NLP pipeline. Figure 1 shows the custom pipeline components we built and added to the pipeline. The tasks of the components are:

• Category Component: identifies the document's type.

Fig. 1 Custom spaCy pipeline components for parsing information


• Segmentation Component: segments the CV into different sections.
• Profile NER Parsing Component: extracts the name and address from a given text.
• Profile Pattern Matching Component: extracts the other personal information from a given text.
• Experience and Education NER Parsing Component: extracts information regarding experience and education.
• Skills Pattern Matching Component: extracts the different skills from a given text.
• Word Embedding Component: extracts the embedding values of the words.

Whenever we pass a text document to our spaCy object, it goes through these pipeline components and returns spaCy's Doc object, whose attributes contain our results. With execution time in mind, we created separate methods that use the separate pipeline components: when we use the Segmentation Component, the remaining pipeline components are disabled. This has vastly improved the overall execution time, as the sketch below illustrates.
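A hedged sketch of how such a custom component can be registered and selectively enabled, assuming spaCy v3's registration API and an installed `en_core_web_sm` model; the naive title matching inside the component is a stand-in for the paper's actual segmentation logic.

```python
import spacy
from spacy.language import Language
from spacy.tokens import Doc

Doc.set_extension("sections", default=None)        # holds segmentation output

@Language.component("segmentation_component")      # spaCy v3 registration
def segmentation_component(doc):
    sections, current = {}, "profile"
    for token in doc:
        if token.text.lower() in {"education", "experience", "skills"}:
            current = token.text.lower()            # naive title detection
        sections.setdefault(current, []).append(token.text)
    doc._.sections = sections
    return doc

nlp = spacy.load("en_core_web_sm")                  # assumes the model is installed
nlp.add_pipe("segmentation_component", last=True)

# Run only the component we need, disabling the rest for speed
with nlp.select_pipes(enable=["segmentation_component"]):
    doc = nlp("John Doe Education BSc Computing Skills Python spaCy")
print(doc._.sections)
```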

3.2 Document Categorization

A document can be of different types, and it is essential for the system to know whether the examined data is appropriate. As we focus on CV and job vacancy documents, we created a classification model to identify a given document, dividing documents into three classes: CV, job vacancy detail, and others, where "others" means any document that does not fall under the first two classes. A total of 10,670 documents were used, of which 3754 were CVs, 3512 were job vacancy details and 3404 were others, such as news articles and training certificates; 75% of the data was used to train the model and the remainder for testing. The training data was preprocessed by tokenizing, removing stopwords and unwanted characters (punctuation, emails and bullet points), lemmatizing the words and converting them to lowercase. The data was largely linearly separable, so the SVM model identified most of the classes accurately, with an accuracy of 98.7%. We also tried other ML algorithms, such as Naive Bayes and Random Forest, but SVM outperformed them; a sketch of such a pipeline follows below. After the identification of the document type (either CV or job vacancy detail), the remaining major tasks are segmentation and information extraction.
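A minimal sketch of such a three-class document classifier; the TF-IDF features, the `LinearSVC` variant, and the three-document corpus are illustrative assumptions, since the paper does not specify its exact feature pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus; the paper used 10,670 preprocessed documents
docs = ["experienced python developer seeking role",       # cv
        "we are hiring a backend engineer requirements",    # vacancy
        "the city council met on tuesday"]                  # others
labels = ["cv", "vacancy", "others"]

clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                    LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["senior java developer cv with education section"]))
```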

CV Segmentation

Fig. 2 The different titles in which a CV will be segmented

Fig. 3 Result of the profile section segmented from a CV

Fig. 4 Result of the education section segmented from a CV

The segmentation of CVs is the first major task. From a human perspective, we can split a CV into different parts on the basis of the textual information they contain: we can separate the personal information, experience, objective and education parts just by looking at them, and these are normally the basic information contained in a CV. To achieve human-like segmentation of CVs, we created a function that segments CVs on the basis of different titles and provides their respective information. To identify whether a word belongs to a title or not, we created a machine learning model using the GaussianNB classifier algorithm; a sketch follows below. When a CV is read, it checks for possible titles; if the model classifies a word as a title, the textual information that follows is allocated under it until a new title is found (Figs. 2, 3 and 4).

Segmenting CVs into different parts has vastly helped the system. With the segmented information, we are able to use a certain part of a CV for a specific task: if we want to extract the user's personal information, we can use only the "profile" segment.

Parsing of Information using NER

After successfully segmenting the CV into different parts, we had to extract the required information from them. One of the components widely used in retrieving the required information is the Stanford NER model. We created a custom Stanford NER model with the help of about 350 CVs, using the Conditional Random Field (CRF) algorithm for training.
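A toy sketch of the GaussianNB title detector; the hand-crafted features and tiny training set are assumptions for illustration, not the features the authors actually used.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

HEADINGS = {"education", "experience", "skills", "objective", "profile"}

def features(word):
    # Toy features a title detector might use: capitalisation, length,
    # and membership in a list of common CV headings.
    return [word.istitle() or word.isupper(), len(word),
            word.lower() in HEADINGS]

train_words = ["Education", "Skills", "Experience", "python", "2017", "worked"]
train_is_title = [1, 1, 1, 0, 0, 0]

model = GaussianNB().fit(
    np.array([features(w) for w in train_words], dtype=float), train_is_title)
for w in ["Objective", "django"]:
    print(w, model.predict(np.array([features(w)], dtype=float))[0])
```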


Fig. 5 Confusion matrix of the tagged NER result

Fig. 6 Classification report of the NER model while testing

Conditional Random Field (CRF) is a probabilistic graphical model widely used in Natural Language Processing (NLP) areas such as neural sequence labeling, Part-of-Speech (POS) tagging and Named Entity Recognition (NER) [13], and it has provided good results in many NER tagging tests; in the research paper "Named Entity Recognition using Conditional Random Fields" [14], NER tagging performed on the Marathi language using the CRF algorithm gave very good results. When tagging the NER components, the tags used were: PER (person), LOC (location), DATE, ORG (organization), DESIG (designation), EXP (experience), DEG (education), UNI (university) and O (unwanted tokens/text). After the NER model was created, we tested it with 40 new CVs of a similar format; the results were quite good, and the confusion matrix and classification report are provided in Figs. 5 and 6. A sketch of CRF-based NER training follows below. Besides the NER model, we also implemented CSV parsing for extracting information such as skills, nationality and languages.
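Since the Stanford NER toolkit itself is Java-based, the following Python sketch illustrates the same CRF sequence-labeling idea with the `sklearn_crfsuite` package (an assumption about tooling, not the authors' toolchain); the features, sentences and tags are toy stand-ins.

```python
import sklearn_crfsuite   # assumes the python-crfsuite-backed package is installed

def word2features(sent, i):
    w = sent[i]
    return {"lower": w.lower(), "istitle": w.istitle(),
            "isdigit": w.isdigit(), "suffix3": w[-3:]}

# Two toy tagged sentences; the paper trained on about 350 CVs
sents = [["John", "Doe", "worked", "at", "Acme"],
         ["Jane", "Roe", "studied", "at", "MIT"]]
tags = [["PER", "PER", "O", "O", "ORG"],
        ["PER", "PER", "O", "O", "UNI"]]

X = [[word2features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X[:1]))
```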


We created a set of CSV files and listed down all the information regarding them. For skills, we separated them into two parts: technical skills and soft skills. To retrieve the list of skills from a CV, we read the CSV file containing the skills and match tokens against it; the same process is used for extracting the nationality and languages from a CV. A sketch of this dictionary-based matching follows below.
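A hedged sketch of this dictionary lookup using spaCy's `PhraseMatcher`; the file name `skills.csv` and its one-skill-per-row layout are assumptions, and an inline list stands in for the file here.

```python
import csv
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")    # case-insensitive matching

# skills.csv is assumed to hold one skill per row, e.g. "python", "team work":
# with open("skills.csv") as f:
#     skills = [row[0] for row in csv.reader(f)]
skills = ["python", "machine learning", "team work"]   # inline stand-in
matcher.add("SKILL", [nlp.make_doc(s) for s in skills])

doc = nlp.make_doc("Worked on Machine Learning projects in Python.")
print([doc[start:end].text for _, start, end in matcher(doc)])
```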

3.3 Job Vacancy Information Parsing

From the job vacancy type of document, we worked on extracting information such as the organization name, job location, job designation, educational requirement, work experience requirement and the required skills (soft skills and technical skills) (Fig. 7).

Here, to extract the information, we used a custom-trained spaCy NER (Named Entity Recognition) model. spaCy's approach to NER uses deep learning and can be summarized in four steps: embed, encode, attend and predict. Initially, a feedforward neural network produces the word embeddings (embed); then a convolutional neural network (CNN) produces context-dependent embeddings for the tokens (encode); the embeddings are reduced into a single vector representation using an attention mechanism (attend); and finally, the single vectors are used to predict the class with a neural network (predict) [15]. For training, 400 job vacancy documents were used with the parameters epochs = 40, optimizer = 'sgd' and dropout = 0.35. From the spaCy NER model we could extract the 'Organization Name', 'Job Location', 'Job Designation', 'Educa-

Fig. 7 Block diagram representing job vacancy information parsing module


tion' and 'Experience' entities, whereas the technical skills and soft skills were extracted using pool parsing. A hedged sketch of such NER training is given below.
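A minimal sketch of training a custom spaCy NER model with the stated parameters (40 epochs, SGD, dropout 0.35), written against spaCy v3's training API; the single annotated example and its labels are invented for illustration, and the original work may have used an earlier spaCy version.

```python
import random
import spacy
from spacy.training import Example
from spacy.util import minibatch

# One invented annotated example; the paper used 400 job vacancy documents
TRAIN = [("Acme Corp seeks a Senior Developer in Kathmandu",
          {"entities": [(0, 9, "ORG"), (18, 34, "DESIG"), (38, 47, "LOC")]})]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN:
    for *_, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for epoch in range(40):                              # 40 epochs, as stated
    random.shuffle(TRAIN)
    for batch in minibatch(TRAIN, size=8):
        examples = [Example.from_dict(nlp.make_doc(t), a) for t, a in batch]
        nlp.update(examples, sgd=optimizer, drop=0.35)

doc = nlp("Acme Corp seeks a Senior Developer in Kathmandu")
print([(e.text, e.label_) for e in doc.ents])
```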

4 Conclusion and Future Work

Through this paper, we have demonstrated efficient and accurate structured-information extraction from textual documents. Such accurate information extraction is made possible by NLP techniques combined with Machine Learning (ML) and Deep Learning (DL) models. We developed the system with reference to textual documents frequently used in applicant tracking systems, job vacancy information and CVs, and tested it with documents of a similar type. The evaluation metrics obtained are high, and the execution time of the extraction is also good. Such highly optimized and accurate information extraction systems can be used in various other fields, such as research publications and job portals; however, the data used for training the ML and DL models should be adapted to the corresponding application. Although the present approaches solve our problem, they may have to change in the near future if the requirements change or new types of data are received. Some methods are still to be tested, such as embedding with the BERT model and document similarity instead of token similarity. In the next stage, we will also focus on extracting information from unstructured documents of other domains.

References

1. Patil, N., Patil, A., Pawar, B.: Named entity recognition using conditional random fields. Procedia Comput. Sci. 167, 1181–1188 (2020)
2. Pathak, K.: A tour of conditional random field. Towards AI (2020). [Online]. Available at: https://towardsai.net/p/machine-learning/a-tour-of-conditional-random-field-7d8476ce0201. Accessed 24 Sept 2020
3. Singh, S.: Natural Language Processing for Information Extraction (2018). arXiv. [Online]. Available at: https://arxiv.org/pdf/1807.02383.pdf. Accessed 20 Jan 2021
4. Zeroual, I., Lakhouaja, A.: Data science in light of natural language processing: an overview. Procedia Comput. Sci. 127, 82–91 (2018)
5. Sovren.com: Home (2020). [Online]. Available at: https://www.sovren.com/. Accessed 24 Sept 2020
6. Daxtra Technologies: Daxtra CV Parsing (2020). [Online]. Available at: http://cn.daxtra.com/. Accessed 24 Sept 2020
7. AkkenCloud: Top Staffing and Recruiting Software Solution (2020). [Online]. Available at: https://www.akkencloud.com/. Accessed 24 Sept 2020
8. Chandola, D., Garg, A., Maurya, A., Kushwaha, A.: Online Resume Parsing System Using Text Analytics (2015). [Online]. Available at: http://www.jmdet.com/wp-content/uploads/2015/08/CR9.pdf. Accessed 24 Sept 2020
9. Malmasi, S., Sandor, N., Hosomura, N., Goldberg, M., Skentzos, S., Turchin, A.: Canary: an NLP platform for clinicians and researchers. Appl. Clin. Inform. 08(02), 447–453 (2017)
10. Popovski, G., Kochev, S., Seljak, B., Eftimov, T.: FoodIE: a rule-based named-entity recognition method for food information extraction. In: Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (2019)
11. Sulaiman, S., Wahid, R., Sarkawi, S., Omar, N.: Using Stanford NER and Illinois NER to detect Malay named entity recognition. Int. J. Comput. Theory Eng. 9(2), 147–150 (2017)
12. Shelar, H., Kaur, G., Heda, N., Agrawal, P.: Named entity recognition approaches and their comparison for custom NER model. Science & Technology Libraries (2020)
13. Das, P., Pandey, M., Rautaray, S.: A CV Parser Model Using Entity Extraction Process and Big Data Tools (2018). [Online]. Available at: http://www.mecs-press.net/ijitcs/ijitcs-v10-n9/IJITCS-V10-N9-3.pdf. Accessed 24 Sept 2020
14. Honnibal, M.: Embed, Encode, Attend, Predict: The New Deep Learning Formula for State-of-the-Art NLP Models. Explosion (2016). [Online]. Available at: https://explosion.ai/blog/deep-learning-formula-nlp. Accessed 25 Sept 2020
15. Faliagka, E., Ramantas, K., Tsakalidis, A.: Application of Machine Learning Algorithms to an Online Recruitment System (2012)

Scoring of Resume and Job Description Using Word2vec and Matching Them Using Gale–Shapley Algorithm

Shushanta Pudasaini, Subarna Shakya, Sagar Lamichhane, Sajjan Adhikari, Aakash Tamang, and Sujan Adhikari

Abstract This paper introduces an intelligent system that assists employers in finding the right candidate for a job, and vice versa. Multiple approaches must be taken into account for parsing, analyzing and scoring documents (CVs, vacancy details). In this paper, we devise an approach for ranking such documents using the word2vec algorithm and matching them to their appropriate pair using the Gale–Shapley algorithm. When ranking a CV, different aspects are taken into consideration: skills, experience, education and location. The ranks are then used to find an appropriate match between employers and employees with the Gale–Shapley algorithm, which helps companies hire the best possible candidates. The methods experimented with for the scoring and matching are explained below in the paper.

Keywords Natural language processing · NER · word2vec · Gale–Shapley · CBOW · Word embedding · Cosine similarity


1 Introduction

To hire an employee, CVs are screened manually by HR, and certain persons are hired on the basis of their qualifications; this system has been in use for a long time. With the advancement of natural language processing, CV parsing has become an integral part of recruitment. Furthermore, if a CV can be judged in a way similar to human judgment, the system becomes far more capable, as it can process large numbers of CVs in a short period of time. This paper describes a method to quantify the standard of a CV and to match CVs with the requirements of a given company. A scoring model is proposed to assign a score to each CV, based on several criteria: the designation of the job seeker, the progress score, the location and the skills all play vital roles in scoring. These factors were selected after several experiments, and suitable conditions have been applied. Similarly, to make it easier for job providers and job seekers to find the best match, the Gale–Shapley algorithm is applied to find the best pairs between the two parties. As this algorithm was originally used for the stable marriage problem, this paper aims to use it for pairing employees with employers and vice versa.

2 Literature Review

Document ranking, or comparing the similarity between documents, is one of the major tasks in natural language processing, and different measures based on different ideas have been implemented to improve results. In [1], the author focuses on measuring the similarity between sentences; the paper shows that even though word2vec with cosine similarity gives appropriate results, sentence similarity can still be improved by combining word embeddings with hand-engineered features. Similarly, [2] shows a way of making online recruitment more efficient and effective. The paper [3] shows a way for recruiters to find appropriate candidates by their skills using the word2vec algorithm: skills were extracted from vacancy details, grouped according to vacancies with similar content, and then used to train a word2vec model. The paper [4] defines a way of hiring using a combination of word embeddings and an NER model. Jayashree et al. [5] describe a system for personality evaluation and CV analysis, where users manually fill in a form, take aptitude tests and upload their CV to a website, with the evaluation performed using machine learning algorithms. Rauch et al. [6] propose several methods, based on vectorial and probabilistic models, for profiling applicants with respect to a specific job offer; their target is a framework capable of reproducing a recruitment consultant's judgment, and they use ROC curves to evaluate a number of similarity measures for ranking candidates.


The paper [7] matches employees with offered positions based on their qualifications using the Gale–Shapley algorithm. Arcaute and Vassilvitskii [8] likewise propose a new approach to building a social network and matching jobs using the Gale–Shapley algorithm. Rodriguez and Chavez [9] describe a procedure for screening and matching CVs against profiles, skills, and other features available in the CV using clustering algorithms; their matching model was built from 2283 CVs and job requirements.

3 Methods

3.1 Scoring

To score one document against another, we propose the following approaches:

• Scoring of one vacancy's details against multiple CVs
• Scoring of one CV against multiple vacancies' details
• Scoring of CV content against vacancy details content

Before describing these approaches, we discuss the two techniques on which our results rest.

Word Embedding Using the Word2Vec Algorithm To match a CV with vacancy details, we use word embeddings: numerical representations of words as vectors. Word embeddings have proven very effective for working with textual data in natural language processing. Here, we implement the word2vec model, which uses a neural network architecture to generate word embeddings [10] and has been producing state-of-the-art embedding results. The embedding vectors obtained from a word2vec model are dense, whereas other embedding approaches, such as one-hot encoding, produce sparse vectors. Dense vectors capture the relations between different words better than sparse vectors [11], and since we want to capture such relationships, word2vec embeddings were the appropriate choice. A word2vec model can be trained with two different architectures:

• Skip-gram
• CBOW (Continuous Bag of Words)

The skip-gram model takes a given word and tries to predict its surrounding (context) words, whereas CBOW takes the context and predicts the target word (Fig. 1).


Fig. 1 Neural network architecture of CBOW and skip-gram model
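As an illustration of the two architectures, the sketch below shows how such a model might be trained with the Gensim library (which, as noted in the conclusion, our implementation uses); the toy corpus and hyperparameter values here are illustrative assumptions rather than the actual configuration used in the paper.

from gensim.models import Word2Vec

# Illustrative corpus: each document (a CV or vacancy text) is a list of tokens.
corpus = [
    ["python", "developer", "machine", "learning", "experience"],
    ["senior", "java", "engineer", "spring", "microservices"],
]

# sg=0 selects CBOW (predict the target word from its context);
# sg=1 would select skip-gram (predict context words from the target word).
model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality of the dense embedding vectors
    window=5,         # context window size
    min_count=1,      # keep rare tokens in this toy example
    sg=0,             # CBOW
)

vec = model.wv["python"]                        # dense vector for a token
print(model.wv.most_similar("python", topn=3))  # nearest neighbours in embedding space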

The word2vec model is trained on a data corpus comprising 12,000 CVs, using the CBOW architecture.

Measuring Similarity There are three broad approaches to measuring text similarity: string based, corpus based, and knowledge based. Here, we follow the string-based approach, which measures how similar or dissimilar a pair of strings is [12]. Within this approach, similarity can be measured in various ways, such as cosine similarity, Dice's coefficient, Euclidean distance, and Jaccard similarity. Among these, cosine similarity is the most frequently used measure of the distance between two vectors in a vector space:

\cos(\theta) = \frac{A \cdot B}{\|A\| \, \|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}}

If the pair of strings is close, θ is small (closer to 0) and the output value is closer to 1; if the strings are not closely related, θ is larger and the output value is closer to 0 (Fig. 2).
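As a minimal NumPy sketch of this measure (the vectors below are toy values, not real embeddings):

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nearly parallel vectors score close to 1; weakly related vectors score near 0.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.1])
c = np.array([-3.0, 0.5, 1.0])
print(cosine_similarity(a, b))  # ~1.0
print(cosine_similarity(a, c))  # ~0.08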


Fig. 2 Most similar tokens of a given word

Scoring of one vacancy details with multiple CV This scoring module is used when a score for one set of vacancy details is required against multiple CVs.

The data received is of JSON type, with two keys: 'user_profiles' and 'job.' The key 'user_profiles' holds the records of multiple CVs, including their id, personal information, experience details, and technical and soft skills. The key 'job' holds the vacancy information: its id, job title, vacancy details, and location. The score generated by the model is based on multiple entities (conditions), each of which carries a weightage assigned after appropriate testing. The higher the score, the higher the similarity between the vacancy details and the CV. The entities and their respective weightages are:

Scoring entity       Weightage
Skills score         25
Experience score     15
Designation score    40
Distance score       5
Progress score       15
Total score          100
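The paper does not spell out the exact combination rule, but a plausible minimal sketch is a weighted sum over the table above, assuming each per-entity score is first normalized to [0, 1]; the function and the normalization are our illustrative assumptions.

# Hypothetical combination of the per-entity scores using the weightages above.
# Each partial score is assumed to be normalized to the range [0, 1].
WEIGHTS = {
    "skills": 25,
    "experience": 15,
    "designation": 40,
    "distance": 5,
    "progress": 15,
}  # weightages sum to 100

def total_score(partial_scores: dict) -> float:
    """Weighted sum of normalized entity scores, on a 0-100 scale."""
    return sum(weight * partial_scores.get(name, 0.0)
               for name, weight in WEIGHTS.items())

print(total_score({"skills": 0.8, "experience": 0.5, "designation": 1.0,
                   "distance": 0.9, "progress": 0.6}))  # 81.0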


Fig. 3 Flow diagram of the designation scoring model

In the scoring function, the skill sets are used to calculate the skills score, and the locations are used to calculate the distance score: the nearer the CV's location, the higher the score. For the designation score, the system checks whether a job title is provided in the JSON; the designation parsed from the job content by the custom-trained NER tagger is added to it to form a designation list. If any value in the user's designations matches the job designation list, the score is generated (Fig. 3).

Scoring on the basis of the progress score works quite differently. The 'designations' and 'designation_dates' are first plotted, and the slope of the resulting graph is calculated. Working on this method, we found it necessary to categorize each designation according to its hierarchy; since no predefined way to do this existed, we created a rule-based file-parsing method that takes a file containing every possible designation and categorizes each one as junior employee, intermediate employee, senior employee, or manager. This normalizes the designations so that the relevant slope can be calculated. The slope indicates the stability of the candidate across designations.
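A minimal sketch of the progress-score slope is given below; the four-level mapping follows the categories named above, while the numeric level values and the least-squares fit are illustrative assumptions rather than the paper's exact rule.

import numpy as np

# Hypothetical numeric levels for the normalized designation categories.
LEVELS = {"junior employee": 1, "intermediate employee": 2,
          "senior employee": 3, "manager": 4}

def progress_slope(designations, years):
    """Slope of designation level over time; a positive slope indicates growth."""
    levels = [LEVELS[d] for d in designations]
    slope, _intercept = np.polyfit(years, levels, 1)  # first-degree fit
    return slope

# A candidate promoted from junior to senior over six years.
print(progress_slope(
    ["junior employee", "intermediate employee", "senior employee"],
    [2015, 2018, 2021]))  # ~0.33 levels per year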


Scoring of one CV with vacancy details content One major difference between this scoring approach and the previous ones is that here we do not parse the documents. The CV content and vacancy details content are preprocessed with steps such as text cleaning, stopword removal, and lemmatization. The filtered tokens from the CV and the vacancy details are then passed to the custom-trained word2vec model, word embeddings are evaluated for every token, and the mean embedding representing the whole textual content is calculated. Once the mean embeddings of both the CV and the vacancy details are obtained, cosine similarity is applied to calculate the similarity score.
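A self-contained sketch of this comparison is shown below; the toy token lists and the tiny inline model are illustrative stand-ins for the preprocessed documents and the custom-trained model.

import numpy as np
from gensim.models import Word2Vec

def mean_embedding(tokens, model):
    """Average the word2vec vectors of all in-vocabulary tokens."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the cleaned, lemmatized token lists described above.
cv_tokens = ["python", "developer", "machine", "learning"]
vacancy_tokens = ["python", "machine", "learning", "engineer", "wanted"]

# Tiny illustrative model; the actual model is trained on the CV corpus.
model = Word2Vec([cv_tokens, vacancy_tokens], vector_size=50,
                 window=3, min_count=1, sg=0)

score = cosine_similarity(mean_embedding(cv_tokens, model),
                          mean_embedding(vacancy_tokens, model))
print(round(score, 3))  # similarity between CV content and vacancy content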

3.2 Matching Using Gale–Shapley Algorithm

After generating scores for the required documents, we use those scores to recommend appropriate CVs for any given job description and vice-versa, using the well-known pairing algorithm devised by David Gale and Lloyd Shapley in 1962. This algorithm can solve a wide range of pairing problems, such as students choosing the best universities on online educational platforms, matching partners on dating sites, or connecting users to an Internet service in the smallest amount of time. In all of these cases, we must create a stable set of pairs satisfying given criteria; theoretically, a pair (A, B) is stable when neither A nor B has a better option than the other. We applied the pseudocode proposed for the stable marriage problem to the pairing problem in our system. As in the proven stable-marriage example, we have two sets to pair, CVs and job descriptions, and each element must find the best candidate in the other set, i.e., the best candidate for a given company or the best company for a given candidate. These two data sets are supplied to our system as a JSON file, with one key representing each section. The first key holds, for each job description, its score against each CV; this score is provided by the scoring endpoint that scores one job description against multiple CVs (Fig. 4). Similarly, the other key holds, for each CV, its score against each job description; this score is provided manually by the user for the respective job description (Fig. 5). Values in both sets are ordered in descending order, i.e., the most preferred CV or job description comes first. This data, however, admits more than one stable arrangement, and this is where the algorithm comes in: it yields a single arrangement for each CV and job description (Fig. 6).
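The following compact sketch illustrates the deferred-acceptance procedure described above; the ids and preference lists are hypothetical stand-ins for the score-ordered JSON data of Figs. 4 and 5.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Stable matching: proposers (e.g., job descriptions) propose in order of
    preference; reviewers (e.g., CVs) tentatively accept and trade up whenever
    a proposal they prefer arrives."""
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next reviewer to propose to
    engaged = {}                                  # reviewer -> current proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:    # r prefers the new proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return {p: r for r, p in engaged.items()}

# Hypothetical preference lists derived from the score orderings.
jd_prefs = {"jd1": ["cv1", "cv2"], "jd2": ["cv1", "cv2"]}
cv_prefs = {"cv1": ["jd2", "jd1"], "cv2": ["jd1", "jd2"]}
print(gale_shapley(jd_prefs, cv_prefs))  # {'jd1': 'cv2', 'jd2': 'cv1'}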

Fig. 4 json representation of one jd and multiple CV


Fig. 5 json representation of one CV and multiple jd

Fig. 6 Stable set of arrangement of CV and job-description

4 Conclusion and Future Works

In this paper, the word2vec algorithm with the CBOW model is implemented for creating the required embedding vectors, and cosine similarity is used for measuring the similarity score between a CV and vacancy details. There are various other approaches to generating word embeddings, such as the skip-gram and TF-IDF models. However, the custom word2vec model trained on our own data produced highly relevant results. Google's pretrained word2vec model was also tried for generating the embeddings, but more accurate results were obtained from the custom-trained model. The word2vec model was created using the Gensim library and trained on 1220 documents consisting of CVs and vacancy details. The results obtained on the test data for scoring were satisfactory. When measuring the similarity of tokens, sentences, or documents, cosine similarity gave better results than other methods, owing to the contextual awareness of the custom-trained word2vec model. In future work, the word2vec model could be replaced by more powerful embedding models such as ELMo or BERT, which may give better results than the current approach, and the data used for creating the word embeddings could be increased.

References

1. Misra, A., Ecker, B., Walker, M.: Measuring the similarity of sentential arguments in dialog (2017)
2. Faliagka, E., Ramantas, K., Tsakalidis, A.: Application of machine learning algorithms to an online recruitment system (2012)


3. Van-Duyet, L., Minh-Quan, V., Quang-An, D.: Skill2vec: machine learning approaches for determining the relevant skill from job description (2017)
4. Suhas, H.E., Manjunath, A.E.: Differential hiring using a combination of NER and word embedding (2020)
5. Jayashree, R., Sudhir, B., Pooja, Y., Nirmiti, P.: Personality evaluation and CV analysis using machine learning algorithm. Int. J. Comput. Sci. Eng. 7, 1852–1857 (2019). https://doi.org/10.26438/ijcse/v7i5.18521857
6. Rauch, J., Raś, Z.W., Berka, P., Elomaa, T. (eds.): Job offer management: how improve the ranking of candidates. In: Foundations of Intelligent Systems, Lecture Notes in Computer Science, vol. 5722 (2009)
7. Elviwani, E., Siahaan, A.P.U., Fitriana, L.: Performance-based stable matching using Gale-Shapley algorithm (2018). https://doi.org/10.4108/eai.23-4-2018.2277597
8. Arcaute, E., Vassilvitskii, S.: Social networks and stable matchings in the job market (2009). https://doi.org/10.1007/978-3-642-10841-9_21
9. Rodriguez, L.G., Chavez, E.P.: Feature selection for job matching application using profile matching model. In: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS) (2019)
10. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space (2013)
11. Chiu, B., Crichton, G., Korhonen, A., Pyysalo, S.: How to train good word embeddings for biomedical NLP, pp. 166–174 (2016)
12. Gomaa, H., Fahmy, A.: A survey of text similarity approaches. Int. J. Comput. Appl. (2013)

Author Index

A Abarna, K., 361 Abed, Mohammed Hamzah, 65 Abiko, Aschalew Tirulo, 227 Adhikari, Sajjan, 695, 705 Adhikari, Sujan, 695, 705 Akarsha, D. P., 631 Alagarsamy, Saravanan, 21 Alam, Mansaf, 577 Al-Asfoor, Muntasir, 65, 139 Anand, R., 375 Andika, Galang, 393 Anwar, Nizirwan, 393 Ashwini, S. D., 631 Avinash, N. J., 511 Avvari, Venkata Sai Yashwanth, 97

B Balachandra, Mamatha, 297 Balyan, Vipin, 619 Bandhu, Kailash Chandra, 87 Bano, Shahana, 561 Bansal, Sanchita, 57 Barani Sundaram, B., 227 Barik, Lalbihari, 1 Barik, Rabindra Kumar, 1 Bhansali, Ashok, 87 Bhat, Sowmya, 511 Bhawana, 271 Bhuvaneswary, N., 153, 169 Bobba, Praveen Kumar, 21 Briskilal, J., 113

C Chaithanya, J., 153 Chidanandan, V., 647

D Dasari, Hema Teja Anirudh Babu, 97 Devarapalli, Sanath Reddy, 21 Dwarakanatha, G. V., 375

E Erick, 403

G Gali, Sowmya, 477, 493 Ganesh Kumar, S., 183 Gedela, Vamsy Vivek, 75 Geetha, M., 311 Genale, Adola Haile, 227 George, Jossy, 445 Ghali, Sriram, 97 Goyal, Pratul, 271 Gupta, Gunjan, 523

H Hebbar, Harishchandra, 297 Hima Bindu, K., 153 Holidin, Ahmad, 393

I Imran, Iqra Maryam, 665


J Jayakumar, M., 215 Jayalakshmi, S., 321 Jayapriya, V., 169 Jayashree, C. S., 375 Jha, Shweta, 199 Jha, Srirang K., 49, 57 Jolad, Bhuvaneshwari, 35 Jonnadula, Pradeep, 21 Jotwani, Varsha, 283 Jowda, Fouad, 139 Juneja, Pradeep Kumar, 271

K Kamalakkannan, S., 125 Kanaparthi, Suresh Kumar, 601 Karthika, P., 227 Kavitha, V., 247 Kavya, M. K., 11 Keerthana, B., 297 Khanai, Rajashri, 35 Kiran, Damarla Kanthi, 413 Kiran, P., 11 Kishore Kumar, K., 349 Kumari, Adesh, 577 Kumari, Shalini, 1 Kumar, Preetham, 311 Kumar, S. Sanjay, 49 Kumutha, K., 321

L Lakshmanan, S. A., 545 Lamichhane, Sagar, 695, 705 Laxmi, N., 413 Ligwa, Mario, 619

M Malathi, V., 247 Manasa, Guttikonda Tulasi, 561 Mardika, Ersa Andhini, 403 Meenakshi, M., 261 Megalingam, Rajesh Kannan, 75, 97 Metilda Florence, Stanley, 237 Mittal, Amit, 271 Mohankumar, N., 215 Mohideen AbdulKader, M., 183 Mohith, N., 11 Monisha, M., 361 Moorthy, H. Rama, 511 Mounika, V., 169 Mua’azi, M. Naufal, 403

Muniyal, Balachandra, 297 N Nagashri, K., 665 Nair, Akhil M., 445 Najeeb Ahmed, G., 125 Nalajala, Sunanda, 413 Naresh, R., 261 Nasreen Banu, Mohamed Ishaque, 237 Negi, Pushpa Bhakuni, 271 Niranjan, L., 647 Nirmala Devi, M., 215 Niveda, A., 361 Nugraha, Muhammad Lutfan, 585 P Pai, Manisha, 631 Panda, Anmol, 1 Pandey, Amit, 227 Patil, Aseem, 335 Patkar, Hrishikesh R., 511 Patra, Sudhansu Shekhar, 1 Pinto, Renita, 511 Pranitha, Bhimireddy, 413 Pratama, Adrian Randy, 403 Pravallika, S., 169 Pudasaini, Shushanta, 695, 705 R Rachana, Tummeti, 413 Rai, Shwetha, 311 Rajarajeswari, S., 631, 665 Raju, U. S. N., 601 Ramya Shree, A. N., 11 Rastogi, Umang, 227 Rathi, Kavita, 531 Reddy, Guntaka Greeshmanth, 561 S Sangeetha, N., 647 Savitha, H. M., 681 Senthil, T., 207 Shah, Hecate, 375 Shakya, Subarna, 695, 705 Shama, B. N., 681 Shetty, Nanda Devi, 665 Shrivastava, Keerti, 283 Shwetha, N., 647 Silahudin, Didin, 431 Singh, Parvinder, 531 Sreedevi, A. G., 545

Sreenidhi, P., 511 Sri, Kurra Hima, 561 Subalalitha, C. N., 113 Subbarayudu, Yerragudipadu, 459 Subramanian, R. Raja, 21 Sudarsanam, P., 375 Sudheesh, Sankardas Kariparambil, 75 Sunori, Sandeep Kumar, 271 T Tamang, Aakash, 695, 705 Thanvi, Gopalam Nagasri, 413 Thirukkuralkani, K. N., 361 Trinadh, Vempati Biswas, 561 V Vasundhara, M., 153 Venkatabhanu, M., 153 Venkateshkumar, M., 545

Venkateswerareddy, H., 349 Venkatram, N., 477, 493 Vijay Ganesh, P. C., 207 Vijaykumar, Janga, 227

W Warnars, Diana Teresia Spits, 403, 585 Warnars, Harco Leslie Hendric Spits, 393, 403, 431, 585 Warnars, Leonel Leslie Heny Spits, 431

Y Yahya Abbasi, M., 577 Yathish, S., 445 Yogesh kumar, K. R., 545

Z Zyl Van, Robert, 523