Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences: PCCDS 2020 9811575320, 9789811575327

This book presents the best selected papers presented at the International Conference on Paradigms of Computing, Communication and Data Sciences (PCCDS 2020).


English · Pages: 1023 [972] · Year: 2021


Table of contents:
Committees
Patron
Advisory Committee
Organizing Committee
Organizing Chairs
Organizing Secretaries
Publicity Chairs
Foreword
Preface
Acknowledgements
Contents
About the Editors
Part I Computing
1 Real-Time Implementation of Enhanced Energy-Based Detection Technique
1 Introduction
2 System Architecture
3 Hardware Implementation
4 Experimental Results
4.1 Results of Simulation of Proposed Algorithm
4.2 Results of Real-Time Implementation of Proposed Algorithm
5 Conclusion
References
2 Pipeline Burst Detection and Its Localization Using Pressure Transient Analysis
1 Introduction
2 Proposed Algorithm
2.1 Wavelet De-Noising
2.2 Wavelet Analysis for Burst Detection
2.3 CUSUM Algorithm
2.4 Burst Localization
2.5 Node Matrix Analysis
3 Results
4 Conclusion
References
3 Improving Machine Translation Using Parts-Of-Speech Tags and Dependency Parsing
1 Introduction
2 Proposed Machine Translation System
2.1 Parts-Of-Speech Tags
2.2 Dependency Parsed Tags
2.3 Mechanism of Machine Translation System Proposed
3 Experiment
3.1 Tagger System
3.2 Dataset
3.3 Training
3.4 Evaluation
4 Results
4.1 Advantages
4.2 Limitations
5 Conclusion and Future Work
References
4 Optimal Strategy for Obtaining Excellent Energy Storage Density in Polymer Nanocomposite Materials
1 Introduction
1.1 Ceramics
1.2 Polymers
2 Factors Affecting the Value of k, Eb of the Nano Composite
2.1 Size and Shape of Nano Fillers
2.2 Loading of Nano Fillers
2.3 Dispersion of Nano Fillers
2.4 Interfacial Relationship Between Nanoparticles and Polymer
3 Core Shell Structure
4 Multilayer Structure
5 Conclusion
References
5 A Study of Aging-Related Bugs Prediction in Software System
1 Introduction
2 Related Work
3 Elements of Experimental Methodology
3.1 Instance Filtering Technique (SMOTE)
3.2 Standardization
3.3 Used Classification Algorithms
3.4 Experimental Datasets
3.5 Performance Evaluation Measures
4 Results and Analysis
5 Conclusions
References
6 Mutual Authentication of IoT Devices Using Kronecker Product on Secure Vault
1 Introduction
2 Related Work
3 Security Issues and Challenges
4 Proposed Methodology
4.1 Assumption
4.2 Kronecker Product
4.3 Secure Vault
4.4 Pre-processing the Matrix
4.5 Deployment Phase
4.6 Key Calculation
4.7 Authentication Mechanism
4.8 Changing the Secure Vault
5 Performance Analysis
5.1 Storage Cost
5.2 Communication Cost
5.3 Computation Cost
6 Security Analysis
6.1 Construction of Kronecker Matrix
6.2 Man-in-The-Middle Attack
6.3 Next Password Prediction
6.4 Side-Channel Attack
7 Comparison of Authentication Mechanism
7.1 Storage of Secure Vault
7.2 Computation Cost
8 Conclusion and Future Work
References
7 Secure and Decentralized Crowdfunding Mechanism Based on Blockchain Technology
1 Introduction
2 Related Works
3 Preliminaries
3.1 Blockchain
3.2 Smart Contract
3.3 Ethereum
3.4 Asymmetric-Key Cryptography
3.5 Hash Function
3.6 Merkle Tree
3.7 Consensus Algorithms
4 The Proposed Crowdfunding Mechanism Based on Blockchain Technology
5 Discussion
6 Conclusion and Future Work
References
8 Efficient Use of Randomisation Algorithms for Probability Prediction in Baccarat Using: Monte Carlo and Las Vegas Method
1 Introduction
1.1 Methods to Generate Randomness
1.2 Classification of Randomisation Algorithms
1.3 Pthread Library
1.4 Baccarat
2 Related Works
3 Proposed Approach
3.1 Using Monte Carlo Method
3.2 Using Monte Carlo Method with Data Structures
3.3 Using Las Vegas Method
3.4 Using Las Vegas Method with Multithreading
3.5 Simulation of Baccarat
4 Result Analysis
5 Comparison of Algorithms
6 Conclusion and Future Scope
References
9 Comparative Analysis of Educational Job Performance Parameters for Organizational Success: A Review
1 Introduction
2 Educational Job Performance Data Mining Phases
3 Related Studies
4 Comparative Study of Educational Data Mining Tools, Techniques and Parameters
5 Analysis
6 Conclusion
References
10 Digital Anthropometry for Health Screening from an Image Using FETTLE App
1 Introduction
2 Related Work
3 System Overview
3.1 Adulthood Space
3.2 Baby Care Space
3.3 Immobilized Patients Anthropometry
3.4 Health Trends
4 High Level Implementation Architecture
5 Methodology
6 Experimental Results
7 Performance Analysis
8 Conclusion
References
11 Analysis and Redesign of Digital Circuits to Support Green Computing Through Approximation
1 Introduction
2 Previous Works and Rationale of Our Proposal
3 Proposed Method
4 Experimental Evaluation
5 Conclusion
References
12 Similarity-Based Data-Fusion Schemes for Missing Data Imputation in Univariate Time Series Data
1 Introduction
2 Related Work
3 Proposed Model
4 Results and Discussions
5 Validation of Imputation Technique
6 Conclusion and Future Work
References
13 Computational Study on Electronic Properties of Pd and Ni Doped Graphene
1 Introduction
2 Computational Methods
3 Results and Discussion
3.1 Geometry Optimization and Binding Energies
3.2 Electronic Properties
4 Conclusions
References
14 Design of an Automatic Reader for the Visually Impaired Using Raspberry Pi
1 Introduction
2 Related Works
3 System Design
3.1 Working Principle
3.2 Raspberry Pi 4 Model B
3.3 Used Components in Reader
3.4 Proposed Hardware Model
3.5 GPIO Interfacing
3.6 Proposed Approach
4 Result and Discussion
4.1 Getting Input Images and Output Text
4.2 Extracted Images and Generated Output
5 Conclusion and Future Work
References
15 Power Maximization Under Partial Shading Conditions Using Advanced Sudoku Configuration
1 Introduction
2 PV System Modeling
3 TCT Connection
4 Advanced Sudoku
5 Performance Evaluation
6 Conclusion
References
16 Investigations on Performance Indices Based Controller Design for AVR System Using HHO Algorithm
1 Introduction
2 Automatic Voltage Regulator System Model
2.1 Linearized Model of AVR System [7]
3 HHO Algorithm [9]
3.1 Exploration Phase
3.2 Transition from Exploration to Exploitation
3.3 Exploitation Phase
4 HHO PID Controller
5 Results
6 Conclusion
References
17 Ethereum 2.0 Blockchain in Healthcare and Healthcare Based Internet-of-Things Devices
1 Blockchain in Healthcare
1.1 Literature Survey: Blockchain in Healthcare and Internet-of-Things
2 Proposed System Design for POS-Blockchain Based Healthcare System with IoT
3 Conclusion and Future Work
References
18 IoT-Based Solution to Frequent Tripping of Main Blower of Blast Furnace Through Vibration Analysis
1 Introduction
2 Problem Description
3 Extrapolative Study of the Problem
4 Root Cause of the Problem
5 Corrective Actions
6 Results
7 Conclusion
References
19 A Survey on Hybrid Models Used for Hydrological Time-Series Forecasting
1 Introduction
2 Hybrid Models for Forecasting Time Series Data: A Review
2.1 Review of Parallel Hybrid Structure in Time Series Forecasting
2.2 Review of Series Hybrid Structure in Time Series Forecasting
2.3 Review of Parallel-Series Hybrid Structure in Time Series Forecasting
3 Review of Latest Hybrid Models Used for Hydrological Time Series Forecasting
4 Future Work
5 Conclusion
References
20 Does Single-Session, High-Frequency Binaural Beats Effect Executive Functioning in Healthy Adults? An ERP Study
1 Introduction
2 Methodology
2.1 Procedure
2.2 EEG Data Acquisition
2.3 EEG Data Analysis
3 Results
3.1 Behavioral Measures
3.2 ERP Measures
3.3 Correlation Analysis
4 Discussion
5 Conclusion
References
21 Optimized Data Hiding for the Image Steganography Using HVS Characteristics
1 Introduction
2 Related Work
2.1 Optimized Data Hiding Technique
2.2 HVS Characteristics
3 Proposed Technique
4 Experimental Results
4.1 Visual Perceptibility Analysis
4.2 Visual Quality Analysis Parameters
4.3 Comparative Analysis with the Existing Techniques
5 Conclusion and Future Work
References
22 Impact of Imperfect CSI on the Performance of Inhomogeneous Underwater VLC System
1 Introduction
2 System Model
3 Outage Probability
4 ASEP Analysis
4.1 NI × NM-RQAM
4.2 M-ary PAM
5 Numerical and Simulation Results
6 Conclusion
References
23 Pre-configured (p)-Cycle Protection for Non-hamiltonian Networks
1 Introduction
2 Basics of p-Cycle
3 Conventional Approach
3.1 Problems with the Conventional Approaches
4 Our Work
5 Results
6 Conclusion
References
24 A Novel Approach to Multi-authority Attribute-Based Encryption Using Quadratic Residues with Tree Access Policy
1 Introduction
2 Related Work
3 Motivation
4 Proposed Multi-authority Attribute-Based Encryption Scheme
4.1 Quadratic Residues
4.2 Tree-Based Access Policy
5 The Basic Steps of the Proposed Scheme
6 An Illustration of the Proposed Scheme to Cloud-Based Environment
7 Analysis and Limitations
8 Conclusion
References
25 An Improvement in Dense Field Copy-Move Image Forgery Detection
1 Introduction
1.1 Types of Image Forgeries
1.2 Copy-Move Image Forgery Detection
2 Related Work
3 Methodology
3.1 Preprocessing
3.2 Feature Extraction
3.3 Feature Matching
3.4 Post-processing
4 Experimental Setup, Results, and Discussions
5 Conclusion
References
26 Scheduling-Based Energy-Efficient Water Quality Monitoring System for Aquaculture
1 Introduction
2 Related Work
2.1 Power-Saving Methods
3 System Overview and Working
3.1 Hardware
4 Energy Saving Strategy
5 Performance Analysis
6 Conclusion
References
27 A Study of Code Clone Detection Techniques in Software Systems
1 Introduction
2 Clone Terminologies
2.1 Clone Relation Terminologies
2.2 Types of Clones
3 Literature Survey
4 The Rationale for Code Duplication
4.1 Advantages and Disadvantages of Clones
5 Clone Detection Process and Techniques
5.1 Clone Detection Process
5.2 Clone Detection Techniques
5.3 Discussion
6 Code Clone Evolution
7 Conclusions and Future Scope
References
28 An Improved Approach to Secure Digital Audio Using Hybrid Decomposition Technique
1 Introduction
1.1 Related Work
2 Proposed Methodology
2.1 Discrete Wavelet Transform (DWT)
2.2 Discrete Cosine Transform (DCT)
2.3 Singular Value Decomposition (SVD)
2.4 Concept of Cyclic Codes
2.5 Arnold’s Cat Map Algorithm
3 Algorithm Design
3.1 Watermark Embedding Algorithm
3.2 Algorithm for Extraction
4 Experimental Results
5 Conclusion
References
29 Study on the Negative Transconductance in a GaN/AlGaN-Based HEMT
1 Introduction
2 Simulation Strategy
3 Results and Discussion
4 Conclusion
References
30 Hybrid Anti-phishing Approach for Detecting Phishing Webpage Hosted on Hijacked Server and Zero-Day Phishing Webpage
1 Introduction
1.1 Phishing Attack Process
1.2 Why Phishing Attack Works
2 Related Work
2.1 Blacklist Based Methods
2.2 Heuristic Based Methods
2.3 Machine Learning Methods
2.4 Hybrid Methods
2.5 Search Engine-Based Methods
3 Proposed Solution
3.1 Design Objective
3.2 System Architecture
3.3 Algorithm for Our Proposed Solution Is as Follows
4 Experiment Results
4.1 Implementation Detail
4.2 Classification of Webpages
4.3 Comparison with Existing Methods
5 Discussion
6 Conclusion
References
31 FFT-Based Zero-Bit Watermarking for Facial Recognition and Its Security
1 Introduction
2 Preliminaries
2.1 Method of Fast Fourier Transformation (FFT)
2.2 Singular Value Decomposition (SVD)
3 Proposed Scheme
3.1 Algorithm 1: Embedding Process
3.2 Algorithm 2: Extraction Process
4 Experimental Results
4.1 Correlation Coefficient
4.2 Normalized Correlation (NC)
4.3 Bit Error Rate (BER)
5 Conclusion
References
32 Comparative Analysis of Various Simulation Tools Used in a Cloud Environment for Task-Resource Mapping
1 Introduction
2 Related Work
2.1 Comparative Guidelines for Cloud Simulators
3 Cloud Environment
3.1 Cloud Simulators
4 Comparative Analysis of Various Variants of CloudSim on Different Parameters
4.1 Comparative Discussion
5 Conclusion and Future Scope
References
Part II Communication
33 Study of Spectral-Efficient 400 Gbps FSO Transmission Link Derived from Hybrid PDM-16-QAM With CO-OFDM
1 Introduction
2 Simulation Setup
3 Numerical Results
4 Conclusion
References
34 4 × 10 Gbps Hybrid WDM-MDM FSO Transmission Link
1 Introduction
2 Simulation Setup
3 Results and Discussion
4 Conclusion
References
35 Task Scheduling in Cloud Computing Using Hybrid Meta-Heuristic: A Review
1 Introduction
2 Task Scheduling in Cloud
3 Optimization Techniques
3.1 Genetic Algorithm (GA)
3.2 Harmony Search Algorithm (HS)
3.3 Tabu Search (TS)
3.4 Particle Swarm Optimization (PSO)
3.5 Cuckoo Optimization Algorithm (COA)
3.6 Artificial Bee Colony (ABC)
3.7 Ant Colony Optimization (ACO)
3.8 Simulated Annealing (SA)
3.9 Bacteria Foraging Optimization Algorithm (BFO)
3.10 Gravitational Search Algorithm (GSA)
3.11 Lion Optimization Algorithm (LOA)
3.12 The Harmony Tabu Search (THTS)
3.13 Cuckoo Harmony Search Algorithm (CHSA)
3.14 Harmony-Inspired Genetic Algorithm (HIGA)
3.15 Genetic Algorithm-Particle Swarm Optimization (GA-PSO)
3.16 Multi-objective Hybrid Bacteria Foraging Algorithm (MHBFA)
3.17 Simulated Annealing Based Symbiotic Organisms Search (SASOS)
3.18 The Technique for Order of Preference by Similarity to Ideal Solution-Particle Swarm Optimization (TOPSIS-PSO)
3.19 Artificial Bee Colony Simulated Annealing (ABC-SA)
3.20 Genetic Algorithm Artificial Bee Colony (GA-ABC)
3.21 Cuckoo Gravitational Search Algorithm (CGSA)
3.22 Oppositional Lion Optimization Algorithm (OLOA)
3.23 Fuzzy System—Modified Particle Swarm Optimization (FMPSO)
4 Literature Review
5 Comparison of Performance Metrics
6 Conclusion
References
36 Modulation Techniques for Next-Generation Wireless Communication-5G
1 Introduction
2 5G Waveform Candidates
3 FBMC
4 UFMC
5 Conclusion
References
37 Muscle Artifact Detection in EEG Signal Using DTW Based Thresholding
1 Introduction
2 Methodology
2.1 Dynamic Time Warping (DTW)
2.2 Performance Parameters
3 Results
4 Conclusion
References
38 Human Activity Recognition in Ambient Sensing Using Sequential Networks
1 Introduction
2 Related Works
3 Methodology
3.1 Mini-Batch LSTM Approach
3.2 Deep LSTM Approach
3.3 Dataset
4 Results
5 Conclusion and Future Work
References
39 Towards the Investigation of TCP Congestion Control Protocol Effects in Smart Home Environment
1 Introduction
2 Related Work and Problem Formulation
3 Proposed Approach
4 Experimental Analysis
4.1 Simulation Tools
5 Conclusion and Future Works
References
40 Efficient Information Flow Based on Graphical Network Characteristics
1 Introduction
2 Related Work
3 Network Characteristics-Based Measures and Solutions
3.1 Graphical Modeling
3.2 Mapping Real-World Scenarios to Graph
3.3 Graphical Analysis Using Stanford Network Analysis Platform (SNAP)
4 Proposing Algorithmic Approaches for Information Flow in Network
4.1 Scenario 1: Rapid Information Propagation Through High Influential Nodes
4.2 Scenario 2: Controlling Information Propagation Through Nodes of Low Influence or Having Less Out-Degree
5 Conclusions and Future Scope
References
41 Tunable Optical Delay for OTDM
1 Introduction
2 All Pass Filter Delay Line
3 Literature Survey
4 APF Tunability
5 Three Stage All Pass Filter
6 OTDM System
7 Simulation Results
8 Conclusion
References
42 Game Theory Based Cluster Formation Protocol for Localized Sensor Nodes in Wireless Sensor Network (GCPL)
1 Introduction
2 Related Work
2.1 Localization of Sensor Node
2.2 Clustering Technique
3 Proposed Work
3.1 Definitions
3.2 Localization of Sensor Nodes Without GPS
3.3 Cluster Formation and Cluster Head Selection
4 Simulation Results
5 Conclusions
References
43 SG_BIoT: Integration of Blockchain in IoT Assisted Smart Grid for P2P Energy Trading
1 Introduction
2 Related Work
3 Proposed Model
3.1 Basic Components
3.2 Overview
3.3 Architecture
3.4 Working Flow
3.5 Algorithm Explanation
4 Experimental Setup
5 Conclusion
References
44 Software Defined Network: A Clustering Approach Using Delay and Flow to the Controller Placement Problem
1 Introduction
2 Related Work
3 Problem Statement
4 Solution Approach and Algorithm
5 Results
6 Conclusion and Future Work
References
45 Netra: An RFID-Based Android Application for Visually Impaired
1 Introduction
1.1 System Architecture
2 Results
46 Efficient Routing for Low Power Lossy Networks with Multiple Concurrent RPL Instances
1 Introduction
2 Related Work
3 MEHOF
4 Results and Discussion
4.1 Simulation
4.2 Observation
5 Conclusion and Future Work
References
47 Deep Learning-Based Wireless Module Identification (WMI) Methods for Cognitive Wireless Communication Network
1 Introduction
2 System Model
2.1 Real-time Signal Acquisition
3 Deep Learning Models
3.1 CNN Architecture Model
3.2 LSTM Architecture Model
3.3 CLDNN Architecture Model
3.4 Implementation Detail
4 Results and Discussion
4.1 Performance Matrices
5 Conclusion and Future Work
References
48 Style Transfer for Videos with Audio
1 Introduction
1.1 What is Style Transfer?
1.2 Potential Applications of Style Transfer
1.3 Organization of This Paper
2 Related Work
2.1 Early Developments
2.2 Method for Image Style Transfer
3 Elements of Experimental Methodology
3.1 Dataset
3.2 Processing Technique
3.3 Method for Audio Style Transfer
4 Results and Analysis
4.1 Implementation Details
4.2 Comparison with Some Previous Methods
5 Conclusion
References
49 Development of Antennas Subsystem for Indian Airborne Cruise Missile
1 Introduction
2 Radio Altimeter
2.1 Radio Altimeter Results
3 Radio Telemetry
3.1 Radio Telemetry Results
4 IRNSS Antenna Design (L5 Band)
4.1 IRNSS L5 Band Results
5 IRNSS Antenna Design (S Band)
5.1 IRNSS S Band Results
6 Conclusion
References
50 A Literature Survey on LEACH Protocol and Its Descendants for Homogeneous and Heterogeneous Wireless Sensor Networks
1 Introduction
2 LEACH Protocol
3 Related Work
4 Descendants of LEACH Routing Protocol: Overview
4.1 Performance Comparison Between LEACH Protocol and Its Descendants
5 Conclusion
References
51 Performance Study of Ultra Wide Band Radar Based Respiration Rate Measurement Methods
1 Introduction
1.1 Existing Methodologies
1.2 Key Contributions of This Paper
1.3 Organization of the Paper
2 Experimental Set-Up and Signal Database Creation
2.1 Experimental Set-Up
2.2 Validation Signal Database Creation
2.3 Characteristics of UWB Radar Signals
3 Respiration Rate Measurement Methods
3.1 Respiration-Related Bin Selection
3.2 Variational Mode Decomposition
4 Results and Discussion
4.1 Performance for Different Block Duration
4.2 Subject-Wise RR Estimation Performance
4.3 Performance Comparison
5 Conclusion
References
52 Secure Architecture for 5G Network Enabled Internet of Things (IoT)
1 Introduction
2 Overview of Fourth Generation (4G) and Fifth Generation (5G) Enabled IoT Applications
3 Literature Review
4 Comparison of Architectures with Proposed Architecture
5 Proposed Architecture for Next Generation
5.1 Security Services of 5G—IoT
5.2 Quality of Services (QoS) of 5G—IoT
6 Conclusion and Future Work
References
Part III Data Sciences
53 Robust Image Watermarking Using DWT and Artificial Neural Network Techniques
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Pre-processing
3.2 DWT Feature
3.3 Watermark Binary Conversion
3.4 Inverse S-Order
3.5 Embedding of Watermark
3.6 Training of EBPNN Neural Network
3.7 Embedded Image
4 Experiment and Result
4.1 Dataset
4.2 Evaluation Parameter
4.3 Results
5 Conclusions
References
54 Fraud Detection in Anti-money Laundering System Using Machine Learning Techniques
1 Introduction
2 AML System Architecture
3 Machine Learning Techniques
3.1 Support Vector Machine (SVM)
3.2 Logistic Regression
3.3 Average Perceptron
3.4 Neural Network
3.5 Decision Tree
3.6 Random Forest
4 Experimental Work
4.1 Dataset
4.2 Experimental Setup
5 Result Analysis
6 Conclusion
References
55 A Smart Approach to Detect Helmet in Surveillance by Amalgamation of IoT and Machine Learning Principles to Seize a Traffic Offender
1 Introduction
2 Literature Review
3 Methodology
3.1 System Overview
3.2 System Architecture
3.3 System Details
4 Archetype Pre and Post Validation
4.1 System Integration
4.2 Product Validation
5 Conclusion and Future Work
References
56 Botnet Detection Using Machine Learning Algorithms
1 Introduction
2 Related Work
3 Botnet Architecture
3.1 Centralized Network
3.2 Peer-to-Peer (P2P) Network
3.3 Hybrid Network
4 Dataset
4.1 Feature Selection
5 Experimental Results
5.1 Performance Evaluation and Results
5.2 ROC Curve
5.3 CAP Curve
6 Conclusion and Future Works
References
57 Estimation of Daily Average Global Solar Radiance Using Ensemble Models: A Case Study of Bhopal, Madhya Pradesh Meteorological Dataset
1 Introduction
1.1 Background
2 Literature Review
3 Proposed Model
3.1 Ensemble Models
3.2 Data Preprocessing and Attribute Correlation
3.3 Random Forest Regressor
3.4 AdaBoost Regressor
4 Results and Discussions
4.1 Graphical Analysis
5 Conclusions
References
58 Text Localization in Scene Images Using Faster R-CNN with Double Region Proposal Networks
1 Introduction
2 Literature Survey
3 Proposed Method
3.1 Double RPN
3.2 Merging of RPNs
3.3 Training Scheme
4 Experimental Result Analysis
4.1 Text Detection Results
4.2 Wrong Detection
4.3 Comparative Performance Analysis
5 Conclusion and Future Works
References
59 Event Classification from the Twitter Stream Using Hybrid Model
1 Introduction
2 Related Work
2.1 Gated Recurrent Networks
3 Proposed Framework
3.1 Data Collection
3.2 Hybrid Deep Neural Network
4 Performance Evaluation of Proposed Framework
5 Result Analysis
5.1 Computational Complexity
6 Conclusion
References
60 Using an Ensemble Learning Approach on Traditional Machine Learning Methods to Solve a Multi-Label Classification Problem
1 Introduction
2 Related Works
3 Dataset Description
4 Proposed Process Workflow
5 Combining Methods Used in Ensemble Model
5.1 Naive Bayes Classifier
5.2 Random Forest Classifier
5.3 XGBoost Classifier
6 Experimentation
6.1 Evaluation Metrics
6.2 Dummy Model and Baselining
6.3 Experimental Results
6.4 Ensemble Learning Using Voting Classifier
7 Conclusion and Future Works
References
61 Automatic Building Extraction from High-Resolution Satellite Images Using Deep Learning Techniques
1 Introduction
2 Related Literature
3 Methodology
3.1 U-Net
3.2 Residual Unet (ResUnet)
3.3 Customizations in Original U-Net and ResUnet Architectures
4 Experiments and Analysis
4.1 Dataset Used
4.2 Evaluation Metrics
4.3 Comparison
4.4 Result Analysis
5 Conclusion
References
62 Epileptic Seizures Classification Based on Deep Neural Networks
1 Introduction
1.1 Deep Learning
1.2 Deep Neural Network Architectures
2 Proposed Methodology
3 Results
4 Conclusion
References
63 Analysis for Malicious URLs Using Machine Learning and Deep Learning Approaches
1 Introduction
2 Related Work
3 Malicious URL Detection Using Machine Learning
4 K-Nearest Neighbor (KNN) Classifier
4.1 Naive Bayes
4.2 Support Vector Machines (SVM)
4.3 Linear Regression
4.4 Decision Tree
4.5 Logistic Regression
4.6 Random Forests
4.7 Adaptive Boosting (AdaBoost)
5 Malicious URL Detection Using Deep Learning
6 Open Problems
7 Conclusion
References
64 Engaging Smartphones and Social Data for Curing Depressive Disorders: An Overview and Survey
1 Introduction
2 Overview
2.1 Inputs
2.2 Social Media Data Extraction
2.3 Sentiment Analysis Technique
2.4 Prediction/Recommendation Technique
3 Related Work
3.1 Prediction Through Mobile Phone Usage Patterns
3.2 Prediction Through Social Media
3.3 Activity Recommendation
4 Discussion and Proposal
5 Conclusion
References
65 Transfer Learning Approach for the Diagnosis of Pneumonia in Chest X-Rays
1 Introduction
2 Literature Survey
3 Model Architecture
4 Material and Methodology
4.1 Image Preprocessing and Data Augmentation
4.2 Methodology
5 Results and Discussion
6 Conclusion and Future Scope
References
66 Physical Sciences: An Inspiration to the Neural Network Training
1 Introduction
2 Proposed Work
3 Experimental Results
4 Conclusion
References
67 Deep Learning Models for Crop Quality and Diseases Detection
1 Introduction
2 Background
3 Materials and Methods
3.1 Input Data
3.2 Preprocessing
3.3 AlexNet
3.4 ResNet50
3.5 Parameter
4 Result and Analysis
4.1 Results of AlexNet
4.2 Results of ResNet
4.3 Comparison of AlexNet and ResNet50 Results
5 Conclusion
References
68 Clickedroid: A Methodology Based on Heuristic Approach to Detect Mobile Ad-Click Frauds
1 Introduction
2 Proposed Methodology
2.1 Terminologies Related to the Proposed Methodology
3 Implementation
4 Results and Discussion
5 Literature Survey and Comparisons
6 Conclusion and Future Direction of Work
References
69 Machine Translation System Using Deep Learning for Punjabi to English
1 Introduction
2 Literature Survey
3 Corpus Development
3.1 WMT 2015 English Corpus
3.2 Brills Bilingual Newspaper
3.3 TDIL Corpus
3.4 Gyan Nidhi Corpus
3.5 Corpus Statistics
4 Experiments
4.1 Set-Up
4.2 Dataset Details
4.3 Data Pre-processing
4.4 Training Details
4.5 Results
5 Conclusion and Future Work
References
70 The Agile Deployment Using Machine Learning in Healthcare Service
1 Introduction
1.1 Challenges of Traditional Model and Agile Deployment
2 Principles of Agile with Machine Learning
3 Literature Review
4 Problem Statements
5 How Machine Learning Affects Agile Development
6 Proposed Actuarial Model in Healthcare
7 Advantage of Actuarial Model in Healthcare
8 Conclusion and Future Scope
References
71 Pneumonia Detection Using MPEG7 for Feature Extraction Technique on Chest X-Rays
1 Introduction
2 Dataset Description
3 Related Works
4 Experimental Setup
4.1 Feature Extraction Stage: MPEG7 (Moving Picture Experts Group (Version 7))
4.2 Data Preprocessing Stage: Principal Component Analysis (PCA)
4.3 Architecture: Artificial Neural Network (ANN)
5 Model
6 Result and Discussion
7 Conclusion
References
72 Comparative Study of GANs Available for Audio Classification
1 Introduction to GANs
2 Audio GANs
2.1 WaveGAN
2.2 SpecGAN
2.3 SEGAN
3 Comparative Study
4 Conclusion
References
73 Extractive Summarization of EHR Notes
1 Introduction
1.1 Types of Summarization
1.2 Main Challenges in Summarization
2 Related Work
3 Proposed Extractive Summarization Technique
3.1 Dataset
3.2 Implementation
4 Results and Future Work
5 Conclusion
References
74 Feature Selection and Hyperparameter Tuning in Diabetes Mellitus Prediction
1 Introduction
2 Literature Work
3 Dataset Description
3.1 PIMA Indian Diabetes Dataset
3.2 Dataset Attributes Description
4 Dataset Visualization
5 Proposed Model
5.1 Data Cleaning and Transformation
5.2 Feature Selection
5.3 Hyperparameter Tuning
5.4 Model Evaluation
6 Conclusion
References
75 A Correlational Diagnosis Prediction Model for Detecting Concurrent Occurrence of Clinical Features of Chikungunya and Zika in Dengue Infected Patient
1 Introduction
1.1 Clinical Feature of Dengue
1.2 Clinical Feature of Chikungunya
1.3 Clinical Feature of Zika
1.4 Diagnosis Methodology for Concurrent Detection of Dengue, Chikungunya and Zika Virus Infection
2 Related Work
3 Proposed Model
4 Experiment Setup
4.1 Inference Engine to Diagnose Co-infection of Dengue, Zika and Chikungunya
4.2 Validation Results
5 Conclusion
References
76 Image Filtering Using Fuzzy Rules and DWT-SVM for Tumor Identification
1 Introduction
2 Proposed Method
2.1 FCM (Fuzzy C Means) Segmentation
2.2 Filtering Using Median Filter
2.3 Discrete Wavelet Transform (DWT) for Feature Extraction
2.4 Feature Reduction
2.5 SVM Classifier
3 Results and Discussion
4 Simulation Result
5 Conclusion
References
77 Multi-Class Classification of Actors in Movie Trailers
1 Introduction
2 Related Work
2.1 Face Detection
2.2 Face Tracking
2.3 Face Recognition
3 Dataset
4 Methodology
4.1 Convolutional Layer
4.2 Pooling Layer
4.3 ReLU Layer
4.4 Fully Connected Layer
5 Proposed Algorithm
6 Results and Discussion
6.1 Training
6.2 Testing
7 Conclusion and Future Work
References
78 Analysis of Machine Learning and Deep Learning Approaches for DDoS Attack Detection on Internet of Things Network
1 Introduction
1.1 Attack Motivation on IoT Devices
2 Attacks on IoT Network Using Botnet
3 Machine Learning-Based Approaches for DDoS Attack Defense on IoT
3.1 Suggestions to Mitigate a DDoS Attack [18]
4 Open Issues and Challenges
5 Conclusion
References
79 Image Retrieval Systems: From Underlying Feature Extraction to High Level Intelligent Systems
1 Introduction and Motivation
2 Feature Extraction Techniques
3 Performance Evaluation Metrics
4 Hybrid CBIR Systems and Their Performance
5 Hybrid and Intelligent CBIR Systems with Their Performance
6 Semantic Gap Reduction
7 Conclusion, Issues and Future Scope
References
80 A Combined Model of ARIMA-GRU to Forecast Stock Price
1 Introduction
2 Related Work
3 Proposed Method
3.1 Principle of Auto-Regressive Integrated Moving Average
3.2 Support Vector Machine
3.3 Gated Recurrent Unit
3.4 Fitting Input Stock Data with ARIMA Model
3.5 Forecasting Using SVM
3.6 Forecasting Using GRU
4 Results
4.1 Experimental Dataset
4.2 Experimental Environment
4.3 Experimental Results
4.4 Comparison of Results
5 Conclusion
References
Author Index

Algorithms for Intelligent Systems Series Editors: Jagdish Chand Bansal · Kusum Deep · Atulya K. Nagar

Mayank Dave · Ritu Garg · Mohit Dua · Jemal Hussien, Editors

Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences PCCDS 2020

Algorithms for Intelligent Systems Series Editors Jagdish Chand Bansal, Department of Mathematics, South Asian University, New Delhi, Delhi, India Kusum Deep, Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Atulya K. Nagar, School of Mathematics, Computer Science and Engineering, Liverpool Hope University, Liverpool, UK

This book series publishes research on the analysis and development of algorithms for intelligent systems with their applications to various real world problems. It covers research related to autonomous agents, multi-agent systems, behavioral modeling, reinforcement learning, game theory, mechanism design, machine learning, meta-heuristic search, optimization, planning and scheduling, artificial neural networks, evolutionary computation, swarm intelligence and other algorithms for intelligent systems. The book series includes recent advancements, modification and applications of the artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, fuzzy system, autonomous and multi agent systems, machine learning and other intelligent systems related areas. The material will be beneficial for the graduate students, post-graduate students as well as the researchers who want a broader view of advances in algorithms for intelligent systems. The contents will also be useful to the researchers from other fields who have no knowledge of the power of intelligent systems, e.g. the researchers in the field of bioinformatics, biochemists, mechanical and chemical engineers, economists, musicians and medical practitioners. The series publishes monographs, edited volumes, advanced textbooks and selected proceedings.

More information about this series at http://www.springer.com/series/16171

Mayank Dave · Ritu Garg · Mohit Dua · Jemal Hussien

Editors

Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences PCCDS 2020

Editors

Mayank Dave, Department of Computer Engineering, National Institute of Technology Kurukshetra, Kurukshetra, India
Ritu Garg, Department of Computer Engineering, National Institute of Technology Kurukshetra, Kurukshetra, India
Mohit Dua, Department of Computer Engineering, National Institute of Technology Kurukshetra, Kurukshetra, India
Jemal Hussien, School of Information Technology, Deakin University, Geelong, VIC, Australia

ISSN 2524-7565 ISSN 2524-7573 (electronic) Algorithms for Intelligent Systems ISBN 978-981-15-7532-7 ISBN 978-981-15-7533-4 (eBook) https://doi.org/10.1007/978-981-15-7533-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Committees

Patron
Dr. Satish Kumar, Director, NIT Kurukshetra

Advisory Committee
Dr. Elhadj Benkhelifa, Staffordshire University, UK
Dr. Sanjiv K. Bhatia, University of Missouri-St. Louis
Dr. Stefka Fidanova, Bulgarian Academy of Sciences, Bulgaria
Dr. Mario F. Pavone, University of Catania, Italy
Dr. Mukul V. Shirvaikar, The University of Texas at Tyler
Dr. Vicente García Díaz, University of Oviedo, Spain
Dr. Ljiljana Trajkovic, Simon Fraser University, Canada
Dr. Sheng-Lung Peng, Hualien City, Taiwan
Dr. Jerry Chun-Wei Lin, Western Norway University of Applied Sciences, Norway
Dr. Khaled Shaalan, The British University in Dubai
Dr. Farid Meziane, University of Salford, UK
Dr. Michael Sheng, Macquarie University, Sydney
Dr. Xiao-Zhi Gao, University of Eastern Finland, Finland
Dr. Marcin Paprzycki, Polish Academy of Sciences, Poland
Dr. Nishchal K. Verma, IIT Kanpur
Dr. Brahmjit Singh, NIT Kurukshetra
Dr. J. K. Chhabra, NIT Kurukshetra
Dr. A. K. Singh, NIT Kurukshetra
Dr. S. K. Jain, NIT Kurukshetra
Dr. R. K. Aggarwal, NIT Kurukshetra
Dr. Rajoo Pandey, NIT Kurukshetra
Dr. Umesh Ghanekar, NIT Kurukshetra


Organizing Committee

Organizing Chairs
Dr. Mayank Dave, NIT Kurukshetra
Dr. Ritu Garg, NIT Kurukshetra
Dr. Mohit Dua, NIT Kurukshetra

Organizing Secretaries
Dr. Ankit Jain, NIT Kurukshetra
Dr. Bharati Sinha, NIT Kurukshetra

Publicity Chairs
Dr. Arvind Kumar Jain, NIT Agartala
Dr. Jainendra Shukla, IIIT Delhi


Foreword

It is a matter of great pleasure to record that the International Conference on Paradigms of Computing, Communication and Data Sciences (PCCDS 2020) was organized in online mode by the Department of Computer Engineering, National Institute of Technology, Kurukshetra, during May 01–03, 2020. The conference theme, Computing, Communication and Data Sciences, covered a broad spectrum of topics of current interest to multidisciplinary researchers, academicians and professionals. Recent trends, applications and future challenges of technology were highlighted by distinguished experts from India and abroad, and the accepted papers and presentations were of high quality. The proceedings of the conference are published by Springer Nature Singapore Pte Ltd. under the 'Algorithms for Intelligent Systems' series and will serve as an excellent research reference in the above areas. Prof. Mayank Dave, Prof. Jemal Abawajy, Dr. Ritu Garg, Dr. Mohit Dua, Dr. Ankit Jain and Dr. Bharati Sinha deserve appreciation for the organization and conduct of this conference. The effort of their team and the institutional support are commendable.

Akhilesh Swarup
Dean (Planning and Development) and Professor of Electrical Engineering
National Institute of Technology Kurukshetra
Kurukshetra, Haryana, India


Preface

The International Conference on Paradigms of Computing, Communication and Data Sciences (PCCDS 2020) was organized by the Department of Computer Engineering at National Institute of Technology, Kurukshetra, India, from May 1 to 3, 2020. The event was technically sponsored by the Technical Education Quality Improvement Program (TEQIP-3) of the Government of India. The conference received 259 papers, out of which 80 contributions were finally selected for publication in the conference proceedings. The conference theme, Computing, Communication and Data Sciences, served as an invitation to discuss recent trends that are being followed and future challenges that are being faced by various researchers, academicians and professionals from all over the world. It particularly encouraged the interaction of researchers to build an academic community in an informal setting to present and discuss their developed works. The papers selected for the conference were grouped as per its theme: 32 in Computing, 20 in Communication and 28 in Data Sciences. The papers were presented in 12 conference sessions spread over three days. The contributed papers cover all the latest aspects of intelligent applications that are being developed in different fields of computer engineering, electrical engineering, and electronics and communication engineering. In addition to the contributed papers, five invited keynote speeches were delivered by Prof. Jemal Abawajy from Deakin University, Australia; Dr. Sriparna Saha from Indian Institute of Technology, Patna, India; Dr. Jagdish Chand Bansal from South Asian University, New Delhi, India; Dr. Maanak Gupta from Tennessee Technological University, TN, USA; and Dr. Utkarsh Shrivastava from Western Michigan University, USA. We express our deep appreciation and thanks to all the keynote speakers. We thank all reviewers, authors and participants for their contributions. We hope that these proceedings, being published under the book series 'Algorithms for Intelligent Systems' by Springer Nature, Singapore, will serve as an excellent reference book for scientific groups all over the world. We also trust that this book will stimulate further study and research in all thematic areas.

Mayank Dave, Kurukshetra, India
Mohit Dua, Kurukshetra, India
Ritu Garg, Kurukshetra, India
Jemal Hussien, Geelong, Australia

Acknowledgements

Success in life is never attained single-handedly. Our deepest gratitude to the 'Algorithms for Intelligent Systems' series editors, Dr. Jagdish Chand Bansal, Prof. Kusum Deep and Prof. Atulya K. Nagar, for their support and encouragement throughout the conference organizing and publishing work. Words are not enough to express our gratitude to Team Springer Nature, especially Aninda Bose, Senior Publishing Editor, Springer Nature, for his insightful comments and administrative help on various occasions. We are also thankful to Ms. Silky Abhay Sinha for her regular communication that helped us in maintaining the deadlines of the proceedings. We would also like to thank all the members of the Advisory Committee, the Technical Program Committee, reviewers, session chairs and participants for their stimulating questions and invaluable feedback. We are thankful to the Director, National Institute of Technology, Kurukshetra, for providing much-needed encouragement and extending necessary facilities. We are thankful to Prof. Sathans, Coordinator, TEQIP-3, for providing sponsorship for the conference. Our sincere thanks go to our department colleagues, especially the Department Advisory Committee (DAC), and all those who have directly or indirectly provided us moral support and other kinds of help.


Contents

Part I Computing

1 Real-Time Implementation of Enhanced Energy-Based Detection Technique . . . 3
Vatsala Sharma and Sunil Joshi

2 Pipeline Burst Detection and Its Localization Using Pressure Transient Analysis . . . 13
Aditya Gupta and K. D. Kulat

3 Improving Machine Translation Using Parts-Of-Speech Tags and Dependency Parsing . . . 27
Apoorva Jha and Philemon Daniel

4 Optimal Strategy for Obtaining Excellent Energy Storage Density in Polymer Nanocomposite Materials . . . 37
Daljeet Kaur, Tripti Sharma, and Charu Madhu

5 A Study of Aging-Related Bugs Prediction in Software System . . . 49
Satyendra Singh Chouhan, Santosh Singh Rathore, and Ritesh Choudhary

6 Mutual Authentication of IoT Devices Using Kronecker Product on Secure Vault . . . 63
Shubham Agrawal and Priyanka Ahlawat

7 Secure and Decentralized Crowdfunding Mechanism Based on Blockchain Technology . . . 79
Swati Kumari and Keyur Parmar

8 Efficient Use of Randomisation Algorithms for Probability Prediction in Baccarat Using: Monte Carlo and Las Vegas Method . . . 91
Avani Jindal, Janhvi Joshi, Nikhil Sajwan, Naman Adlakha, and Sandeep Pratap Singh

9 Comparative Analysis of Educational Job Performance Parameters for Organizational Success: A Review . . . 105
Sapna Arora, Manisha Agarwal, and Shweta Mongia

10 Digital Anthropometry for Health Screening from an Image Using FETTLE App . . . 123
Roselin Preethi and J. Chandra Priya

11 Analysis and Redesign of Digital Circuits to Support Green Computing Through Approximation . . . 137
Sisir Kumar Jena, Saurabh Kumar Srivastava, and Arshad Husain

12 Similarity-Based Data-Fusion Schemes for Missing Data Imputation in Univariate Time Series Data . . . 149
S. Nickolas and K. Shobha

13 Computational Study on Electronic Properties of Pd and Ni Doped Graphene . . . 167
Mehak Singla and Neena Jaggi

14 Design of an Automatic Reader for the Visually Impaired Using Raspberry Pi . . . 175
Nabendu Bhui, Dusayanta Prasad, Avishek Sinha, and Pratyay Kuila

15 Power Maximization Under Partial Shading Conditions Using Advanced Sudoku Configuration . . . 189
Gunjan Bharti, Venkata Madhava Ram Tatabhatla, and Tirupathiraju Kanumuri

16 Investigations on Performance Indices Based Controller Design for AVR System Using HHO Algorithm . . . 207
R. Puneeth Reddy and J. Ravi Kumar

17 Ethereum 2.0 Blockchain in Healthcare and Healthcare Based Internet-of-Things Devices . . . 225
Vaibhav Sagar and Praveen Kaushik

18 IoT-Based Solution to Frequent Tripping of Main Blower of Blast Furnace Through Vibration Analysis . . . 235
Kshitij Shinghal, Rajul Misra, and Amit Saxena

19 A Survey on Hybrid Models Used for Hydrological Time-Series Forecasting . . . 247
Shivashish Thakur and Manish Pandey

20 Does Single-Session, High-Frequency Binaural Beats Effect Executive Functioning in Healthy Adults? An ERP Study . . . 261
Ritika Mahajan, Ronnie V. Daniel, Akash K. Rao, Vishal Pandey, Rishi Pal Chauhan, and Sushil Chandra

21 Optimized Data Hiding for the Image Steganography Using HVS Characteristics . . . 275
Sahil Gupta and Naresh Kumar Garg

22 Impact of Imperfect CSI on the Performance of Inhomogeneous Underwater VLC System . . . 287
Rachna Sharma and Yogesh N. Trivedi

23 Pre-configured (p)-Cycle Protection for Non-hamiltonian Networks . . . 299
Vidhi Gupta, Rachna Asthana, and Yatindra Nath Singh

24 A Novel Approach to Multi-authority Attribute-Based Encryption Using Quadratic Residues with Tree Access Policy . . . 311
Anshita Gupta and Abhimanyu Kumar

25 An Improvement in Dense Field Copy-Move Image Forgery Detection . . . 323
Harsimran Kaur, Sunil Agrawal, and Anaahat Dhindsa

26 Scheduling-Based Energy-Efficient Water Quality Monitoring System for Aquaculture . . . 337
Rasheed Abdul Haq and V. P. Harigovindan

27 A Study of Code Clone Detection Techniques in Software Systems . . . 347
Utkarsh Singh, Kuldeep Kumar, and Deepak Kumar Gupta

28 An Improved Approach to Secure Digital Audio Using Hybrid Decomposition Technique . . . 361
Ankit Kumar, Shyam Singh Rajput, and Vrijendra Singh

29 Study on the Negative Transconductance in a GaN/AlGaN-Based HEMT . . . 377
Sujit Kumar Singh, Awnish Kumar Tripathi, and Gaurav Saini

30 Hybrid Anti-phishing Approach for Detecting Phishing Webpage Hosted on Hijacked Server and Zero-Day Phishing Webpage . . . 389
Ankush Gupta and Santosh Kumar

31 FFT-Based Zero-Bit Watermarking for Facial Recognition and Its Security . . . 403
Ankita Dwivedi, Madhuri Yadav, and Ankit Kumar

32 Comparative Analysis of Various Simulation Tools Used in a Cloud Environment for Task-Resource Mapping . . . 419
Harvinder Singh, Sanjay Tyagi, and Pardeep Kumar

Part II Communication

33 Study of Spectral-Efficient 400 Gbps FSO Transmission Link Derived from Hybrid PDM-16-QAM With CO-OFDM . . . 433
Mehtab Singh and Jyoteesh Malhotra

34 4 × 10 Gbps Hybrid WDM-MDM FSO Transmission Link . . . 443
Mehtab Singh and Jyoteesh Malhotra

35 Task Scheduling in Cloud Computing Using Hybrid Meta-Heuristic: A Review . . . 453
Sandeep Kumar Patel and Avtar Singh

36 Modulation Techniques for Next-Generation Wireless Communication-5G . . . 473
Sanjeev Kumar, Preeti Singh, and Neha Gupta

37 Muscle Artifact Detection in EEG Signal Using DTW Based Thresholding . . . 483
Amandeep Bisht and Preeti Singh

38 Human Activity Recognition in Ambient Sensing Using Sequential Networks . . . 493
Vinay Jain, Divyanshu Jhawar, Sandeep Saini, Thinagaran Perumal, and Abhishek Sharma

39 Towards the Investigation of TCP Congestion Control Protocol Effects in Smart Home Environment . . . 503
Pranjal Kumar and P. Arun Raj Kumar

40 Efficient Information Flow Based on Graphical Network Characteristics . . . 515
Rahul Saxena, Mahipal Jadeja, and Atul Kumar Verma

41 Tunable Optical Delay for OTDM . . . 527
P. Prakash, K. Keerthi Yazhini, and M. Ganesh Madhan

42 Game Theory Based Cluster Formation Protocol for Localized Sensor Nodes in Wireless Sensor Network (GCPL) . . . 535
Raj Vikram, Sonal Kumar, Ditipriya Sinha, and Ayan Kumar Das

43 SG_BIoT: Integration of Blockchain in IoT Assisted Smart Grid for P2P Energy Trading . . . 553
J. Chandra Priya, V. Ramanujan, P. Rajeshwaran, and Ponsy R. K. Sathia Bhama

44 Software Defined Network: A Clustering Approach Using Delay and Flow to the Controller Placement Problem . . . 565
Anilkumar Goudar, Karan Verma, and Pranay Ranjan

45 Netra: An RFID-Based Android Application for Visually Impaired . . . 575
Pooja Nawandar, Vinaya Gohokar, and Aditi Khandewale

46 Efficient Routing for Low Power Lossy Networks with Multiple Concurrent RPL Instances . . . 585
Jinshiya Jafar, J. Jaisooraj, and S. D. Madhu Kumar

47 Deep Learning-Based Wireless Module Identification (WMI) Methods for Cognitive Wireless Communication Network . . . 595
Sudhir Kumar Sahoo, Chalamalasetti Yaswanth, Barathram Ramkumar, and M. Sabarimalai Manikandan

48 Style Transfer for Videos with Audio . . . 607
Gaurav Kabra and Mahipal Jadeja

49 Development of Antennas Subsystem for Indian Airborne Cruise Missile . . . 619
Ami Jobanputra, Dhruv Panchal, Het Trivedi, Dhyey Buch, and Bhavin Kakani

50 A Literature Survey on LEACH Protocol and Its Descendants for Homogeneous and Heterogeneous Wireless Sensor Networks . . . 631
Anish Khan and Nikhil Marriwala

51 Performance Study of Ultra Wide Band Radar Based Respiration Rate Measurement Methods . . . 645
P. Bhaskara Rao, Srinivas Boppu, and M. Sabarimalai Manikandan

52 Secure Architecture for 5G Network Enabled Internet of Things (IoT) . . . 659
Voore Subba Rao, V. Chandra Shekar Rao, and S. Venkatramulu

Part III Data Sciences

53 Robust Image Watermarking Using DWT and Artificial Neural Network Techniques . . . 675
Anoop Kumar Chaturvedi, Piyush Kumar Shukla, Ravindra Tiwari, Vijay Kumar Yadav, Sachin Tiwari, and Vikas Sakalle

54 Fraud Detection in Anti-money Laundering System Using Machine Learning Techniques . . . 687
Ayush Kumar, Debachudamani Prusti, Daisy Das, and Shantanu Kumar Rath

55 A Smart Approach to Detect Helmet in Surveillance by Amalgamation of IoT and Machine Learning Principles to Seize a Traffic Offender . . . 701
Gaytri, Rishabh Kumar, and Uppara Rajnikanth

56 Botnet Detection Using Machine Learning Algorithms . . . 717
Chirag Joshi, Vishal Bharti, and Ranjeet Kumar Ranjan

57 Estimation of Daily Average Global Solar Radiance Using Ensemble Models: A Case Study of Bhopal, Madhya Pradesh Meteorological Dataset . . . 729
Megha Kamble and Sudeshna Ghosh

58 Text Localization in Scene Images Using Faster R-CNN with Double Region Proposal Networks . . . 739
Pragya Hari and Rajib Ghosh

59 Event Classification from the Twitter Stream Using Hybrid Model . . . 751
Neha Singh, M. P. Singh, and Prabhat Kumar

60 Using an Ensemble Learning Approach on Traditional Machine Learning Methods to Solve a Multi-Label Classification Problem . . . 761
Siddharth Basu, Sanjay Kumar, Sirjanpreet Singh Banga, and Harshit Garg

61 Automatic Building Extraction from High-Resolution Satellite Images Using Deep Learning Techniques . . . 773
Mayank Dixit, Kuldeep Chaurasia, and Vipul Kumar Mishra

62 Epileptic Seizures Classification Based on Deep Neural Networks . . . 785
Annangi Swetha and Arun Kumar Sinha

63 Analysis for Malicious URLs Using Machine Learning and Deep Learning Approaches . . . 797
Santosh Kumar Birthriya and Ankit Kumar Jain

64 Engaging Smartphones and Social Data for Curing Depressive Disorders: An Overview and Survey . . . 809
Srishti Bhatia, Yash Kesarwani, Ashish Basantani, and Sarika Jain

65 Transfer Learning Approach for the Diagnosis of Pneumonia in Chest X-Rays . . . 821
Kuljeet Singh Sran and Sachin Bagga

66 Physical Sciences: An Inspiration to the Neural Network Training . . . 833
Venkateswarlu Gudepu, Anshita Gupta, and Parveen Kumar

67 Deep Learning Models for Crop Quality and Diseases Detection . . . 843
Priyanka Sahu, Anuradha Chug, Amit Prakash Singh, Dinesh Singh, and Ravinder Pal Singh

68 Clickedroid: A Methodology Based on Heuristic Approach to Detect Mobile Ad-Click Frauds . . . 853
Pankaj Kumar Keserwani, Vedant Jha, Mahesh Chandra Govil, and Emmanuel S. Pilli

69 Machine Translation System Using Deep Learning for Punjabi to English . . . 865
Kamal Deep, Ajit Kumar, and Vishal Goyal

70 The Agile Deployment Using Machine Learning in Healthcare Service . . . 879
Shanu Verma, Rashmi Popli, and Harish Kumar

71 Pneumonia Detection Using MPEG7 for Feature Extraction Technique on Chest X-Rays . . . 891
Abhishek Sharma, Nitish Gangwar, Ashish Yadav, Harshit Saini, and Ankush Mittal

72 Comparative Study of GANs Available for Audio Classification . . . 901
Suvitti and Neeru Jindal

73 Extractive Summarization of EHR Notes . . . 909
Ajay Chaudhary, Merlin George, and Anu Mary Chacko

74 Feature Selection and Hyperparameter Tuning in Diabetes Mellitus Prediction . . . 921
Rashmi Arora, Gursheen Kaur, and Pradeep Gulati

75 A Correlational Diagnosis Prediction Model for Detecting Concurrent Occurrence of Clinical Features of Chikungunya and Zika in Dengue Infected Patient . . . 933
Rajeev Kapoor, Sachin Ahuja, and Virender Kadyan

76 Image Filtering Using Fuzzy Rules and DWT-SVM for Tumor Identification . . . 945
Rahul Dubey and Anjali Pandey

77 Multi-Class Classification of Actors in Movie Trailers . . . 953
Prashant Giridhar Shambharkar, Gaurang Mehrotra, Kanishk Singh Thakur, Kaushal Thakare, and Mohammad Nazmud Doja

78 Analysis of Machine Learning and Deep Learning Approaches for DDoS Attack Detection on Internet of Things Network . . . 967
Aman Kashyap and Ankit Kumar Jain

79 Image Retrieval Systems: From Underlying Feature Extraction to High Level Intelligent Systems . . . 977
Shefali Dhingra and Poonam Bansal

80 A Combined Model of ARIMA-GRU to Forecast Stock Price . . . 987
Sangeeta Saha, Neema Singh, Biju R. Mohan, and Nagaraj Naik

Author Index . . . 999

About the Editors

Dr. Mayank Dave is working as a Professor in the Department of Computer Engineering, NIT Kurukshetra, India, with more than 27 years of experience in academic and administrative affairs. Prof. Dave received his M.Tech. and Ph.D. from IIT Roorkee, India. He has published approximately 200 research papers, delivered expert lectures and keynote addresses and chaired technical sessions in India and abroad, including the USA, Italy, China, Singapore and Thailand. He has coordinated research and development projects in the institute. He has written a book titled "Computer Networks". He has guided fifteen Ph.D.s and several M.Tech. theses. His research interests include mobile networks, cybersecurity and cloud computing.

Dr. Ritu Garg received her B.Tech. degree in Computer Science and Engineering from Punjab Technical University, Jalandhar, and her M.Tech. from Kurukshetra University, Kurukshetra, in 2001 and 2006, respectively. She received her Ph.D. in the area of Grid Computing from National Institute of Technology, Kurukshetra, India. She joined the Department of Computer Engineering as an Assistant Professor at National Institute of Technology, Kurukshetra, India, in 2008. Her research interests include grid computing, cloud computing, Internet of things, fault tolerance and security. She has published numerous research papers in national/international journals and conferences, mainly in the area of energy management and reliability in grid computing, cloud computing and IoT.

Dr. Mohit Dua received his B.Tech. degree in Computer Science and Engineering from Kurukshetra University, Kurukshetra, India, in 2004, and his M.Tech. degree in Computer Engineering from National Institute of Technology, Kurukshetra, India, in 2012. He received his Ph.D. in the area of Speech Recognition from National Institute of Technology, Kurukshetra, India, in 2018. He is presently working as an Assistant Professor in the Department of Computer Engineering at NIT Kurukshetra, India, with more than 14 years of academic experience. He is a life member of the Computer Society of India (CSI) and the Indian Society for Technical Education (ISTE). His research interests include speech processing, theory of formal languages, statistical modelling and natural language processing. He has published approximately 50 research papers and has presented papers abroad, including in the USA, Canada, Australia, Singapore and Dubai.

Dr. Jemal Hussien is a Full Professor at the Faculty of Science, Engineering and Built Environment, Deakin University, Australia. He was awarded the higher doctoral degree, Doctorate of Science (D.Sc.), in 2016, by Deakin University for his outstanding research achievements. He is a senior member of the IEEE Society; the IEEE Technical Committee on Scalable Computing (TCSC); the IEEE Technical Committee on Dependable Computing and Fault Tolerance and the IEEE Communication Society, and a founding member of the IEEE Communication Society Technical Committee on Big Data. He is the author of 15 books and more than 350 refereed articles in premier venues such as IEEE Transactions on Computers, IEEE Transactions on Cloud Computing, IEEE Communications Surveys and Tutorials, IEEE Transactions on Fuzzy Systems, IEEE Transactions on Emerging Topics in Computing, IEEE Transactions on Information Technology in Biomedicine, IEEE Transactions on Evolutionary Computation and IEEE Transactions on Services Computing. He has also edited 10 conference volumes and about 20 special issue topics in highly ranked journals such as IEEE Transactions on Cloud Computing, Journal of Computer and System Sciences (Elsevier), Cluster Computing Journal (Elsevier), Journal of Network and Computer Applications (Elsevier), and Concurrency and Computation: Practice and Experience and Future Generation Computer Systems (Elsevier).

Part I

Computing

Chapter 1

Real-Time Implementation of Enhanced Energy-Based Detection Technique
Vatsala Sharma and Sunil Joshi

1 Introduction

Next-generation networks have increased the demand for spectrum, and efficient spectrum utilization is needed to meet user demand. Cognitive radio technology is used to opportunistically access the available spectrum bands, also known as spectrum holes. Sensing of the spectrum is crucial for exploiting the spectrum bands that are not in use in any cognitive radio environment. Many detection techniques have been studied in the literature so far; commonly studied and analyzed detection methods include energy-based detection, feature-based detection, matched filter-based detection, eigenvalue-based detection, etc. Energy-based detection is the most commonly researched technique [1] due to its ease of implementation. The basis of energy detection techniques comes from the work done by Urkowitz [2]. It is a non-coherent technique which can blindly detect unknown signals without any information about the characteristic features of the signal transmitted by the licensed primary user. It works on the principle of binary hypothesis testing, given by [2]:

C(T) = A(T) + B(T) : Z0
C(T) = B(T) : Z1   (1)

where C(T) denotes the signal at the receiver node from the secondary user, A(T) is the signal broadcast from the primary node and B(T) is the error signal. Z0 and Z1 symbolize the binary hypothesis results of the existence and non-existence of the information signal, respectively. That is, Z0 symbolizes that the channel band is in use by the primary node and thus the channel is busy, while Z1 symbolizes that the primary node's information signal does not exist and therefore the channel is idle.

Cooperation among cognitive (secondary) nodes enhances the detection accuracy of energy-based detection [3]. The traditional cooperative energy-based detection technique generally compares the measured energy of the received signal with a single predefined threshold value. Researchers have further optimized the detection accuracy by using double threshold and triple threshold-based energy detection [4]. In double threshold detection, two threshold values are defined and compared with the test statistic, and the decision on the existence or non-existence of the primary user is taken accordingly [5], while three threshold values are considered in the case of triple threshold-based energy detection [6]. Researchers have shown that increasing the number of threshold values results in improved detection accuracy, even in low SNR regimes [7]. The proposed design, based on energy measurements along with cooperation among secondary users, is optimized using multiple threshold values, and the proposed algorithm is implemented in real time using the wireless access research platform. To the best of our knowledge, a multiple threshold-based energy detection technique has not been studied in the literature so far.

The remainder of the paper is organized as follows. Section 2 explains the system architecture of the proposed energy-based detection technique and the test statistic used to analyze its performance. In Sect. 3, the proposed design is implemented in a real-time scenario using the wireless access research platform, and the methodology and implementation steps are explained in detail. Section 4 analyzes the detection accuracy of the proposed energy detection technique with the help of simulation results, including the post-implementation results. Finally, Sect. 5 concludes the paper.

2 System Architecture

The architecture of the system consists of a single primary node, three cognitive (secondary) users and a decision center, as depicted in Fig. 1. Every secondary user performs energy-based detection for sensing the channel with a different threshold. Each secondary user (SU) compares the measured energy with the predefined threshold values λ and thus decides on the existence of the primary user (PU). The conclusion is forwarded to the fusion center, where the final result is hard-combined to decide on the occurrence of primary node signal transmission. The hard fusion schemes include the majority (k-by-n) rule, the OR-based rule and the AND-based rule [8]. The OR rule is mostly used for fusing multiple decisions in fading environments, while the AND rule is mostly used in hardware implementations to avoid the high data rates which cause data overhead. Above all, the majority rule performs better than both the AND and OR rule-based hard fusion schemes [9].

The energy-based detector works on the principle of non-coherent detection; that is, it can detect unknown signals. In this technique, the energy of the incoming broadcast signal is measured and compared with a reference threshold value. The received signal is converted to digital IQ format, in case the signal


Fig. 1 A system model for multiple threshold-based energy detection

is analog, followed by its FFT transformation as depicted in Fig. 2. The energy of the signal is measured using a square-law device, followed by evaluating its average value. The measured value is compared with predefined threshold values λi. The test statistic defined for energy detection is given by [10]:

Ti = Σ_{t=1}^{k} ( |yi(t)| / σn )²   (2)

where t is the sample index, k denotes the number of samples, i indexes the cognitive nodes (secondary users) and σn is the noise standard deviation of the circularly symmetric complex AWGN. If the detected signal energy received at the cognitive node is larger than the reference threshold, the detector decides that the spectrum band is utilized by the primary node; otherwise, the signal is absent and the channel is vacant.
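As a concrete illustration of the per-node decision, the following is a minimal NumPy sketch of Eq. (2); the noise level, threshold value and toy waveform are illustrative assumptions, not the authors' WARP implementation.

```python
import numpy as np

def energy_detector(y, noise_std, threshold):
    """Single-node energy detection (Eq. 2): compare the normalized
    energy T_i of the received samples against a threshold lambda_i."""
    T = np.sum(np.abs(y / noise_std) ** 2)
    return T >= threshold  # True -> primary user present (hypothesis Z0)

# toy usage: k = 1000 complex samples of a weak tone in unit-power noise
rng = np.random.default_rng(0)
noise = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
tone = 0.3 * np.exp(2j * np.pi * 0.1 * np.arange(1000))
print(energy_detector(tone + noise, noise_std=1.0, threshold=1100))
```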

Fig. 2 Energy-based detection method implemented by secondary user


The performance of the proposed multiple-threshold design is analyzed on the basis of the detection parameters. The detection probability, denoted Pd, is defined as the correct detection of the primary user signal when the primary user is transmitting, whereas the false alarm probability, denoted Pf, is the sensing of the presence of a signal transmitted by the primary user while the primary node is not actually transmitting. Mathematically, the detection probability for the ith secondary node is expressed by Eq. (3), and Eq. (4) expresses the false alarm probability for the ith secondary user [11, 12]:

Pd = P{Ti ≥ λi | B1} = Qu(√(2γi), √λi)   (3)

Pf = P{Ti ≥ λi | B0} = Γ(u, λi/2) / Γ(u)   (4)

where λ denotes the reference threshold value with which the test statistic is compared, γ is the signal-to-noise ratio of the signal present at the cognitive node, Qu represents the generalized Marcum Q-function, and Γ(.,.) and Γ(.) denote the incomplete and complete gamma functions, respectively.
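Equations (3) and (4) can be evaluated numerically to trace ROC curves such as those discussed in Sect. 4. The sketch below uses the identities that Pf equals the chi-square survival function with 2u degrees of freedom and that the generalized Marcum Q-function equals a noncentral chi-square survival function; the time-bandwidth product u and SNR value are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def roc_point(lam, u, snr):
    """Pd and Pf of the energy detector for threshold lam (Eqs. 3-4)."""
    pf = chi2.sf(lam, df=2 * u)              # Gamma(u, lam/2) / Gamma(u)
    pd = ncx2.sf(lam, df=2 * u, nc=2 * snr)  # Q_u(sqrt(2*snr), sqrt(lam))
    return pd, pf

# sweep the threshold to trace one ROC curve, e.g. for u = 5, SNR = 10
curve = [roc_point(lam, u=5, snr=10.0) for lam in np.linspace(0.1, 80, 200)]
```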

3 Hardware Implementation

Real-time implementation of the optimized energy detection technique is done using the WARP v3 kit (wireless access research platform) developed by researchers at Rice University; it operates in the ISM bands. The hardware implementation uses two WARP nodes, with one radio acting as the primary user and the rest acting as secondary users. The primary node transmits the signal that is detected by the cognitive (secondary) nodes. The transmitter block with its process description is given in Fig. 3, whereas the receiver block diagram, which is just the reverse of the transmitter block, is presented in Fig. 4. At the transmitter end, the signal is generated by a random source in IQ bit format; it is then modulated, interpolated to increase the sampling frequency and up-converted. At the receiver end, the received signal is decimated to recover the actual sampling frequency of the signal, followed by the energy-based detector. The traditional energy-based detection method includes

Fig. 3 Block diagram of WARP transmitter node


Fig. 4 Block diagram of WARP receiver node

the calculation of the energy of the recovered signal using the fast Fourier transform, which is compared with the threshold to obtain a decision bit at the fusion center. The results from all the secondary users need to be combined to get the final decision. The proposed energy detection technique is based on a hard-combining decision fusion scheme [13], which may be AND-based, OR-based or majority rule-based, where the final decision depends on the majority of the decisions [14]. If the majority of secondary users decide that the signal transmitted by the primary node is present, the final result will indicate its presence; if the majority of secondary users indicate the non-existence of the signal transmitted by the primary user, the decision fusion result will show that the primary user is absent [15].

The algorithm to implement energy-based detection with cooperation among secondary users using the WARP v3 kit is as follows [16]:

Step 1: Define the parameters and initialize, loading the global definitions to the WARP node.
Step 2: Set up the radio parameters to enable the WARP node to sense the received signal, and trigger the node to start the reception.
Step 3: Read and store the received samples from the WARP node.
Step 4: Reset and disable the WARP node.
Step 5: Calculate the energy using the test statistic for the signal received by the WARP node using the FFT, and plot the FFT of the received waveform samples.
Step 6: Plot the IQ bits of the received waveform to indicate the existence or non-existence of the signal transmitted by the primary node.
Step 7: Measure and plot the received signal strength RSSI (dBm) of the received signal.
Step 8: Close the socket.
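As an illustration of the hard-combining step, the following is a minimal sketch of majority-rule fusion over the per-node decision bits; the decision values shown are illustrative, not the authors' WARP code.

```python
def majority_fusion(decisions):
    """Hard combining at the fusion center: declare the primary user
    present when more than half of the secondary users report it."""
    return sum(decisions) > len(decisions) / 2

# three secondary users, as in the system model of Fig. 1
print(majority_fusion([True, True, False]))  # True -> channel busy
```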

4 Experimental Results

4.1 Results of Simulation of Proposed Algorithm

The simulated ROC curves of the proposed multiple-threshold energy detection are analyzed in terms of detection accuracy, expressed through the false alarm probability and the detection probability.

Fig. 5 Pd versus Pf curves with multiple threshold values

The ROC curves of false alarm probability versus detection probability shown in Fig. 5 depict that the detection accuracy improves as the threshold value increases. Since the final decision combines the decisions taken at the different threshold levels, the overall accuracy of the system is improved. The detection accuracy of the proposed design is compared with that of conventional energy detection based on a single threshold value, as shown in Fig. 6. The corresponding ROC curve shows that the detection accuracy of the single-threshold energy detector is 40% less than that of the enhanced energy detector based on multiple threshold values.

4.2 Results of Real-Time Implementation of Proposed Algorithm

The proposed energy detector is then implemented in a real-time environment using the wireless access research platform, and the post-implementation results are analyzed. As observed in Fig. 7, the detection accuracy of the implemented enhanced energy detector is almost identical to the simulated detection accuracy measured before the hardware implementation.


Fig. 6 Pd versus Pf curve for cooperative sensing proposed and conventional energy detection schemes

Fig. 7 ROC curve for proposed energy detector before and after the hardware implementation


Fig. 8 ROC curves for Pd versus Pf for different threshold values for both simulation and implementation

Finally, the individual receiver operating characteristics for detection and false alarm probability are also studied with the help of Fig. 8. It shows that the detection accuracy of the enhanced energy detector improves with increasing value of lambda in both the simulation and implementation results. The slight variation is due to unwanted noise and interference signals during hardware implementation. The curves with threshold values λ1 Sim, λ2 Sim and λ3 Sim represent the ROC curves of false alarm probability versus detection probability before hardware implementation, and the curves with threshold values λ1 Imp, λ2 Imp and λ3 Imp represent those after the hardware implementation.

5 Conclusion

The traditional single and dual threshold-based energy detection techniques have been extended to a multiple threshold energy-based detector (MTED). The performance of the developed MTED is analyzed with the help of ROC curves between detection and false alarm probability and compared with the traditional energy-based detection method. It is observed that the detection accuracy of the proposed technique is 40% higher than that of the conventional energy detector. The effect of different threshold values on the detection accuracy of the proposed technique is analyzed using the ROC curves. The slight difference between the simulation and implementation results is due to noise and interference during hardware implementation.

References

1. Sharma V, Joshi S (2018) A literature review on spectrum sensing in cognitive radio applications. Proc IEEE 2:883–893
2. Urkowitz H (1967) Energy detection of unknown deterministic signals. Proc IEEE 55:523–531
3. Lee W, Kim M, Cho DH (2019) Deep cooperative sensing: cooperative spectrum sensing based on convolutional neural networks. IEEE Trans Veh Technol 68(3)
4. Smriti C (2018) Double threshold based cooperative spectrum sensing with consideration of history of sensing nodes in cognitive radio networks. In: 2nd international conference on power, energy and environment: towards smart technology (ICEPE), Proceedings of IEEE
5. Rabiee R, Li KH (2013) Throughput optimization of double-threshold based improved energy detection in cooperative sensing over imperfect reporting channels. In: 9th international conference on information, communications and signal processing, Tainan, pp 1–5
6. Smriti CC (2018) Double threshold based cooperative spectrum sensing with consideration of history of sensing nodes in cognitive radio networks. In: 2nd international conference on power, energy and environment: towards smart technology (ICEPE), Proceedings of IEEE
7. Chen C, Chen Y, Qian J, Xu J (2018) Triple-threshold cooperative spectrum sensing algorithm based on energy detection. In: 5th international conference on systems and informatics (ICSAI), Proceedings of IEEE
8. Mishra S, Sahai A, Brodersen R (2006) Cooperative sensing among cognitive radios. In: Proceedings of IEEE international conference on communication, vol 2. Istanbul, Turkey, pp 1658–1663
9. Bhattacharya A, Chakravarty D, Das A, Chatterjee D, Majumder S, Byabarta N (2017) Performance optimization of a soft and hard detector in cognitive radio environment using WARP. In: 4th international conference on opto-electronics and applied optics (optronix), Proceedings of IEEE
10. Manna T, Misra IS (2019) Design of resource/energy-efficient energy detector for real-time cognitive radio using WARP. In: International conference on opto-electronics and applied optics (optronix), Proceedings of IEEE
11. Yong-sheng D, Meng-bo Z, Yong-wang A (2019) Optimization of cooperative spectrum sensing for OFDM in multiuser scenarios. In: 8th joint international information technology and artificial intelligence conference (ITAIC), Proceedings of IEEE
12. Akyildiz IF, Lee WY, Chowdhury KR (2009) CRAHNs: cognitive radio ad hoc networks. Ad Hoc Networks (Elsevier) 7:810–836
13. Manna T, Mishra IS (2019) Design of resource/energy-efficient energy detector for real-time cognitive radio using WARP. In: International conference on opto-electronics and applied optics (optronix), Proceedings of IEEE
14. Haykin SM (2007) Cognitive radio and radio networks. In: INFWEST seminar, Helsinki, 27–28 June 2007
15. Zhang W, Mallik R, Letaief K (2008) Cooperative spectrum sensing optimization in cognitive radio networks. In: Proceedings of IEEE international conference on communication, pp 3411–3415
16. Rice University (2017) Wireless open access research platform. Available https://warp.rice.edu

Chapter 2

Pipeline Burst Detection and Its Localization Using Pressure Transient Analysis
Aditya Gupta and K. D. Kulat

1 Introduction

Non-revenue water (NRW) loss is an important issue that needs to be resolved. Pipeline bursts causing water losses in the water distribution system (WDS) generally occur when an old, corroded pipeline remains under high pressure during low-demand hours. Burst events in a WDS need to be detected and localized as soon as they occur; this makes pipeline repair faster and easier. Wireless sensor technology has advanced to the point that it can provide real-time online monitoring of water infrastructure and related parameters such as flow and pressure. Data acquired from these sensors, when combined with efficient data processing techniques, enables automatic detection and localization of irregularities such as burst events in the pipeline.

Leakages are determined using acoustic and non-acoustic techniques. Acoustic devices can be unreliable for leaks in non-metallic and larger diameter pipes, and their effectiveness also depends on the experience of the user. These techniques are also time-consuming and have limited surveying areas [1]. Non-acoustic sensors such as ground-penetrating radar [2] and infrared thermography [3] are still in the development phase and suffer from limited surveying range and false detection [4]. Moreover, their implementation results in real complex WDSs still need to be verified [5]. Trace gases are a more efficient technique for leakage detection when compared to other non-acoustic techniques, but have high implementation cost [6, 7]. Inverse transient analysis (ITA) of the pressure signal has attracted the attention of researchers in the recent past for leakage detection in pipeline systems [8, 9]. Earlier ITA-based leakage detection techniques have presented numerical case studies; laboratory experiments under controlled environments and limited field tests have not been able to achieve the level of validation required in complex systems under a wide range of conditions [10].

A burst event in the pipeline creates negative pressure waves (NPWs) which reflect across the pipeline network and lead to an abrupt pressure drop, causing a discontinuity in the observed pressure signal. The magnitude of the first NPW is sufficient for the detection of burst events. Thus, burst events can be detected by performing a transient analysis of the pressure signal, a technique which has gained a lot of attention among researchers over the last decade [11]. Wavelets can identify discontinuities easily; thus, the wavelet is a suitable tool for identification and localization of burst events, although wavelet analysis also suffers from occasional false detection [12]. Cumulative sum (CUSUM) analysis is another way of identifying irregularities, but CUSUM suffers from a slow response: it cannot identify the exact arrival of the NPW waveform at the sensor node, making localization of the burst event difficult. Localization of burst events is performed by utilizing the time difference between the arrivals of NPWs at different pressure sensor nodes; based upon this time difference, the probable burst location is identified. This technique suffers from a localization error of up to 30–50 m [11, 13], so secondary techniques such as acoustic devices are needed for the exact localization of the burst. Reducing the localization error would make exact localization of burst events possible without the use of secondary devices.

2 Proposed Algorithm

The burst event in the pipeline is detected by performing a transient analysis of the pressure signal. The pressure signal is sensed using pressure sensors (MPX10DP), and data is collected at a rate of 250 samples/s. De-noising of the pressure signal is performed using wavelet de-noising, and the burst event is detected by performing Haar wavelet and cumulative sum (CUSUM) analyses of the transient pressure signal. It is also important to detect the location of the burst, as this reduces the water losses, time and manpower required to localize the burst event. The burst location is detected by analyzing the time delay in the arrival of the negative reflected wave at different pressure sensors. The Dijkstra algorithm is utilized for finding the shortest distance between a sensor node and the burst location, and nodal matrix analysis is proposed for minimization of the localization error. Bursts in the pipeline are created by sudden opening and closing of valves. The flowchart of the proposed algorithm is shown in Fig. 1.


Fig. 1 Flowchart of the proposed algorithm

2.1 Wavelet De-Noising

Smoothing of the pressure signal is performed using wavelet de-noising. The 'Symlets8' wavelet is used for de-noising. The pressure signal is decomposed into approximate and detail coefficients, and the decomposition is performed up to four stages. Soft thresholding is used for the de-noising, with a global (universal) threshold given by:

δ = σ √(2 log N)   (1)

where N is the total number of points and σ is the standard deviation. Soft thresholding shrinks the detail coefficients of the noisy pressure signal: coefficients whose magnitude is below the threshold are set to zero, while those above it are shrunk toward zero by the threshold value. The signal is reconstructed using the inverse wavelet transform.
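The de-noising step can be sketched with the PyWavelets package as below; estimating σ from the finest detail band via the median absolute deviation is a common convention and an assumption here (the paper states only that σ is the standard deviation).

```python
import numpy as np
import pywt

def denoise(signal, wavelet="sym8", level=4):
    """Wavelet de-noising: 4-level 'sym8' decomposition, universal
    (global) threshold of Eq. (1), soft thresholding, reconstruction."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise std estimate
    delta = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, delta, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```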

2.2 Wavelet Analysis for Burst Detection

Multistage wavelet decomposition of the de-noised pressure signal is performed using the Haar wavelet. The stepwise shape of the Haar wavelet matches the step-like pressure variation at the time of a burst, which enhances the correlation between them; the Haar wavelet can therefore efficiently identify the burst event after analyzing the pressure signal. The wavelet coefficients provide information about the identified signal features, and in this study the feature of interest is the burst event. Wavelet decomposition of the de-noised pressure signal is performed up to five levels, with multiple-level decomposition applied to the detail coefficients of the observed pressure signal. Multiple high-amplitude peaks are observed in the pressure signal at the time of the burst when compared to the peaks observed under normal conditions, and as the decomposition level increases the number of peaks decreases. The arrival of the NPW corresponding to the burst event is the highest of these multiple peaks. However, such peaks can also arise from hydraulic operations such as valve and pump actions, which can be falsely detected as a burst event. Therefore, further analysis is required: Haar wavelet analysis alone is not sufficient for confident detection of burst events.
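A minimal PyWavelets sketch of this multistage Haar decomposition follows; the rough mapping of a detail-coefficient index back to a sample index (scaling by 2^level) and the function name are illustrative assumptions.

```python
import numpy as np
import pywt

def burst_candidate(denoised, level=5):
    """Five-level Haar decomposition; returns an approximate sample
    index of the largest detail coefficient at the coarsest level,
    a candidate NPW arrival instant."""
    coeffs = pywt.wavedec(denoised, "haar", level=level)
    detail = coeffs[1]                     # detail band at `level`
    peak = int(np.argmax(np.abs(detail)))  # strongest discontinuity
    return peak * 2 ** level               # rough index in raw signal
```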

2.3 CUSUM Algorithm

CUSUM is adopted here for the identification of discontinuities in the observed pressure signal for efficient burst event detection; CUSUM is also immune to background noise. Since a burst event causes a drop in the pressure signal, negative CUSUM is applied for burst detection [12, 14], defined by:

T0 = 0   (2)

Tn = Σ_{k=1}^{n} ( Xk − μ0 + Vm/2 )   (3)

Mn = max_{1≤k≤n} (Tk)   (4)

Burst event: when Mn − Tn > λ, set tc = t   (5)

where k = 1, 2, 3, …, n; Xk is the pressure value obtained from the pressure sensor at time instance k; μ0 is the mean of the pressure signal; Vm is an a priori chosen minimum jump magnitude; and λ is the threshold. Vm and λ are set to 6σ and 3σT, respectively [15], where σ is the standard deviation of the pressure signal and σT is the standard deviation of T under the normal (no burst) situation. These parameters differ for each node (sensor), as the pressure is different at each node. The time tc is considered to mark a burst event when Mn − Tn becomes greater than the threshold value λ.


2.4 Burst Localization

When the pressure signal is observed at two different sensor points, the negative pressure wave (a sudden change in the head) arrives at the different sensors at different points in time, depending on the distance of each sensor from the burst location. The time difference between these two peaks is utilized for calculating the distance of the burst from the sensor nodes:

Si = Σ_{j,k∈SB} | (tj − tk) − (τij − τik) |   (6)

where SB is the set of measurement points (sensors) that detected the burst transient; j and k are measuring points (sensors); tj − tk is the difference in arrival time of the NPW at nodes j and k; and τij and τik are the times taken by the NPW to travel from node i to nodes j and k, respectively. A smaller residual value Si indicates a higher probability that the burst occurred at node i; thus, the node with the minimum score is selected as the node nearest to the burst location. The Dijkstra algorithm [16] is a graph-based technique used for finding the distances between two points. The burst location is identified using the function defined in Eq. (6), taking the minimizing node Si as the burst candidate node.

Problem: The node having the minimum value of Si is considered as the burst node. The distance between two nodes is known as a link. If the distance between two nodes is too long, the accuracy of the burst location is affected; because of this, the earlier proposed algorithms suffer from a localization error of up to 30–50 m [13], and secondary devices such as acoustic sensors are needed to pinpoint the exact burst location. One way to reduce the localization error is to shorten the links: if each link is reduced to 10 m, the location error will be less than 10 m from the actual burst. This helps avoid the use of secondary devices for pinpointing the exact burst location.
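The scoring in Eq. (6) can be combined with Dijkstra shortest-path distances as in the sketch below; the dense-matrix handling of unlinked (∞) entries and the argument names are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

def burst_node(A, arrival, sensors, wave_speed):
    """Return the candidate node i minimizing S_i of Eq. (6).
    A: node matrix in metres (0 on the diagonal, np.inf if unlinked),
    arrival: {sensor_node: NPW arrival time}, wave_speed in m/s."""
    W = np.where(np.isfinite(A), A, 0.0)  # dense input: 0 means no edge
    dist = dijkstra(W, directed=False, indices=np.arange(len(A)))
    tau = dist / wave_speed               # NPW travel times, every node
    scores = [
        sum(abs((arrival[j] - arrival[k]) - (tau[i, j] - tau[i, k]))
            for j in sensors for k in sensors if j < k)
        for i in range(len(A))
    ]
    return int(np.argmin(scores))
```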

2.5 Node Matrix Analysis

To reduce this localization error, nodal matrix analysis is proposed. A node matrix 'A' is defined to describe the pipe network based on a graph data structure using the nodes. The node matrix A, describing the basic nodes of the network, is defined as

A(i, j) = 0, if i = j, for i, j ∈ [1, N]
A(i, j) = dij, if nodes i and j are linked   (7)
A(i, j) = ∞, otherwise

where N is the number of nodes; di,j is the distance between node i and node j; and di,j = dj,i. 'A' is an N × N symmetric matrix. If nodes i and j are not linked, A(i, j) is set to ∞. If A(i, j) > D, the link (i, j) is divided by inserting M additional nodes with step-size D. After this node division, the new node matrix 'A' has every element A(i, j) ≤ D (with D < 10).
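A sketch of this link subdivision follows; splitting each long link into equal segments of at most D metres is one straightforward reading of the step-size rule in Eq. (7).

```python
import numpy as np

def subdivide(A, D):
    """Insert virtual nodes so that every link in the node matrix is
    at most D metres long; returns the enlarged symmetric matrix."""
    n = len(A)
    links = [(i, j, A[i, j]) for i in range(n) for j in range(i + 1, n)
             if np.isfinite(A[i, j]) and A[i, j] > 0]
    total, edges = n, []
    for i, j, d in links:
        m = int(np.ceil(d / D))                      # segments needed
        chain = [i] + [total + s for s in range(m - 1)] + [j]
        total += m - 1                               # new virtual nodes
        edges += [(a, b, d / m) for a, b in zip(chain, chain[1:])]
    B = np.full((total, total), np.inf)
    np.fill_diagonal(B, 0.0)
    for a, b, w in edges:
        B[a, b] = B[b, a] = w
    return B
```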

3 Results

Experiment 1: The proposed algorithm is first applied to a single PVC water pipeline having a length of 50 m and a diameter of 40 mm, as shown in Fig. 2. The burst is created in the pipeline by sudden opening and closing of the valve at a particular time instance, with a discharge of 2–3 l/s. A single pressure sensor MPX10DP is used for measurement of the pipeline pressure at a sampling rate of 250 Hz (Fig. 3). The sensor is connected to an Arduino where the pressure data is collected, and analysis of the collected data is performed using MATLAB. Wavelet analysis with soft thresholding is performed for de-noising of the pressure signal; the actual and de-noised pressure signals are shown in Fig. 3. Multistage decomposition of the pressure signal is performed using the Haar wavelet, and the decomposition at different levels is shown in Fig. 4. Even after four-level decomposition many peaks are observed, so exact detection of the burst event is not possible and a fifth-level decomposition is performed. As the level of decomposition increases, the number of high-amplitude peaks reduces, yet multiple peaks remain even after five levels of decomposition. These peaks are present due to the arrival of NPWs at certain time intervals.

Fig. 2 Test bed pipeline

Fig. 3 a Pressure signal collected from the sensor, b pressure signal after de-noising

Fig. 4 Wavelet transforms of the pressure signal

For finding the exact time of the burst, further processing is required. CUSUM analysis is performed on the pressure signal using Eqs. (2)–(5), and the results obtained after CUSUM analysis can be seen in Fig. 5. The threshold (λ) is selected to be three times the standard deviation, i.e., 2.046 kPa. Multiple peaks are observed, which confirms irregularities, i.e., a burst detection event; but CUSUM suffers from slow response, which makes it difficult to exactly localize the burst event. The exact location of the burst event in the pressure signal is identified by performing a multiplicative operation on the results obtained from the CUSUM and wavelet analyses. It is observed that there is only a single peak present, and the time of occurrence of this peak localizes the burst event.


Fig. 5 a Result after CUSUM analysis, b burst detection result

Experiment 2: The proposed algorithm is applied to a PVC water pipeline network containing three pipelines arranged in a T format, as shown in Fig. 6a. The pipelines have a diameter of 40 mm. Two pressure sensors (MPX10DP) are placed in the pipeline for measuring the pressure signal; each sensor is connected to an Arduino UNO for data collection (Fig. 6c). The pressure data is collected and later processed in MATLAB 2015a. The burst is created in the pipeline by sudden opening and closing of the valve at a particular time instance, with a discharge of 2–3 l/s (Fig. 6d).

Fig. 6 a Pipeline network structure, b T-junction, c pressure sensor placed in pipeline, d burst is created by the sudden opening of the valve

Wavelet analysis with soft thresholding is performed for de-noising of the pressure signal. The actual and de-noised pressure signals measured at sensors 1 and 2 are shown in Figs. 7 and 8. Multistage decomposition (up to four levels) of the pressure signal is performed using the Haar wavelet; the wavelet decomposition of both pressure signals is shown in Fig. 9. Multiple peak values are observed after the fourth level of wavelet decomposition at certain time intervals, and the highest peak among these coefficients corresponds to the arrival of the NPW at the sensor node due to the burst event. For further analysis, the CUSUM algorithm is applied to both pressure signals, and the results can be seen in Figs. 10a and 11a. The exact location of the burst event in the pressure signal is identified by performing a product operation on the results obtained from the CUSUM and wavelet analyses. It can be observed from Figs. 10b and 11b that only a single peak is present, and the time of this peak gives the exact time of occurrence of the burst event (Fig. 12).

For identifying the burst location, the difference in time of arrival (ToA) of the NPW at the two sensor nodes is calculated by subtracting the arrival times, which comes out to be 1.06 s as shown in Fig. 12. The system contains four nodes a, b, c and d, and the total length of the pipeline is 98 m. The node matrix A is created, and the network is further divided so that each link ≤ D. D is calculated by dividing the total length of the pipeline by 10, which comes out to be 9.8 m (98/10). The a−b link is greater than 9.8 m; hence, the link is further divided by introducing new nodes between a and b, i.e., ab1, ab2, …, ab4, such that the distance between adjacent nodes becomes less than or equal to 9.8 m (Eq. 7). Similarly, between nodes b−d and b−c, the nodes bd1, bd2 and bc1, bc2 are inserted. By dividing the links, the node matrix becomes of size 12 × 12, as shown in Table 1. Every node is considered a candidate burst location: all 12 nodes are taken one at a time, and their distances from the two sensors S1 and S2 are calculated. The candidate burst node is found using Eq. (6), with the Dijkstra algorithm applied to find the shortest paths [16]. The speed of the pressure waves inside the pipeline is 30.92 m/s. These distances are used to calculate the difference in ToA of the NPW at the two sensors from each selected node (using Eq. 6), and the node having a ToA difference equal or closest to the observed one is considered the probable burst location. The calculated time difference between the two NPWs at sensors S1 and S2 for node b is 1.033 s, which is close to the observed time difference of 1.06 s.


Fig. 7 a Pressure signal collected from sensor 1, b pressure signal after de-noising

Fig. 8 a Pressure signal collected from sensor 2, b pressure signal after de-noising

Fig. 9 a, b Wavelet transforms of the pressure signal (sensor 1 and sensor 2)


Fig. 10 a Result after CUSUM analysis, b burst detection result for sensor 1

Fig. 11 a Result after CUSUM analysis, b burst detection result for sensor 2

Thus, node b is considered as the location of the burst, which is 3 m from the actual burst location. The proposed methodology can therefore locate the burst event quite efficiently when compared to other referred techniques [12, 13], which suffer from localization errors of up to 30–40 m.

Fig. 12 Difference in time of arrival (ToA) of NPW at two sensor nodes

Table 1 Node matrix 'A' [nodes and their respective distances (m)]

Nodes  a    ab1  ab2  ab3  ab4  b    bd1  bd2  d    bc1  bc2  c
a      0    9.8  0    0    0    0    0    0    0    0    0    0
ab1    9.8  0    9.8  0    0    0    0    0    0    0    0    0
ab2    0    9.8  0    9.8  0    0    0    0    0    0    0    0
ab3    0    0    9.8  0    9.8  0    0    0    0    0    0    0
ab4    0    0    0    9.8  0    7.8  0    0    0    9.8  0    0
b      0    0    0    0    7.8  0    9.8  0    0    0    0    0
bd1    0    0    0    0    0    9.8  0    9.8  0    0    0    0
bd2    0    0    0    0    0    0    9.8  0    9.4  0    0    0
d      0    0    0    0    0    0    0    9.4  0    0    9.8  0
bc1    0    0    0    0    9.8  0    0    0    0    0    0    0
bc2    0    0    0    0    0    0    0    0    9.8  0    0    2.4
c      0    0    0    0    0    0    0    0    0    0    2.4  0

4 Conclusion

A burst event in a pipeline is one of the main reasons for water losses in a water network. During a burst event, a negative pressure wave is generated and travels across the junctions of the water distribution system, causing a sudden change in the nodal pressure of the pipeline. The present study performs transient analysis of the pressure

signal for the detection of burst events in a smart water network. Pressure data is collected using a pressure sensor (MPX10DP). The study shows that the wavelet transform alone is not sufficient to detect a burst event confidently, as it suffers from false detections, whereas CUSUM analysis can detect the discontinuity in the signal but suffers from a slow response, making exact detection of the burst location difficult. In the presented study, CUSUM and wavelet analyses are therefore utilized together for finding the irregularities in the pressure signals corresponding to a burst event, followed by its localization. Initially, the proposed method is applied to a single pipeline of length 50 m and diameter 40 mm; the system can detect a burst event of 2–3 l/s in the tested pipeline in real time. The proposed algorithm is then tested on the pipeline network having the T-shaped structure (40 mm diameter) containing two pressure sensors, where it detects the burst event quite efficiently. It is also important to localize the burst event for faster repair of the pipeline. Node matrix analysis is proposed for efficient localization of burst events in the pipeline: the introduction of virtual nodes reduces the length of the links, restricting the localization error to 10 m. The system can localize the burst event within a range of 3 m, which is comparatively better than the referred studies.

References

1. Tang X, Liu Y, Zheng L, Ma C, Wang H (2009) Leak detection of water pipeline using wavelet transform method. In: International conference on environmental science and information application technology, ESIAT 2009, vol 2. IEEE, Greece, pp 217–220
2. Farley M, Trow S (2007) Losses in water distribution networks; a practitioner's guide to assessment, monitoring, and control, 2nd edn. International Water Association Publishing, UK
3. Gupta A, Kulat KD (2018) A selective literature review on leak management techniques for water distribution system. Water Resour Manage 32(10):3247–3269
4. Pilcher R, Hamilton S, Chapman H, Field D, Ristovski B, Stapely S (2007) Leak location and repairs, guideline notes by IWA, version 1. International Water Association, UK
5. Puust R, Kapelan Z, Savic DA, Koppel T (2010) A review of methods for leakage management in pipe networks. Urban Water J 7(1):25–45
6. Wang J, Ren L, Jiang T, Jia Z, Wang GX (2019) A novel gas pipeline burst detection and localization method based on the FBG caliber-based sensor array. Measurement 151:107226
7. Haniffa M, Hashim FM (2011) Recent developments in in-line inspection tools (ILI) for deep water pipeline applications. In: National postgraduate conference (NPC). IEEE, USA, pp 1–6
8. Liggett JA, Chen LC (1994) Inverse transient analysis in pipe networks. ASCE J Hydraul Eng 120(8):934–955
9. Haghighi H, Covas C (2012) Modified inverse transient analysis for leak detection of pressurized pipes. BHR Group Press Surges 2(1):87–102
10. Badillo-Olvera A, Pérez-González A, Begovich O, Ruíz-León J (2019) Burst detection and localization in water pipelines based on an extended differential evolution algorithm. J Hydroinform 21(4):593–606
11. Zan TTT, Lim HB, Wong KJ, Whittle AJ, Lee BS (2014) Event detection and localization in urban water distribution network. IEEE Sens J 14(12):4134–4142
12. Lee SJ, Lee G, Suh JC, Lee JM (2015) Online burst detection and location of water distribution systems and its practical applications. J Water Resour Plann Manage 142(1):04015033
13. Srirangarajan S, Allen M, Preis A, Iqbal M, Lim B, Whittle J (2013) Wavelet-based burst event detection and localization in water distribution systems. J Signal Process Syst 72(1):1–16
14. Basseville M (1988) Detecting changes in signals and systems—a survey. Automatica 24(3):309–326
15. Choe Y (1995) Application and implementation of scale-space filtering techniques for qualitative interpretation of real-time process data. Ph.D. thesis, Seoul National University, Seoul, Korea
16. Dijkstra EW (1959) A note on two problems in connexion with graphs. Numer Math 1(1):269–271

Chapter 3

Improving Machine Translation Using Parts-Of-Speech Tags and Dependency Parsing
Apoorva Jha and Philemon Daniel

1 Introduction

Machine Translation is an important Natural Language Processing task. Machine Translation means converting one language, usually called the source language, to another language, the target language. It is a difficult task, as the system should understand both the semantics and syntax of the source and target languages as well as find the relation between the two languages. The earliest methods solved this problem statistically, as in Statistical Machine Translation [1]. With the boom in Deep Learning, many attempts have been made to solve Machine Translation, and Deep Learning based models such as the sequence-to-sequence model [2] and sequence-to-sequence with attention [3] have been proposed; these models use LSTM and attention units for modeling the text data. To improve upon these models, the Transformer model [4] was introduced, which is a purely attention-based model with an encoder-decoder architecture. The model has achieved significant results for many language translations.

We trained a transformer-based model with a cleaned English-to-Hindi parallel corpus of 7 million sentence pairs. In spite of the large data and long training time, we could not achieve good qualitative results. Although the translation model captured the meaning of the words correctly, the structure of the sentences was not good, which made the translated sentences hard to comprehend. The semantic features of the language were learnt by the model, but not the syntactic features, because Hindi is a morphologically rich language. To overcome this, we utilize two syntactic features, Parts-of-Speech (POS) tags and Dependency Parsed (DP) tags. In Hindi, the basic verb depends on the noun, its gender, number, etc. This results in complexity which was not captured well by the transformer model; Hindi is a morphologically rich language with free word order [5]. Traditionally, the Transformer model is expected to learn the syntactic as well as the semantic features, but as this was not the case for English-Hindi translation, we fed the syntactic features along with the sentences to the model. Recent models like BERT [6] have shown good results but require very high computation power, making them infeasible for our work, so we use the Transformer model as our base model.

The rest of the paper is organized as follows. In Sect. 2, we discuss the proposed model and its elements. In Sect. 3, we discuss the implementation of the proposed model. In Sect. 4, the results with their advantages and limitations are discussed. In Sect. 5, we conclude the paper and propose possible future work.

2 Proposed Machine Translation System

To overcome the issues described above for morphologically rich languages, we propose a new machine translation system for English-Hindi translation. In this system, along with the raw sentences, we use Parts-of-Speech tags and Dependency Parsed tags, which are syntactic features.

2.1 Parts-Of-Speech Tags

Parts-of-speech tags (pronoun, verb, adjective, etc.) give information regarding the role of each word and the relations between the words of a sentence. In the Hindi example (the Devanagari text is shown in Fig. 1), the main verb is accompanied by an auxiliary verb which supports it, whereas in the corresponding English translated sentence, "rasgulla is sweet", only one verb, "is", is used. Thus, using the POS tags helps the machine translation system learn the syntactic features better. In Fig. 1, we can see how the tagger system outputs the POS tags (CC, NN, VB, SYM, etc.) for a given input sentence; for example, the word meaning "time" is correctly identified as a noun (NN).

Fig. 1 Parts-of-speech tags of input sentences

2.2 Dependency Parsed Tags

In Hindi, every word has an associated gender. Adding to this complexity, there are three second-person pronouns chosen according to the respect given to a person: one for elders, one for people of the same age group and one for youngsters. These, along with plurality (singular/plural) and tense, are used to form the verb, which is quite complex compared to English verbs. Additionally, the word order in English is fixed, with the basic order being SVO, where S is the subject, V is the verb and O is the object. In comparison, Hindi is a free word order language, which makes translation a more complex task. To counter this, we use Dependency Parsed tags to help understand the sentence structure.

Syntactic Parsing or Dependency Parsing is the task of recognizing a sentence and assigning a syntactic structure to it. The most widely used syntactic structure is the parse tree, which can be generated using parsing algorithms. Dependency parsers give the relation between the words of a sentence and the root word, which is the verb; these relations are defined as subject, object, etc. Let us consider an English sentence, "Ram plays football." Here "Ram" is the subject (S), "plays" is the verb (V) and "football" is the object (O). Since there is only one verb, "plays" is considered to be the root of the parsed tree. With respect to the root word "plays", "Ram", the subject, answers the question of who is playing, and "football", the object, answers the question of what is being played. It can be seen how the words are related to the root word through the dependencies. In Fig. 2, the parsed tree structure of an input Hindi sentence is formed; the root word is the main verb (shown in Devanagari in the figure), and the dependency tag of each other word is assigned based on its relation to this root.


Fig. 2 Dependency parsed tree

2.3 Mechanism of Machine Translation System Proposed

Figure 3 shows the flow chart of the proposed Machine Translation system. Our system follows the supervised approach and is based on the Transformer model; because of this, we require a parallel corpus as shown in Table 1. In our work we consider English-Hindi translation, so English is the source language and Hindi is the target language. Parts-of-Speech tags and dependencies are extracted for both the source and target language using the parser. Once the features are extracted, we append these tags to the sentences, which becomes our new parallel corpus. The transformer model is trained on this modified parallel corpus, and once the training is completed, we pass the input sentences along with their tags to get the machine translated output. The quality of the machine translation is evaluated using BLEU [7], the most commonly used evaluation metric for Machine Translation. It is based on n-gram matching between the output translated sentence and a reference sentence, the real translation for the given source sentence, usually produced by human translators.


Fig. 3 Flow chart of proposed machine translation system

Table 1 Example of parallel corpus

English sentences                    Hindi sentences
Ramesh Saatpute examined the fort    (Devanagari, not reproduced)
Fan is on                            (Devanagari, not reproduced)

3 Experiment

3.1 Tagger System

We use the SpaCy library [8], which provides various NLP-related tools. This library already contains an English tagger system for POS tags and DP tags. For Hindi, no such system is readily available, so we train a SpaCy language model using the Hindi Universal Dependency Treebank dataset [9, 10]. The model is trained for 30 epochs, and we achieve around 95% accuracy for the tagging system. The tags generated are as shown in Fig. 1.
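The tag extraction and input construction can be sketched with spaCy as below; "en_core_web_sm" is spaCy's standard small English model (an assumption here), while the Hindi side would use the custom model trained above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")       # English POS + dependency tagger
doc = nlp("Ramesh Saatpute examined the fort.")
pos = " ".join(tok.tag_ for tok in doc)  # e.g. NNP NNP VBD DT NN .
dep = " ".join(tok.dep_ for tok in doc)  # e.g. compound nsubj ROOT ...
augmented = f"{doc.text} {pos} {dep}"    # final input line as in Table 2
```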

3.2 Dataset

For training the translation system, we use a parallel corpus. Generally, a transformer model requires a large dataset, but for our experimentation we have chosen a small dataset of 31k sentences with a total vocabulary of around 10k words. Both the English and Hindi sentences are passed through their respective tagger systems, and the outputs are then concatenated as shown in Table 2.

3.3 Training

A base transformer model is trained both with the plain parallel corpus shown in Table 1 and with the modified parallel corpus shown in Table 2. It is trained for 8 lakh (800,000) epochs using 2 GPUs.

3.4 Evaluation

Once the training is done, we evaluate our system on 300 sentences. We pass the English sentences along with their tags, as shown in Table 2, into the Transformer model. The output gives the translated sentence as well as the tags; we strip the tags and compare the output translation with the reference sentence. For evaluation, we use the BLEU metric.
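A sketch of the scoring step with the sacrebleu package (one common implementation of the BLEU metric [7]) is shown below; the placeholder lists stand for the 300 stripped hypotheses and their references.

```python
import sacrebleu

hypotheses = ["..."]    # model outputs with POS/DP tags stripped
references = [["..."]]  # one reference stream (list parallel to hypotheses)
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```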

4 Results

On data similar to what the model was trained on, we obtained a low BLEU score with just the parallel corpus, but with the POS and dependency tags the BLEU score increased by 3.78. Table 3 shows the translated output obtained both for the model trained only on sentences and for the model trained on sentences together with tags.

Ramesh Saatpute examined the fort

Ramesh Saatpute examined the fort. NNP NNP VBD DT NN. compound nsubj ROOT det dobjpunct

Final input Hindi sentence after concatenating

NN PSP NN NNPC NNP PSP VM SYM nmod case nmod compound nsubj case ROOT punct

NN PSP NN NNPC NNP PSP VM SYM

nmod case nmod compound nsubj case ROOT punct

POS tags of Hindi sentence

DP tags of Hindi sentence

Hindi sentences

Compound nsubj ROOT det dobjpunct

DP tags of English sentence

Final input English sentence after concatenating

POS tags of English sentence NNP NNP VBD DT NN

English sentence

Table 2 inputs for the translation system

3 Improving Machine Translation Using Parts-Of-Speech … 33

English input sentences

Out of this, approximately 70% of the amount (Rs. 15.12 lakhs) will be paid by the MSRTC

It is raining

Because of this, the speeding taxi collided with a eucalyptus tree which was on the edge of the highway

His friendly behavior suddenly turned into anger

This information was given by the school principal Nafe Singh

S. No.

1

2

3

4

5

Hindi reference sentence

Hindi output obtained from first model (sentences)

Table 3 Results containing input sentence, reference sentence, and translated sentence using the two models Hindi output obtained from second model (sentences + POS/DP)

34 A. Jha and P. Daniel

3 Improving Machine Translation Using Parts-Of-Speech …

35

4.1 Advantages • Since, the syntactic features were added, the model performs better and has shown significant improvement in the results. • The model is automatically able to learn character-level translation as can be seen in translation of the abbreviation MSRTC. • It is also learnt that with this technique, translation training requires smaller amount of parallel corpus, and thus the model can be effectively utilized for low resource languages and for those that have rich morphology.

4.2 Limitations • Since the tags and dependencies are not concatenated with the sentence structure, the length of the sentence has increased increasing the training time. • Our approach requires the understanding of parts of speech tagging and dependency parsing, which is an expertise difficult to find for low resource languages. Most of the low resource languages are dialects lacking a thorough grammatical study. So, a tedious step of creating grammatical tokens for the language which follow the same convention used by the major languages with the help of linguistic experts is required. • It can be seen in Table 3, in sentence 2 although the model has translated the output correctly it adds more words to the translation. This because the model has been trained with data that contains longer sentence. This means the type of data is an important factor for the system. A system trained with data made from stories, news will not perform will with data that is used for day-to-day conversations.

5 Conclusion and Future Work Machine Translation is one of the most widely used tasks. Many neural machine Translation models have come up but these models do not perform very well with morphologically rich languages like Hindi. In this paper, we propose a new machine translation system that performs well with morphologically rich languages like Hindi. We use syntactic features like parts-of-speech tags and dependency parsing tags along with the model. These tags help the model to learn the syntactic features of the languages better. With the help of these tags, we have seen an increase in BLEU score. Moreover, the new system required less amount of data to train which showed its utility for low resource languages, i.e. languages with small amount of data available. In future work, we plan to implement this model for Kangri, which is a dialect from Himachal Pradesh. We will also attempt to train the model for conversation data type which will help integrate the translation system with Question and Answering systems, chatbots.

36

A. Jha and P. Daniel

Acknowledgements We gratefully acknowledge the support of NVIDIA Corporation with the NVIDIA GPU Grant of Titan X Pascal GPU used for this research.

References 1. Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C, Zens R, Dyer C, Bojar O, Constantin A, Herbst E (2007) Moses: open source toolkit for statistical machine translation 2. Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. In: Advances in neural information processing systems 27:3104–3112 3. Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. CoRR. abs:1409.0473 4. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30:5998–6008 5. Ramanathan A, Choudhary H, Ghosh A, Bhattacharyya P (2009) Case markers and morphology: addressing the crux of the fluency problem in English-Hindi SMT. In: Proceedings of the joint conference of the 47th annual meeting of the ACL and the 4th international joint conference on natural language processing of the AFNLP. Association for Computational Linguistics, pp 800–808 6. Devlin J, Chang MW, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. CoRR. abs:1810.04805 7. Papineni K, Roukos S, Ward T, Zhu WJ (2002) BLEU: a method for automatic evaluation of machine translation. https://doi.org/10.3115/1073083.1073135 8. Honnibal M, Johnson M (2015) An improved non-monotonic transition system for dependency parsing. In: Proceedings of the 2015 conference on empirical methods in natural language processing. Association for Computational Linguistics, Portugal, pp 1373–1378 9. Bhat RA, Bhatt R, Farudi A, Klassen P, Narasimhan B, Palmer M, Rambow O, Sharma DM, Vaidya A, Vishnu SR, Xia F (2017) The Hindi/Urdu treebank project. In: Ide N, Pustejovsky J (eds) Handbook of Linguistic. Springer Press 10. Palmer M, Bhatt R, Narasimhan B, Rambow O, Sharma DM, Xia F (2009) Hindi syntax: annotating dependency, lexical predicate-argument structure, and phrase structure. In: 7th international conference on natural language processing, ICON, pp 14–17

Chapter 4

Optimal Strategy for Obtaining Excellent Energy Storage Density in Polymer Nanocomposite Materials Daljeet Kaur, Tripti Sharma, and Charu Madhu

1 Introduction For the last 30 years, the demand for storage of electricity has increased to a great extent with the increased generation of electrical energy via renewable resources like wind and solar power. This continuously increasing demand for storage of electrical energy can be fulfilled by developing high-performance energy storage devices [1]. Among all currently available electrical storage devices, dielectric capacitors have the intrinsic advantages of high power density and quick charging and discharging capability, due to which they can provide a very large amount of energy almost instantaneously, and they have a very long life that lasts millions of cycles. All these features make dielectric capacitors a very attractive option for large-scale energy storage [2]. Dielectric capacitors are used in various audio, video, telecommunication and industrial power electronic applications. Specifically, dielectric or electrostatic capacitors are used in hybrid electric vehicles, filtering, snubbers, high-power microwaves, and medical devices like defibrillators, as well as x-ray equipment, surgical lasers, electric guns and electric ships [3–13]. But they have low energy density as compared to batteries [14].
Energy Density (Ed): the amount of energy stored in a given system per unit volume. It is given by Ed = ½ ε0 εr Eb², where Ed is the energy density, ε0 is the permittivity of free space, εr is the relative permittivity of the material and Eb is the dielectric breakdown strength of the material [15].
D. Kaur (B) · C. Madhu Department of ECE, UIET, Panjab University, Chandigarh, India e-mail: [email protected] T. Sharma Department of ECE, Chandigarh University, Gharuan, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_4


Dielectric Constant (k): the dielectric constant or relative permittivity is defined as the ratio of the absolute permittivity of the material ε to the permittivity of free space ε0, and is represented by εr = ε/ε0.
Dielectric Breakdown Strength (Eb): the highest value of electric field that a material can withstand without losing its original properties. A dielectric material experiences breakdown when its insulating behaviour changes to conducting after the application of a high electric field.
It can be inferred that two basic factors are necessary to enhance the energy density Ed of a material: the dielectric constant k and the dielectric breakdown strength Eb [15–17].
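The energy-density relation above is easy to sanity-check numerically. The sketch below plugs in illustrative, order-of-magnitude values (k ≈ 2000 and Eb ≈ 150 kV/cm for BaTiO3, as quoted later in this chapter, plus a generic polymer-like case); the material parameters are examples for the arithmetic, not measurements from any one study.

```python
# Back-of-the-envelope check of Ed = 1/2 * eps0 * eps_r * Eb^2.
# Material parameters are illustrative values from the surrounding text,
# not the results of a specific experiment.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def energy_density_j_per_cm3(eps_r: float, eb_kv_per_cm: float) -> float:
    eb_v_per_m = eb_kv_per_cm * 1e3 / 1e-2    # convert kV/cm to V/m
    ed_j_per_m3 = 0.5 * EPS0 * eps_r * eb_v_per_m ** 2
    return ed_j_per_m3 / 1e6                  # convert J/m^3 to J/cm^3

# High-k ceramic (BaTiO3-like): very large k, but low breakdown strength.
print(energy_density_j_per_cm3(2000, 150))    # ~2.0 J/cm^3

# Polymer-like case: modest k, but Ed scales with the *square* of Eb,
# so the much higher breakdown strength wins.
print(energy_density_j_per_cm3(10, 4000))     # ~7.1 J/cm^3
```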

1.1 Ceramics Due to their high hardness, high melting point, good thermal insulation, high electrical resistance and high dielectric constant, dielectric ceramics are used in various applications like cutting tools, thermal and electrical insulators, dielectric capacitors and antennas [18–20]. Initially, mica, magnesium titanate (MgTiO3), calcium titanate (CaTiO3) and titanium dioxide (TiO2) were used as dielectric materials in capacitors. With time, the need to discover high-k materials arose, so various lead-based ceramics like lead magnesium niobate (PMN), lead zirconate niobate (PZN) and lead zirconate titanate (PZT) were used due to their excellent piezoelectric and optoelectronic properties, which could give an energy density as high as 15 J/cm3. But lead is a neurotoxic material that causes neurological disorders like brain damage and behavioural problems; hence, it is not safe for humans or the environment [14]. Barium titanate (BaTiO3) is a ceramic material that has been explored by many researchers due to its high dielectric constant and its ferroelectricity, and it has low toxicity as compared to lead. It has been a material of keen interest for researchers in energy storage applications: BaTiO3 has a k value of 2000 at room temperature and an Eb of 150 kV/cm [21]. However, ceramic materials have the disadvantages of low dielectric breakdown strength, brittleness and poor processability, due to which they cannot be used as dielectric materials for capacitors at high electric fields. Hence, they are not able to deliver a large energy density [22].

1.2 Polymers Polymers are materials with low cost, flexibility, light weight, easy processing, a wide range of operating frequencies and high Eb, but they have a low dielectric constant [23]. To achieve both high k and high Eb, there was a need to make a hybrid material by combining the properties of polymers and ceramics. A polymer is used as the main matrix and high-εr nano ceramic materials are used as fillers, and this gives rise


to a polymer nanocomposite, which can provide high k, high Eb, low dielectric loss and, ultimately, high Ed [24]. Practically, there are various factors, such as the size, shape and type of nano filler, the dispersion and loading of the nano fillers, and the interface between the nanoparticles and the polymer matrix, that affect the values of k and Eb of the composite material [25].

2 Factors Affecting the Value of k, Eb of the Nano Composite 2.1 Size and Shape of Nano Fillers To achieve high breakdown strength, the aspect ratio of the nano fillers should be high. It has been observed that 1D nano fillers provide more dielectric strength than 2D fillers [17]. Yang Shen et al. prepared a polymer nanocomposite with polyvinylidene fluoride (PVDF) as the polymer matrix and TiO2-coated BaTiO3 nanofibres as fillers. They concluded that Eb and Ed can be improved if nano fillers with a larger aspect ratio are used [17]. Y. P. Mao et al. discussed the influence of the nano filler size on the dielectric properties of BaTiO3 in a PVDF matrix. For 60% loading of BaTiO3 with 100 nm particles, the values of k and dielectric loss were 95 and 0.05, respectively, at 1 kHz; the value of k reduced to 70 at 1 kHz for 500 nm particles. So, with a larger nano filler size, the value of k decreases [26].

2.2 Loading of Nano Fillers The amount of filler loading in the polymer matrix also affects the Eb and dielectric constant of the composite. Although a larger fraction of ceramic fillers increases the dielectric constant, after a certain level the nano fillers start to agglomerate, due to which the Eb of the polymer nanocomposite decreases [14].

2.3 Dispersion of Nano Fillers Homogeneous dispersion of the nano fillers in the polymer matrix is required to ensure that there is no agglomeration of nanoparticles, which would provide tunnelling paths and hence reduce the dielectric breakdown strength of the polymer nanocomposite [16]. Weimin Xia et al. coated barium titanate with hyperbranched polymer (HBP) and polydopamine (PDA) and dispersed it into the PVDF matrix. It was concluded from the Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM) and


Fourier Transform Infrared Spectroscopy (FTIR) results that good interfacial compatibility of the composites favours the dispersion of the coated barium titanate into the PVDF matrix, and a high Ed value of 7 J/cm3 was obtained [27].

2.4 Interfacial Relationship Between Nanoparticles and Polymer If there is a huge difference between the k values of the nano fillers and the polymer matrix, an abrupt change occurs at the interface, which creates a local electric field in the polymer matrix around the nanoparticles. Due to this, the total Eb of the nanocomposite decreases below the intrinsic value of the pure polymer. This problem can be solved by removing the interfacial flaws through surface modification of the nano filler [15–17]. Y. P. Mao et al. discussed that, in a two-phase composite, Eb decreases from that of its pure counterpart due to interfacial flaws [26]. Zhou et al. modified BaTiO3 with H2O2 and used PVDF as the polymer matrix. At 30% loading, k was observed to be 25 in the range 10 Hz–1 MHz, whereas for unmodified BaTiO3 k was 28. With surface modification k reduced, but the most attractive outcomes were the reduced dielectric loss and the better interfacial relationship of the nano fillers with the polymer matrix [28].

3 Core Shell Structure Rahimabady et al. discussed core-shell structures of BaTiO3 coated with TiO2 and embedded into a PVDF matrix. The value of k for the coated material was 110, versus 37 for uncoated BaTiO3 at 1 kHz. The Ed was observed to be 12.2 J/cm3 at 3500 kV/cm, whereas for pure PVDF it was 5.5 J/cm3 [29]. Liyuan Xie et al. used a core double-shell method to prepare BaTiO3-based high-performance polymer nanocomposites, where hyperbranched aromatic polyamide (HBP) was used as the first shell and poly(methyl methacrylate) (PMMA) was developed over it as the second shell. This core-shell nanocomposite provided a higher dielectric constant and lower dielectric loss compared to the conventional solution-blended nanocomposite [30]. Yanyan Fan et al. prepared a core-shell structure of BT@PDA@PLA and observed that this structure has a higher k of 8.74 and an energy density of 1.52 J/cm3, compared to the uncoated BaTiO3 nanofiller with a dielectric constant of 7.52 and an energy density of 0.85 J/cm3 [31]. Liyuan Xie also prepared a core-shell structured HBP-grafted BaTiO3 as a filler in a PVDF matrix. A high dielectric constant of 1485.5 at 1 kHz was observed at 40% loading, compared to 206.3 for uncoated barium titanate directly used as a filler in the PVDF matrix at the same filler concentration [32].


4 Multilayer Structure The maximum energy density achieved by a single-layer structure is approximately 12 J/cm3. Further enhancement of the energy density is required, and it can be achieved by a multilayer structure rather than a single layer. Mohsin Ali Marwat et al. proposed a tri-layer structure, rather than a single layer, as the dielectric material for enhancing the discharge energy density of a polymer nanocomposite. The authors proposed the top and bottom layers to be a nanocomposite with 1% loading of BaTiO3 in a PMMA polymer matrix and the centre layer to have 9% loading of BaTiO3 in the PMMA matrix. This tri-layer structure provided a higher Ed of 6.08 J/cm3, compared to 1.6 J/cm3 for the single-layer structure [33]. Yang Shen et al. prepared a sandwiched structure using the hot pressing method. The upper and lower layers are made up of TiO2-coated graphene oxide nanoparticles dispersed in a PVDF polymer matrix, and the middle layer is made up of Ba0.6Sr0.4TiO3 nanofibres dispersed in the PVDF matrix. A very high Ed of 14.6 J/cm3 is achieved through this topological structure of nanocomposites. The idea behind this structure is to have a high-k layer sandwiched between high-breakdown-strength layers [34]. Jie Chen et al. prepared a polymer blend with PVDF as the top and bottom layers and PMMA as the centre layer of a tri-layer structure using the hot pressing method. The aim of this structure was to achieve a high Ed together with a high charge-discharge efficiency, and a high energy density of 20 J/cm3 was achieved [35].
Multilayer structures have thus been proposed by many researchers to enhance the Ed of polymer nanocomposite materials. Many researchers have prepared various polymer nanocomposite materials and have studied their dielectric properties, like k, dielectric loss, Eb and Ed, which are listed in Table 1. From the table, it can be inferred that there is an increase in the energy density of the composite with surface modification and core-shell structures, and further enhancement is achieved with multilayer structures of various composite materials.
Although polymer nanocomposites have shown very good and promising dielectric properties and enhanced dielectric strength, their biggest disadvantage is the toxic nature of petroleum-based polymers, which contaminate the environment when they are disposed of [54]. Electronic products are therefore required to be made of materials which are biocompatible and biodegradable, safe for living organisms as well as humans, thus promoting green technology. Various researchers have made use of biodegradable materials to develop new biodegradable nanocomposites so that useful electronic products can be developed using the advantages of these materials [55]. There are various such materials, like poly(3-hydroxybutyrate) (PHB), polylactic acid (PLA), polycaprolactone (PCL), poly(butylene succinate) (PBS), poly(butylene adipate-co-terephthalate) (PBAT), poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), poly(lactide-co-glycolide) (PLGA), chitosan, cellulose, gluten, etc. These materials have been used for different electronic applications like light-emitting diodes (LED), supercapacitor electrodes and printed circuit boards (PCB) [54–56]. Hence, instead of petroleum-based polymers, biodegradable polymers can be explored to develop a biodegradable nanocomposite for energy storage applications.

Table 1 List of the values of k, dielectric loss, Eb and Ed of various polymer nanocomposites (– indicates a value not reported)

S. No. | Author | Composite material | Loading (%) | Dielectric constant (at 1 kHz) | Dielectric loss | Dielectric breakdown strength (kV/cm) | Energy density (J/cm3) | References
1 | Chuntian Chen | K0.5Na0.5NbO3–SrTiO3 in PVDF matrix | – | 15 | 0.09 | 2740 | 1 | [36]
2 | Yu Song | Ba0.6Sr0.4TiO3 modified with PDOPA in PVDF matrix | – | 11 | 0.02 | 4000 | 3 | [37]
3 | Qingchao Jia | Molybdenum disulfide (MoS2) nanosheets in PVDF matrix | 0.4 | 20 | – | 2000 | 4.1 | [38]
4 | Lei Gao | BaTiO3 modified with H2O2 and DN-101 in PVDF matrix | – | 20 | 0.05 | 2600 | 4.31 | [39]
5 | Ke Yang | PHFDA-modified BaTiO3 nanoparticles | – | 43 | 0.018 | 1600 | 4.6 | [40]
6 | Ke Yu | BaTiO3 ceramic filler modified with tetrafluorophthalic acid in PVDF polymer matrix | 10 | 13 | 0.02 | 2450 | 5.1 | [41]
7 | Ke Yu, Yujuan Niu | BaTiO3 ceramic filler modified with polyvinylpyrrolidone (PVP) in PVDF polymer matrix | 10 | 12 | 0.04 | 3400 | 7.5 | [42]
8 | Ke Yu | BaTiO3 modified with polyacrylate elastomers (AR71) in PVDF matrix | 2 | 13 | 0.05 | – | 8.8 | [43]
9 | Ke Yu | Core-shell structured BaTiO3/SiO2 nanoparticles in PVDF matrix | – | 12 | 0.04 | 3400 | 6.28 | [44]
10 | Penghao Hu | Core-shell structured TiO2@BaTiO3 nanoparticles in PVDF matrix | 2.5 | 19.6 | – | 3800 | 8.7 | [45]
11 | Mojtaba Rahimabady | Core-shell BaTiO3 coated with TiO2 in PVDF matrix | 30 | 50 | 0.03 | 3400 | 12.2 | [46]
12 | Haixiong Tang | TiO2 nanowires functionalized with APS in PVDF matrix | 7.5 | 16 | 0.08 | 4500 | 12.4 | [47]
13 | Mohsin Ali Marwat | Sandwich-structured barium titanate/poly(ether imide) (BT/PEI) nanocomposites | – | – | – | 2000 | 5.7 | [48]
14 | Yue Zhang | Sandwiched structure of [P(VDF-TrFE-CFE)] as centre layer and PVDF as outer layers | – | – | – | 4080 | 8.7 | [49]
15 | Y. N. Hao | BaTiO3/PVDF grafted multilayer | – | 23 | 0.03 | 4530 | 9.7 | [50]
16 | Bing Xie | Heterostructure of polyimide (PI) as centre layer and BaTiO3 nanoparticles dispersed in P(VDF-CTFE) as outer layers | – | – | – | 3700 | 14.2 | [51]
17 | Yingke Zhu | Sandwiched structure of PVDF as outer layers with boron nitride nanosheets as interlayer | – | – | – | 6120 | 14.3 | [52]
18 | Jie Chen | Trilayer of PVDF, PMMA and PVDF polymers | – | 12 | – | 4400 | 20 | [35]
19 | Jianyong Jiang | Multilayer-structured nanocomposites with 16 layers of P(VDF-HFP) matrix with BaTiO3 as nanofiller | – | – | – | 7820 | 30.15 | [53]



5 Conclusion For achieving a high Ed, both k and Eb should be high, which cannot be achieved by a single material. So there was a need to make a hybrid material by combining the properties of polymers and ceramics. The polymer nanocomposite provided high k, high Eb, low dielectric loss and, ultimately, high Ed. There are various factors, like the size and shape of the nano filler material, the percentage of loading of the nanofiller, the homogeneous dispersion of the nanofiller, and the interface between the polymer matrix and the nanofiller, that largely affect the values of k, Eb and ultimately Ed of the material. Multilayer structures have also proved to deliver more energy density than single-layer structures, so the energy density can be further increased by using a multilayer structure of polymer nanocomposite materials. The polymer material used should not be petroleum based, as it contaminates the environment; keeping environmental safety in mind, biodegradable polymers that are best suited for energy storage applications can be identified.

References
1. Whittingham MS (2008) Materials challenges facing electrical energy storage. MRS Bull 33:411–421
2. Thakur VK, Gupta RK (2016) Recent progress on ferroelectric polymer-based nanocomposites for high energy density capacitors: synthesis, dielectric properties, and future aspects. Chem Rev 116:4260–4317. https://doi.org/10.1021/acs.chemrev.5b00495
3. Balasubramanian S (2009) Polymer composite and nanocomposite dielectric materials for pulse power energy storage. Materials 2:1697–1733
4. Dang Z (2014) Polymer nanocomposites with high permittivity. In: Nanocrystalline materials. Elsevier. https://doi.org/10.1016/B978-0-12-407796-6.00009-9
5. März M, Schletz A, Eckardt B, Egelkraut S, Rauh H (2010) Power electronics system integration for electric and hybrid vehicles. In: 6th international conference on integrated power electronics systems
6. Nalwa HS (1999) Capacitors: past, present, and future. In: Handbook of low and high dielectric constant materials and their applications. Academic Press, Burlington
7. Jow T (2015) Pulsed power capacitor development and outlook. In: Pulsed power conference, IEEE
8. Kimura T (2014) High-power-density inverter technology for hybrid and electric vehicle applications. Hitachi Rev 63:96–102
9. Ribeiro PF et al (2001) Energy storage systems for advanced power applications. Proc IEEE 89:1744–1756
10. Tolbert LM, Peng FZ (1999) Multilevel converters for large electric drives. IEEE Trans Ind Appl 35:36–44
11. Guo M, Hayakawa T, Kakimoto M, Goodson T (2011) Organic macromolecular high dielectric constant materials: synthesis, characterization, and applications. J Phys Chem B:13419–13432


12. Macdougall FW, Ennis JB, Cooper RA, Bates J, Seal K (2003) High energy density pulsed power capacitors. In: IEEE 14th international pulsed power conference
13. Mcnab IR, Lane WB (1997) Pulsed power for electric guns. IEEE Trans Magn 33
14. Yang L (2019) Perovskite lead-free dielectrics for energy storage applications. Prog Mater Sci 102:72–108
15. Riggs BC, Adireddy S, Rehm CH, Puli VS, Chrisey DB (2015) Polymer nanocomposites for energy storage applications. Mater Today Proc 2:3853–3863
16. Mahmood A, Naeem A (2017) High-k polymer nanocomposites for energy storage applications. In: Properties and applications of polymer dielectrics
17. Shen Y, Zhang X, Li M, Lin Y, Nan C (2017) Polymer nanocomposite dielectrics for electrical energy storage. In: Special topic: energy storage materials. https://doi.org/10.1093/nsr/nww041
18. Sulong TAT, Osman RAM, Idris MS (2016) Trends of microwave dielectric materials for antenna application. AIP Conf Proc 1756
19. Bansal G, Marwaha A, Singh A, Bala R, Marwaha SA (2019) Triband slotted bow-tie wideband THz antenna design using graphene for wireless applications. Optik (Stuttg) 185:1163–1171
20. Bansal G, Marwaha A, Singh A (2020) A graphene-based multiband antipodal Vivaldi nanoantenna for UWB applications. J Comput Electron. https://doi.org/10.1007/s10825-020-01460-2
21. Mahbub R, Fakhrul T, Islam F (2013) Enhanced dielectric properties of tantalum oxide doped barium titanate based ceramic materials. Proc Eng 56:760–765
22. Xie L, Huang X, Huang Y, Yang K, Jiang P (2013a) Core@double-shell structured BaTiO3–polymer nanocomposites with high dielectric constant and low dielectric loss for energy storage application. J Phys Chem. https://doi.org/10.1021/jp407340n
23. Mansour SA, Elsad RA, Izzularab MA (2016) Dielectric properties enhancement of PVC nanodielectrics based on synthesized ZnO nanoparticles. J Polym Res 23
24. Wang Q, Zhu L (2011) Polymer nanocomposites for electrical energy storage. J Polym Sci:1421–1429. https://doi.org/10.1002/polb.22337
25. Jia Q, Huang X, Wang G, Diao J, Jiang P (2016) MoS2 nanosheet superstructures based polymer composites for high-dielectric and electrical energy storage applications. J Phys Chem. https://doi.org/10.1021/acs.jpcc.6b02968
26. Mao YP, Mao SY, Ye ZG, Xie ZX, Zheng LS (2010) Size-dependences of the dielectric and ferroelectric properties of BaTiO3/polyvinylidene fluoride nanocomposites. J Appl Phys 108
27. Xia W, Yin Y, Xing J, Xu Z (2018) The effects of double-shell organic interfaces on the dielectric and energy storage properties of the P(VDF-CTFE)/BT@HBP@PDA-Ag nanocomposite films. Res Phys 11:877–884
28. Zhou T, Zha JW, Cui RY, Fan BH, Yuan JK, Dang ZM (2011) Improving dielectric properties of BaTiO3/ferroelectric polymer composites by employing surface hydroxylated BaTiO3 nanoparticles. ACS Appl Mater Interfaces 3:2184–2188. https://doi.org/10.1021/am200492q
29. Rahimabady M, Mirshekarloo MS, Yao K, Lu L (2013a) Dielectric behaviors and high energy storage density of nanocomposites with core-shell BaTiO3@TiO2 in poly(vinylidene fluoride-hexafluoropropylene). Phys Chem Chem Phys 15:16242–16248
30. Xie L, Huang X, Huang Y, Yang K, Jiang P (2013b) Core@double-shell structured BaTiO3–polymer nanocomposites with high dielectric constant and low dielectric loss for energy storage application. J Phys Chem C 117:22525–22537
31. Fan Y, Huang X, Wang G, Jiang P (2015) Core-shell structured biopolymer@BaTiO3 nanoparticles for biopolymer nanocomposites with significantly enhanced dielectric properties and energy storage capability. J Phys Chem C 119:27330–27339
32. Xie L, Huang X, Huang Y, Yang K, Jiang P (2013c) Core-shell structured hyperbranched aromatic polyamide/BaTiO3 hybrid filler for poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) nanocomposites with the dielectric constant comparable to that of percolative composites. ACS Appl Mater Interfaces 5:1747–1756
33. Marwat MA (2019) Largely enhanced discharge energy density in linear polymer nanocomposites by designing a sandwich structure. Compos Part A Appl Sci Manuf 121:115–122


34. Shen Y (2015) Modulation of topological structure induces ultrahigh energy density of graphene/Ba0.6Sr0.4TiO3 nanofiber/polymer nanocomposites. Nano Energy 18:176–186
35. Chen J (2018) Multilayered ferroelectric polymer films incorporating low-dielectric-constant components for concurrent enhancement of energy density and charge-discharge efficiency. Nano Energy 54:288–296
36. Chuntian C, Lei W, Xinmei L (2019) K0.5Na0.5NbO3–SrTiO3/PVDF polymer composite film with low remnant polarization and high discharge energy storage density. Polymers
37. Song Y, Shen Y, Hu P, Lin Y (2012) Significant enhancement in energy density of polymer composites induced by dopamine-modified Ba0.6Sr0.4TiO3 nanofibers. Appl Phys Lett:1–5. https://doi.org/10.1063/1.4760228
38. Jia Q, Huang X, Wang G, Diao J, Jiang P (2013) MoS2 nanosheet superstructures based polymer composites for high-dielectric and electrical energy storage applications. J Phys Chem C. https://doi.org/10.1021/acs.jpcc.6b02968
39. Gao L, He J, Hu J, Li Y (2013) Large enhancement in polarization response and energy storage properties of poly(vinylidene fluoride) by improving the interface effect in nanocomposites. J Phys Chem C
40. Yang K, Huang X, Huang Y, Xie L, Jiang P (2013) Fluoro-polymer@BaTiO3 hybrid nanoparticles prepared via RAFT polymerization: toward ferroelectric polymer nanocomposites. Chem Mater
41. Yu K, Niu Y, Xiang F, Zhou Y, Bai Y, Wang H (2013) Enhanced electric breakdown strength and high energy density of barium titanate filled polymer nanocomposites. J Appl Phys 114:174107
42. Yu K, Niu Y, Zhou Y, Bai Y, Wang H (2013) Nanocomposites of surface-modified BaTiO3 nanoparticles filled ferroelectric polymer with enhanced energy density. J Am Ceram Soc 96:2519–2524
43. (2014) Poly(vinylidene fluoride) polymer based nanocomposites with enhanced energy density by filling with polyacrylate elastomers and BaTiO3 nanoparticles. Appl Phys Lett 082904
44. Niu Y, Zhou Y, Wang H (2013) Poly(vinylidene fluoride) polymer based nanocomposites with significantly reduced energy loss by filling with core-shell structured BaTiO3/SiO2 nanoparticles. Appl Phys Lett. https://doi.org/10.1063/1.4795017
45. Hu P, Jia Z, Shen Z, Wang P, Liu X (2018) High dielectric constant and energy density induced by the tunable TiO2 interfacial buffer layer in PVDF nanocomposites containing core-shell structured TiO2@BaTiO3 nanoparticles. Appl Surf Sci 441:824–831
46. Rahimabady M, Mirshekarloo MS, Yao K, Lu L (2013b) Dielectric behaviors and high energy storage density of nanocomposites with core-shell BaTiO3@TiO2 in poly(vinylidene fluoride-hexafluoropropylene). Phys Chem Chem Phys 15:16242–16248
47. Tang H, Sodano HA (2013) High energy density nanocomposite capacitors using non-ferroelectric nanowires. Appl Phys Lett 063901
48. Ali M (2019) Sandwich structure-assisted significantly improved discharge energy density in linear polymer nanocomposites with high thermal stability. Colloids Surf A 581:123802
49. Zhang Y (2017) Enhanced electric polarization and breakdown strength in the all-organic sandwich-structured poly(vinylidene fluoride)-based dielectric film for high energy density capacitor. APL Mater 076109
50. Hu P (2014) Topological-structure modulated polymer nanocomposites exhibiting highly enhanced dielectric strength and energy density. Adv Funct Mater:1–7. https://doi.org/10.1002/adfm.201303684
51. Xie B (2018) Ultrahigh discharged energy density in polymer nanocomposites by designing linear/ferroelectric bilayer heterostructure. Nano Energy. https://doi.org/10.1016/j.nanoen.2018.10.041
52. Zhu Y (2019) High energy density polymer dielectrics interlayered by assembled boron nitride nanosheets. Adv Energy Mater 1901826:1–10
53. Jiang J (2019) Synergy of micro-/mesoscopic interfaces in multilayered polymer nanocomposites induces ultrahigh energy density for capacitive energy storage. Nano Energy 62:220–229


54. Feig VR, Tran H, Bao Z (2018) Biodegradable polymeric materials in degradable electronic devices. ACS Cent Sci. https://doi.org/10.1021/acscentsci.7b00595
55. Chandar JV, Shanmugan S, Mutharasu D, Aziz AA (2016) Dielectric and UV absorption studies of ZnO nanoparticles reinforced poly(3-hydroxybutyrate) biocomposites for UV applications. J Optoelectron Adv M 8(3):123–128
56. Qazi RA (2020) Eco-friendly electronics based on nanocomposites of biopolyester reinforced with carbon nanotubes: a review. Polym Technol Mater:1–24

Chapter 5

A Study of Aging-Related Bugs Prediction in Software System Satyendra Singh Chouhan, Santosh Singh Rathore, and Ritesh Choudhary

1 Introduction Aged software systems are often prone to performance degradation and an increased failure rate. This phenomenon is known as software aging. Its primary cause is aging-related bugs (ARBs), which accumulate in long-running software systems. Examples of ARBs are leaked memory, non-terminated threads, mishandling of files and locks, and disk fragmentation. ARBs eventually result in gradual resource attenuation, performance reduction and sometimes system crashes. A software system stuffed with these problems can cause serious damage to software quality and result in the loss of money or even human lives [5, 6]. Aging-related bugs are difficult to discover in the software testing phase due to their inherent nature. Early and accurate detection of these bugs can lead to the optimal utilization of testing resources and efforts and, eventually, to a more robust software system. Earlier works related to software aging focused on mitigating the negative effects of aging occurring at runtime. This is typically performed by estimating the time to aging failure and by triggering proactive recovery measures to bring the system to a safe state [8]. This approach is known as software rejuvenation [7].
S. S. Chouhan (B) Malaviya National Institute of Technology, Jaipur, India e-mail: [email protected] S. S. Rathore Department of IT, ABV-IIITM Gwalior, Gwalior, India e-mail: [email protected] R. Choudhary Manipal University Jaipur, Jaipur, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_5


However, the rejuvenation approach includes the cost of monitoring resource consumption and the cost of the unforeseen downtime caused by failures [11]. Recently, researchers have shown that static source code and complexity software metrics can be used to predict aging-related bugs in software systems using machine learning techniques [3, 14]. However, ARBs predictive modeling suffers from several model-building challenges, such as imbalanced datasets, differing feature scales, the selection of the classification algorithm, and so on.
Imbalanced dataset: Class imbalance is one of the main problems when building a prediction model for aging-related bug prediction. The available training ARBs datasets, augmented with different software metrics and bug information, are highly imbalanced or skewed [12]. A dataset is said to be imbalanced if the ratio of the buggy and non-buggy classes is far from equal. The class of interest (buggy) is in the minority, which makes it challenging to build an effective prediction model.
Feature scaling: This refers to the problem where different dataset attributes (software metrics or features) are on different scales and the classification algorithm used is sensitive to this. The training behaviour of many machine learning algorithms depends on the rate of change during optimization (for example, algorithms based on gradient descent), which is influenced by the scale of the features used. Therefore, in this case, it is required that all the features be brought to the same range or scale before building any prediction model [4].
Selection of classification algorithm: Previous works on ARBs prediction show that different classification algorithms produce varying prediction results, and none of the algorithms produced high accuracy together with low misclassification. One potential reason for this is the arbitrary selection of a classification algorithm as a black box. Results of ARBs prediction can be improved by selecting an appropriate classification algorithm [22].
The above-listed challenges motivated us to carry out work on ARBs prediction using standardization and instance-filtering techniques with various classification algorithms. In this paper, we investigate the effect of an instance-filtering technique (resampling) and a standardization technique on the performance of classification algorithms when building a prediction model for aging-related bug prediction. The instance-filtering technique helps in increasing the population of the minority class instances, which eventually leads to a more accurate prediction model. We perform an experimental study on three different software project datasets to evaluate the effectiveness of the presented techniques for aging-related bug prediction. First, we apply the standardization technique, i.e., standardizing the dataset to a common scale. Second, we apply the instance-filtering technique. Finally, we use four different classification algorithms, logistic regression, support vector classifiers (SVC), random forests, and artificial neural networks (ANN) with Softmax function. The results showed that static source code and complexity metrics can be used to build prediction models for aging-related bugs. Further, the results show that the use of the instance-filtering technique for handling the class imbalance problem helps in improving the performance of the classification algorithms.
Following are the research contributions of the presented work.


1. The study presents an investigation of the effect of standardization and instance filtering (resampling) on the performance of ARBs prediction models.
2. The presented study considers four different state-of-the-art classification algorithms for ARBs prediction. To the best of our knowledge, these algorithms have not been considered before for ARBs prediction.
3. We carry out three sets of experiments (without preprocessing, instance filtering, and standardization + resampling) to evaluate the performance of the classification algorithms using measures such as accuracy, precision, recall, f-measure, and AUC.
The arrangement of the various sections of the paper is as follows. Section 2 presents related work on ARBs prediction. Section 3 describes the elements of the experimental study and presents the details of the used classification algorithms and ARBs datasets. Section 4 presents the results and analysis of the experimental study. Section 5 concludes the paper.

2 Related Work Cotroneo et al. [2] presented a study to investigate the influence of static software metrics on software aging prediction. The experimental study used different size and complexity metrics and built prediction models for ten software projects affected by aging. The authors found that static software metrics have a direct correlation with software aging. In another study, Cotroneo et al. [3] built a prediction model to localize aging-related bugs in software systems. The study was carried out on three large software projects having ARBs information and information on different complexity metrics. The authors found that the complexity metrics used for model building contributed to improved prediction performance for all software projects. Qin et al. [20] performed a study on cross-project aging-related bug prediction. The authors proposed an approach named TLAP (transfer learning-based aging-related bug prediction) that uses a transfer learning technique to reduce the distribution difference between training sets and testing sets. Further, to handle the class imbalance problem in model building, the authors conducted class imbalance learning on the transferred latent space. They evaluated the effectiveness of the proposed approach on two software systems and found that TLAP improves the performance of the ARB prediction model. Recently, Kumar and Sureka [13, 14] investigated the application of five different feature selection techniques and five different class imbalance handling techniques to counter the effect of class imbalance in building machine learning-based prediction models. The authors performed the experimental study on two large open-source software systems, namely, Linux and MySQL. Results of the analysis showed that random under-sampling performed the best among the used techniques for handling the class imbalance problem. Further, the authors suggested that the use of dimensionality reduction


and class imbalance handling techniques could help in improving the overall performance of the prediction model. Cotroneo et al. [1] published the software aging and rejuvenation repository (SARRY) to support research on aging-related bug prediction. The availability of this repository provides a good opportunity for the research community to carry out more research and to report more insights into the prediction of aging-related bugs.

3 Elements of Experimental Methodology An overview of the experimental methodology is given in Fig. 1. The experimental analysis presented in this paper consists of three sets of experiments. In the first set, we build the prediction models using the original aging-related bug datasets without any preprocessing. In the second set, we apply an instance-filtering technique with the help of the SMOTE algorithm and build the prediction models. Finally, we apply a combination of standardization and SMOTE to preprocess the datasets and build the models. For all three experiments, we use four different classification algorithms, namely, logistic regression, support vector classifiers (SVC), random forests, and ANN with Softmax function, to build the models, and for each classification algorithm we assess its performance using various performance evaluation measures. Lastly, we compare the results of the three sets of experiments to evaluate the effect of the standardization and instance-filtering techniques when predicting aging-related bugs in software systems (a code sketch of the three configurations follows Fig. 1).

Fig. 1 Overview of the experimental methodology
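A minimal sketch of the three configurations, assuming scikit-learn and imbalanced-learn (the paper reports using Scikit-Learn with default hyper-parameters, see Sect. 4, but the dataset here is a placeholder and the classifier choice is just one of the four studied):

```python
# The three experimental configurations: (a) no preprocessing,
# (b) SMOTE resampling only, (c) standardization followed by SMOTE.
# X, y are placeholders standing in for one of the ARB datasets.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X = np.random.rand(400, 82)                 # 82 software metrics per module
y = np.random.randint(0, 2, 400)            # 1 = aging-related bug, 0 = clean

configs = {
    "no preprocessing": Pipeline([("clf", RandomForestClassifier())]),
    "resampling": Pipeline([("smote", SMOTE()),
                            ("clf", RandomForestClassifier())]),
    "standardization + resampling": Pipeline([("scale", StandardScaler()),
                                              ("smote", SMOTE()),
                                              ("clf", RandomForestClassifier())]),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
for name, pipe in configs.items():
    pipe.fit(X_tr, y_tr)   # imblearn applies SMOTE only to the training data
    print(name, pipe.score(X_te, y_te))
```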


3.1 Instance-Filtering Technique (SMOTE) Instance filtering is also known as resampling. SMOTE is a technique used to balance the populations of the majority and minority classes by generating synthetic samples of minority class data points. In SMOTE, minority class samples are generated by taking each existing minority class sample and creating synthetic examples along the line segments joining it to its k nearest minority class neighbours. The value of k (nearest neighbours) depends upon the amount of over-sampling required, and this value can be chosen randomly [18]. The selection of the k nearest neighbours is performed using the Euclidean distance measure. A synthetic sample is generated by taking the difference between a sample and its nearest neighbour, multiplying this difference by a random number between 0 and 1, and adding it to the sample. This selects a random point along the line segment between the two feature vectors and effectively forces the decision region of the minority class to become more general (see the sketch below). Once the dataset is balanced after the application of SMOTE, the bias that pre-existed while the dataset was imbalanced is removed. However, high variance is generated, since the data points are widely scattered throughout the feature space; this situation of low bias and high variance (or vice versa) is called the bias-variance trade-off, and it decreases the performance of the model. So, to get away from this problem, a technique called standardization is used to standardize the instances in the feature space.
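The interpolation step just described can be written down directly. The sketch below is a simplified, illustrative version of SMOTE's core idea; production implementations such as imbalanced-learn's SMOTE add further machinery (sampling ratios, edge cases, and so on).

```python
# Simplified illustration of SMOTE's core step: each synthetic minority
# sample lies at a random point on the line segment between a minority
# sample and one of its k nearest minority-class neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
minority = rng.random((20, 5))               # placeholder minority-class points

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(minority)  # +1 skips the point itself
_, idx = nn.kneighbors(minority)

synthetic = []
for i, neighbours in enumerate(idx):
    j = rng.choice(neighbours[1:])           # pick one of the k nearest neighbours
    lam = rng.random()                       # random number in [0, 1)
    # x_new = x_i + lam * (x_j - x_i): the difference between the sample
    # and its neighbour, scaled by lam and added back to the sample.
    synthetic.append(minority[i] + lam * (minority[j] - minority[i]))

synthetic = np.asarray(synthetic)
print(synthetic.shape)                       # one synthetic point per minority sample
```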

3.2 Standardization Standardization is a feature scaling technique in which features are scaled such that each feature fulfils the properties of a normal distribution with mean (μ) = 0 and standard deviation (σ) = 1. The standard score (also called the z-score) of a sample is calculated as follows [4]:

z = (x − μ) / σ    (1)

Standardization of features is important when different features in the given dataset are on different scales. Learning from such features without converting them to a common scale may introduce bias and variance into the prediction model. It is also a general requirement for many machine learning algorithms.
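Equation (1) maps directly onto code; the sketch below shows it both by hand and via scikit-learn's StandardScaler (the use of this particular helper is an assumption about tooling, since the paper only states that standardization was applied):

```python
# z-score standardization per Eq. (1): subtract each feature's mean and
# divide by its standard deviation, giving mean 0 and std 1 per feature.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])                 # two features on very different scales

z_manual = (X - X.mean(axis=0)) / X.std(axis=0)
z_sklearn = StandardScaler().fit_transform(X)

print(np.allclose(z_manual, z_sklearn))      # True: both implement Eq. (1)
print(z_manual.mean(axis=0), z_manual.std(axis=0))  # ~[0 0] and [1 1]
```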


3.3 Used Classification Algorithms We have used four different classification algorithms, namely, logistic regression, support vector classifier, random forests, and artificial neural networks with Softmax function, to build the prediction models. Details of these algorithms are as follows.
Logistic Regression: Logistic regression (LR) is a type of regression method where the prediction variable is of binary type. LR estimates the probability that a given dataset example belongs to a particular class. Subsequently, a threshold value (generally 50%) is used to classify the given dataset example into one of the output classes [10]. If the estimated probability is greater than the threshold value, the example belongs to the positive class; otherwise, it belongs to the negative class. A logistic regression model computes a weighted sum of the input features (plus a bias term) and outputs the logistic of this result:

p = h_θ(x) = σ(θ^T · x)    (2)

The logistic σ(·) is a sigmoid function (i.e., S-shaped) that outputs a number between 0 and 1.
Support Vector Classifier (SVC): SVC is a machine learning algorithm used to solve two-group classification problems. It operates in a high number of dimensions. Instead of picking an arbitrary line between two sets of data, it maximizes the margin between them; hence, it generates the best decision boundary, or hyperplane, between the two classes. It uses a kernel function, which helps to get the best fit for the model, thereby generating the best results [21].
Random forest: It is a type of ensemble method for classification. The working of a random forest involves the generation of a set of decision trees, where each tree is trained on a subset of the training dataset. Usually, the C4.5 algorithm is used to generate the individual trees. For prediction, each trained tree outputs a class label, and the class with the most votes becomes the final prediction of the model [15].
Artificial Neural Network (with Softmax function): It is a robust function approximator that takes an arbitrary set of inputs and fits them to an arbitrary set of outputs, which here are binary. It consists of an input layer, hidden layers, neurons, an output layer and a training algorithm. Its specialty lies in its use of hidden layers of weighted functions (called neurons), with which a network can be built that maps a large family of functions. We provide inputs to a neural network model via the input layer, and they are processed by the weighted functions in the hidden layers. The output is generated by the output layer, and its form depends on the activation function used. There is a variety of such functions, e.g., sigmoid, tanh and Softmax. We used the Softmax function, which transforms the outputs into two categorical labels of 0 and 1, hence apt for a binary classification problem [9, 16].
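The four models above can be instantiated as in the sketch below; since the paper reports using Scikit-Learn and Keras with library-default hyper-parameters (see Sect. 4), the sketch follows that setup, but the ANN's exact topology is an assumption, as the paper does not spell it out.

```python
# The four classifiers used in the study, with library defaults.
# The ANN architecture (one hidden layer of 64 units) is illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

log_reg = LogisticRegression()
svc = SVC()
random_forest = RandomForestClassifier()

# ANN with a Softmax output layer: two output units, one per class,
# whose activations sum to 1 and act as class probabilities.
ann = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(82,)),  # 82 metrics
    keras.layers.Dense(2, activation="softmax"),
])
ann.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",  # integer labels 0/1
            metrics=["accuracy"])
```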


Table 1 Description of the used aging-related bug datasets

Dataset | Total modules | Non-faulty modules | Faulty modules | % of non-faulty modules | % of faulty modules
Dataset_Linux_net | 2292 | 2283 | 9 | 99.61 | 0.39
Dataset_Mysql_innodb | 402 | 370 | 32 | 92.04 | 7.96
Dataset_Linux_scsi | 962 | 958 | 4 | 99.58 | 0.42

3.4 Experimental Datasets The datasets used in the presented study correspond to two different software projects, namely, Linux and MySQL; three datasets drawn from these projects have been used. These datasets are publicly available in the PROMISE data repository (the software aging and rejuvenation repository: http://openscience.us/repo/software-aging). Details about the datasets are given in Table 1. All three used datasets contain aging-related bug information and various source code and complexity metrics (82 software metrics). The set of software metrics includes metrics such as program-size-related metrics, cyclomatic complexity, Halstead metrics and aging-related metrics. These software metrics are used as the independent variables, and the aging-related bug information of each file is used as the dependent variable. The percentage of minority class (buggy) modules varies across the three datasets, which makes them a good subject of experiment in order to generalize the findings of the study.
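As a quick arithmetic check of the imbalance percentages in Table 1 (for example, 9 faulty modules out of 2292 is roughly 0.39%), the module counts can be recomputed directly:

```python
# Recomputing the class-imbalance percentages of Table 1 from the
# (total, faulty) module counts as given in the table.
datasets = {
    "Dataset_Linux_net":    (2292, 9),
    "Dataset_Mysql_innodb": (402, 32),
    "Dataset_Linux_scsi":   (962, 4),
}
for name, (total, faulty) in datasets.items():
    pct = 100.0 * faulty / total
    print(f"{name}: {pct:.2f}% faulty, {100 - pct:.2f}% non-faulty")
```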

3.5 Performance Evaluation Measures In this work, the performance assessment of the built prediction models has been done using five different performance evaluation measures: accuracy, precision, recall, f-measure, and AUC (area under the ROC curve). All performance measures are calculated from the confusion matrix [19], where TP = true positive, TN = true negative, FP = false positive and FN = false negative. The description of these performance measures is as follows.
Accuracy: It calculates the ratio of correctly predicted faulty and non-faulty modules to all the modules in the given dataset:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (3)

Precision: Precision measures the pertinence of the results. It calculates the ratio of modules correctly predicted as faulty out of all modules predicted as faulty:

Precision = TP / (TP + FP)    (4)

Recall: Recall measures how many of the relevant results are found. It calculates the ratio of modules correctly predicted as faulty out of all the faulty modules:

Recall = TP / (FN + TP)    (5)

F-measure: It calculates the harmonic mean of the precision and recall values of a prediction model:

F-measure = 2 × Precision × Recall / (Precision + Recall)    (6)

Area under ROC curve (AUC): AUC provides a visualization of the trade-off between the ability to correctly predict fault-prone modules and the number of incorrectly predicted fault-free modules.
Statistical Test: We have conducted a statistical test to assess whether the used classification algorithms performed significantly differently from each other for ARBs prediction. The two-tailed Mann–Whitney test [17] has been used for this purpose, with the significance level α set to 0.05, i.e., a 95% confidence interval. This means that if the p-value is higher than 0.05, the pair of algorithms has not performed statistically significantly differently from each other; if the p-value is lower than 0.05, the pair has performed statistically significantly differently.
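The measures in Eqs. (3)–(6) and the statistical test map directly onto standard library calls; the sketch below assumes scikit-learn and SciPy, with made-up label and score vectors purely for illustration.

```python
# Computing the five evaluation measures and the two-tailed
# Mann-Whitney U test. All inputs here are placeholder values.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from scipy.stats import mannwhitneyu

y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.2, 0.7, 0.9, 0.4, 0.3, 0.8, 0.2]   # predicted P(faulty)

print("Accuracy :", accuracy_score(y_true, y_pred))   # Eq. (3)
print("Precision:", precision_score(y_true, y_pred))  # Eq. (4)
print("Recall   :", recall_score(y_true, y_pred))     # Eq. (5)
print("F-measure:", f1_score(y_true, y_pred))         # Eq. (6)
print("AUC      :", roc_auc_score(y_true, y_prob))

# Comparing two algorithms' per-run scores at alpha = 0.05.
scores_a = [0.91, 0.93, 0.90, 0.95, 0.92]
scores_b = [0.85, 0.88, 0.84, 0.86, 0.87]
stat, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print("p-value  :", p, "significant" if p < 0.05 else "not significant")
```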

4 Results and Analysis This section presents the results of the experimental analysis discussed in Sect. 3. We have used the Scikit-Learn (https://scikit-learn.org/) and Keras (https://keras.io/) machine learning libraries, available in the Python programming language, to build the prediction models. All the used classification algorithms, the instance-filtering technique, and the standardization technique were implemented in Python with the default values of the various hyper-parameters as available in the Python libraries. The observations drawn from Tables 2, 3 and 4 for the three used software projects are summarized as follows.
• With reference to the accuracy measure, values are highest in the case of no preprocessing for all three used software projects and all classification algorithms. However, for the other considered performance measures, the values are relatively low in the no-preprocessing case. This phenomenon is likely due to model over-fitting: the accuracy values come out high because the majority class is predicted well, and the models are biased towards the majority class. The accuracy values decreased in the preprocessing 1 and preprocessing 2 cases.


Table 2 Comparison of performance metrics for various models for project-1: Linux Driver SCSI

Experimental method | Metric | Logistic regression | SVM | Random forest | ANN (with Softmax)
Without preprocessing | Accuracy | 1 | 0.9912 | 0.9945 | 0.993
 | ROC | 0.5 | 0.5 | 0.5 | 0.5
 | Precision | 0.01 | 0.01 | 0.01 | 0.01
 | F-Score | 0.01 | 0.01 | 0.01 | 0.01
 | Recall | 0.01 | 0.01 | 0.01 | 0.01
Preprocessing 1 (resampling) | Accuracy | 0.9128 | 0.502 | 0.9973 | 0.507
 | ROC | 0.95 | 1 | 1 | 0.5
 | Precision | 0.05 | 1 | 1 | 0.01
 | F-Score | 0.09 | 1 | 1 | 0.01
 | Recall | 1 | 1 | 1 | 0.01
Preprocessing 2 (standardization + resampling) | Accuracy | 0.9709 | 0.9869 | 0.9973 | 0.9985
 | ROC | 0.98 | 0.99 | 1 | 0.99
 | Precision | 0.12 | 0.25 | 1 | 0.99
 | F-Score | 0.22 | 0.4 | 1 | 1
 | Recall | 1 | 1 | 1 | 1

Table 3 Comparison of performance metrics for various models for project-2: Linux Driver net

Experimental method | Metric | Logistic regression | SVM | Random forest | ANN (with Softmax)
Without preprocessing | Accuracy | 0.9941 | 0.997 | 0.997 | 0.992
 | ROC | 0.49 | 0.5 | 0.5 | 0.5
 | Precision | 0.01 | 0.01 | 0.01 | 0.01
 | F-Score | 0.01 | 0.01 | 0.01 | 0.01
 | Recall | 0.01 | 0.01 | 0.01 | 0.01
Preprocessing 1 (resampling) | Accuracy | 0.8512 | 0.501 | 0.9945 | 0.4985
 | ROC | 0.92 | 1 | 1 | 0.5
 | Precision | 0.05 | 1 | 1 | 0.01
 | F-Score | 0.09 | 1 | 1 | 0.01
 | Recall | 1 | 1 | 1 | 0.01
Preprocessing 2 (standardization + resampling) | Accuracy | 0.9563 | 0.9879 | 0.9956 | 0.9896
 | ROC | 0.97 | 0.98 | 0.99 | 0.98
 | Precision | 0.14 | 0.25 | 0.8 | 0.98
 | F-Score | 0.24 | 0.4 | 0.89 | 0.99
 | Recall | 1 | 1 | 1 | 1



Table 4 Comparison of performance metrics for various models for project-3: MySQL Innodb

Experimental method | Metric | Logistic regression | SVM | Random forest | ANN (with Softmax)
Without preprocessing | Accuracy | 0.8571 | 0.9166 | 0.8852 | 0.909
 | ROC | 0.58 | 0.5 | 0.52 | 0.5
 | Precision | 0.2 | 0.01 | 0.12 | 0.01
 | F-Score | 0.22 | 0.01 | 0.12 | 0.01
 | Recall | 0.25 | 0.01 | 0.12 | 0.01
Preprocessing 1 (resampling) | Accuracy | 0.7821 | 0.5135 | 0.9256 | 0.7882
 | ROC | 0.76 | 1 | 1 | 0.79
 | Precision | 0.23 | 1 | 1 | 0.71
 | F-Score | 0.35 | 1 | 1 | 0.82
 | Recall | 0.75 | 1 | 1 | 0.96
Preprocessing 2 (standardization + resampling) | Accuracy | 0.8712 | 0.8688 | 0.9175 | 0.9923
 | ROC | 0.87 | 0.85 | 1 | 0.88
 | Precision | 0.37 | 0.3 | 1 | 0.85
 | F-Score | 0.52 | 0.45 | 1 | 0.88
 | Recall | 0.88 | 0.88 | 1 | 0.92

• For the preprocessing 1 case, accuracy values decreased while the values of the other performance measures increased. This is true for all used classification algorithms. One potential reason is that, by applying the resampling technique, the populations of minority and majority class samples became balanced, which resulted in better predictive model learning.
• For the preprocessing 2 case, accuracy values reduced marginally (as compared to the no-preprocessing case), while the values of the other performance measures increased significantly. This is true for all used classification algorithms. The rationale is that, by applying standardization, all the software metrics (features) are scaled to a common range, leading to better predictive model learning.
• Among the used classification algorithms, SVC and random forest produced the best results, followed by logistic regression. ANN (with Softmax) performed moderately: in some cases it produced better results, whereas in other cases it produced poor results.
Overall, we found that the use of the standardization and instance-filtering (resampling) techniques helped improve the performance of the prediction models for ARBs. Additionally, they resulted in better model learning and reduced bias in the prediction.
Results reported in Table 5 show that, in most of the cases, pairs of classification algorithms show no statistically significant performance difference from each other. Statistically significant performance differences have been found for the pairs LR:RF for all five performance measures, SVM:RF for accuracy, and RF:ANN for precision, f-score, and ROC. From the table, it can be concluded that, in most cases, the selection of a classification algorithm does not significantly affect the performance of the ARBs prediction model.

Table 5 Results of the statistical test (two-tailed Mann-Whitney test); entries marked with * show a statistically significant difference. LR = logistic regression, RF = random forest, ANN = artificial neural network, SVM = support vector machine

Pair | Accuracy p (sig. diff.) | Precision p (sig. diff.) | Recall p (sig. diff.) | F-score p (sig. diff.) | ROC p (sig. diff.)
LR versus SVM | 0.39 (No) | 0.12 (No) | 0.43 (No) | 0.12 (No) | 0.14 (No)
LR versus RF | 0.03 (Yes*) | 0.03 (Yes*) | 0.33 (No) | 0.04 (Yes*) | 0.05 (Yes*)
LR versus ANN | 0.5 (No) | 0.5 (No) | 0.15 (No) | 0.5 (No) | 0.29 (No)
SVM versus RF | 0.01 (Yes*) | 0.21 (No) | 0.36 (No) | 0.21 (No) | 0.22 (No)
SVM versus ANN | 0.23 (No) | 0.21 (No) | 0.14 (No) | 0.27 (No) | 0.08 (No)
RF versus ANN | 0.055 (No) | 0.03 (Yes*) | 0.06 (No) | 0.04 (Yes*) | 0.019 (Yes*)



5 Conclusions In this paper, we investigated the effect of standardization and instance filtering on the prediction of aging-related bugs in software systems. The study was three-fold. In the first fold, we built the prediction models on the original aging-related bug datasets and evaluated the results. In the second fold, we built the prediction models on the datasets after applying the instance-filtering technique (SMOTE) and evaluated the results. In the third fold, we built the prediction models by standardizing the datasets as well as applying the instance-filtering technique (SMOTE) and evaluated the results. We found that static source code and complexity metrics can be used to build prediction models for aging-related bugs. Further, we found that the use of techniques such as standardization and instance filtering for handling the class imbalance problem helps in improving the performance of the classification algorithms.

References
1. Cotroneo D, Iannillo AK, Natella R, Pietrantuono R, Russo S (2015) The software aging and rejuvenation repository. In: IEEE international symposium on software reliability engineering workshops (ISSREW), pp 108–113. https://openscience.us/repo/software-aging
2. Cotroneo D, Natella R, Pietrantuono R (2010) Is software aging related to software metrics? In: IEEE second international workshop on software aging and rejuvenation (WoSAR), pp 1–6
3. Cotroneo D, Natella R, Pietrantuono R (2013) Predicting aging-related bugs using software complexity metrics. Perform Eval 70(3):163–178
4. Gal M, Rubinfeld DL (2018) Data standardization
5. Grottke M, Li L, Vaidyanathan K, Trivedi KS (2006) Analysis of software aging in a web server. IEEE Trans Reliab 55(3):411–420
6. Grottke M, Matias R, Trivedi KS (2008) The fundamentals of software aging. In: IEEE international conference on software reliability engineering workshops (ISSRE Wksp) 2008, pp 1–6. IEEE
7. Grottke M, Nikora AP, Trivedi KS (2010) An empirical investigation of fault types in space mission system software. In: IEEE/IFIP international conference on dependable systems and networks (DSN). IEEE, pp 447–456
8. Grottke M, Trivedi KS (2007) Fighting bugs: remove, retry, replicate, and rejuvenate. Computer 40(2)
9. Haykin SS et al (2009) Neural networks and learning machines. Prentice Hall, New York
10. Hosmer DW, Lemeshow S, Sturdivant RX (2013) Applied logistic regression, vol 398. Wiley
11. Huang Y, Kintala C, Kolettis N, Fulton ND (1995) Software rejuvenation: analysis, module and applications. In: Twenty-fifth international symposium on fault-tolerant computing. Digest of papers, pp 381–390


12. Japkowicz N, Stephen S (2002) The class imbalance problem: a systematic study. Intell Data Anal 6(5):429–449
13. Kumar L, Sureka A (2017) Aging related bug prediction using extreme learning machines. In: IEEE India Council international conference
14. Kumar L, Sureka A (2018) Feature selection techniques to counter class imbalance problem for aging related bug prediction: aging related bug prediction. In: Proceedings of the 11th innovations in software engineering conference, p 2
15. Liaw A, Wiener M et al (2002) Classification and regression by random forest. R News 2(3):18–22
16. Liu W, Wen Y, Yu Z, Yang M (2016) Large-margin softmax loss for convolutional neural networks. ICML 2:7
17. Nachar N et al (2008) The Mann-Whitney U: a test for assessing whether two independent samples come from the same distribution. Tutorials Quant Methods Psychol 4(1):13–20
18. Nghe NT, Janecek P, Haddawy P (2007) A comparative analysis of techniques for predicting academic performance. In: Frontiers in education conference global engineering: knowledge without borders, opportunities without passports, pp T2G–7
19. Powers DM (2011) Evaluation: from precision, recall and f-measure to ROC, informedness, markedness and correlation
20. Qin F, Zheng Z, Bai C, Qiao Y, Zhang Z, Chen C (2015) Cross-project aging related bug prediction. In: IEEE international conference on software quality, reliability and security (QRS), pp 43–48
21. Suykens JA, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300
22. Torquato M, Araujo J, Umesh I, Maciel P (2018) SWARE: a methodology for software aging and rejuvenation experiments. J Inform Syst Eng Manage 3(2):15

Chapter 6

Mutual Authentication of IoT Devices Using Kronecker Product on Secure Vault Shubham Agrawal and Priyanka Ahlawat

1 Introduction

IoT is interpreted as a collection of interconnected objects called sensor nodes, which have sensing capabilities along with network connectivity. IoT has grown to the point that it is widely used in various sectors and domains such as smart cities, smart homes, the military, transportation monitoring, public healthcare systems, baby monitoring, environment monitoring and many other common public areas where monitoring is necessary [1]. The sensors that are deployed in an environment for the collection of data are durable and, with the evolution of technology, are able to exhibit human-like intelligence for analysing and processing data. There are various limitations linked with IoT sensor nodes, such as little memory, low computational power and little available storage [2]. Authentication refers to the process of identifying a user or a device on the network. Authentication of sensor nodes is a vital factor for the Internet of things. As the IoT environment has grown to a large scale, intruders are finding IoT an attractive domain for their illegal actions. Authentication is a vital factor in IoT as it encourages people to securely use IoT technology [3]. In past years, authentication algorithms have mostly relied on authenticating a device by using just a username and password [4]. These algorithms are easily compromised


by implementing side-channel attacks and dictionary attacks. Another disadvantage of these algorithms is the need for frequent changing of passwords.

The organization of the rest of the paper is as follows: Sect. 2 contains the related work on authentication schemes. Section 3 includes the challenges and the security issues. Section 4 provides the proposed authentication scheme. Section 5 contains the performance analysis of the proposed scheme. Section 6 shows the security analysis of the proposed scheme. Section 7 compares our technique with other authentication techniques. Finally, Sect. 8 concludes the paper and covers future work.

2 Related Work

In this section, we review the various authentication protocols developed over the years. There are mainly three desirable features of an authentication protocol: being lightweight, mutual authentication and privacy protection. Shah et al. [5] proposed an authentication technique which uses a secure vault consisting of n keys, each of size m. Their secure vault is completely stored inside the secure server database as well as inside the IoT device. The algorithm uses AES encryption and a challenge-response mechanism, which gives a challenge task to the server as well as to the device. Depending upon the response received from the server or the device, the next step of authentication takes place. After every authentication phase, the value of the secure vault changes, which increases the computational load on the resource-constrained devices. Xiaopeng et al. [6] proposed a virtual technique which senses the touch of the user on the IoT devices. Based on the capability of the devices to sense the user's touch, they developed a secure authentication technique which uses physical operations. Since the proposed mechanism does not require any physical modification of the devices, it is suitable for commercial usage. Alizai et al. [7] proposed an authentication scheme which uses digital signatures to authenticate a device. The proposed algorithm allows an IoT device to stay in the network only if it passes through a multi-factor authentication phase. The proposed scheme is resistant to most attacks, such as the man-in-the-middle attack. Dammak et al. [8] presented a lightweight, token-based authentication mechanism which uses tokens to authenticate the devices. Gargaro et al. [9] presented a mechanism which authenticates the IoT devices in the network as well as authorizes the access of the devices. Agrawal et al. [10] summarize various matrix-based key management schemes which are used to set up a communication key between communicating nodes. Kamal et al. [11] present a lightweight matrix-based key management scheme for wireless sensor networks, which improves the storage cost of the nodes by using an encryption technique.


We observed that researchers have proposed a large number of authentication schemes, and still, there is a need to provide a good trade-off between security and efficiency. How to make the authentication scheme efficient in terms of storage cost is still an open research issue. This motivated us to present our proposed scheme.

3 Security Issues and Challenges

Various attacks that are possible on IoT networks are presented as follows:

i. Man-in-the-middle attack: In this type of attack, the attacker secretly listens to the data transmitted, and intercepts or alters the data transmitted between two parties. The two parties assume that they are directly communicating with each other without any change in the data.
ii. Denial-of-service (DoS) attack: In this type of attack, the attacker prevents the authorized users from accessing the server. The attacker makes the resources of the server or the network unavailable to the end users by temporarily disrupting the services provided to the legitimate users.
iii. Distributed denial-of-service (DDoS) attack: In this type of attack, the attacker uses a large number of devices to perform a DoS attack on a network. It is a larger version of the DoS attack, as it uses multiple devices to flood the server and jam the network.
iv. Replay attack: In this type of attack, the attacker stores a piece of valid data that is transmitted between two parties or hosts by eavesdropping on the communication channel. Later, he/she re-transmits that piece of data to one of the hosts, without having authorization to transfer data.
v. Side-channel attack: These attacks are carried out using the hardware implementation of the system rather than software information. The attacks are based on factors like the execution time of a process, the storage utilization of a process, the power consumption and communication cost of a process, electromagnetic interference, etc.
vi. Node capture attack: In this type of attack, the attacker can compromise a node and gather all the vital information, such as the encryption key and the data sent and received. The attacker can clone the nodes in the network to compromise its security.

4 Proposed Methodology

In this paper, we propose a new secure authentication mechanism to authenticate an IoT device in a network. The authentication of an IoT device is necessary because, before communicating with a server, the IoT device must prove its identity to the server so that the necessary services can be provided to it. Most of the protocols for authentication are based on a single factor, which is vulnerable to


attacks like dictionary attacks and side-channel attacks. The keys in the secure vault are changed after a certain amount of time, which prevents dictionary attacks.

4.1 Assumption

• An initial secure vault is constructed, which is an N × N symmetric matrix, where N represents the number of sensor nodes in the network.
• Each sensor device is identified using a unique Id number, which is provided before the deployment of the sensors into the network.
• The secure vault is completely stored in the server, and only some part of the vault is stored inside each of the sensor nodes.
• The vault stored in the server cloud is secured inside the protected database.
• The channel through which the communication takes place between the devices and the server is secure and reliable.
• All the communications are done over a secure wireless network.

4.2 Kronecker Product

The Kronecker product is a very different operation from matrix multiplication [12]. Let P be an (m × n) matrix and Q be an (r × s) matrix. The Kronecker product of P and Q, denoted P ⊗ Q, is an (mr × ns) matrix whose elements are generated by multiplying each element of P by the entire matrix Q:

P ⊗ Q = ⎡ p11 Q  · · ·  p1n Q ⎤
        ⎢   ⋮      ⋱      ⋮   ⎥
        ⎣ pm1 Q  · · ·  pmn Q ⎦

Let us now understand the operation using an example. Let

P = ⎡ 3  2 ⎤  and  Q = ⎡ 2  3 ⎤
    ⎣ 5  1 ⎦           ⎣ 7  6 ⎦

Then

P ⊗ Q = ⎡ 3×2  3×3  2×2  2×3 ⎤   ⎡  6   9   4   6 ⎤
        ⎢ 3×7  3×6  2×7  2×6 ⎥ = ⎢ 21  18  14  12 ⎥ = K
        ⎢ 5×2  5×3  1×2  1×3 ⎥   ⎢ 10  15   2   3 ⎥
        ⎣ 5×7  5×6  1×7  1×6 ⎦   ⎣ 35  30   7   6 ⎦
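For concreteness, the following C sketch computes this operation for the 2 × 2 example above; the fixed sizes, array names and printing loop are our own choices for the demo, not part of the chapter:

#include <stdio.h>

#define M 2              /* order of P */
#define R 2              /* order of Q */

/* Fill K (an MR x MR matrix) with the Kronecker product P (x) Q:
 * block (i, j) of K is p_ij * Q. */
void kronecker(const int P[M][M], const int Q[R][R], int K[M * R][M * R]) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < M; j++)
            for (int r = 0; r < R; r++)
                for (int s = 0; s < R; s++)
                    K[i * R + r][j * R + s] = P[i][j] * Q[r][s];
}

int main(void) {
    int P[M][M] = {{3, 2}, {5, 1}};
    int Q[R][R] = {{2, 3}, {7, 6}};
    int K[M * R][M * R];

    kronecker(P, Q, K);
    for (int i = 0; i < M * R; i++) {   /* prints the 4 x 4 matrix K above */
        for (int j = 0; j < M * R; j++)
            printf("%4d", K[i][j]);
        printf("\n");
    }
    return 0;
}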

4.3 Secure Vault

The secure vault (K) is an N × N matrix, where N denotes the total number of sensor nodes present in the network. We denote the elements of the secure vault as K[i, j], where i and j are the index numbers used to locate the elements in the matrix. The IoT devices store only a small amount of data from the secure vault. The server, however, stores the entire vault, since the server authenticates the devices. Since the secure vault used in the proposed algorithm is a symmetric matrix, we assume that the matrices P and Q are also symmetric. Let

P = ⎡  5   25 ⎤  and  Q = ⎡ 292  134 ⎤
    ⎣ 25  105 ⎦           ⎣ 134   30 ⎦

Then

P ⊗ Q = ⎡  5×292   5×134   25×292   25×134 ⎤   ⎡ 1460   670   7300   3350 ⎤
        ⎢  5×134   5×30    25×134   25×30  ⎥ = ⎢  670   150   3350    750 ⎥ = K
        ⎢ 25×292  25×134  105×292  105×134 ⎥   ⎢ 7300  3350  30660  14070 ⎥
        ⎣ 25×134  25×30   105×134  105×30  ⎦   ⎣ 3350   750  14070   3150 ⎦

4.4 Pre-processing the Matrix

Let the secure vault be denoted as K, which is a symmetric matrix. Let us have two matrices P and Q, which are also symmetric, such that they satisfy the following equation:

P ⊗ Q = K

Now we are going to decompose the matrix P and the matrix Q such that:


P = A·B and Q = C·D

where (·) represents normal matrix multiplication. The matrices P and Q are decomposed into two matrices each. Matrices B and D are made publicly available, and matrices A and C are kept private. Each sensor node is assigned some values from the matrices A and C. For the example vault of Sect. 4.3,

K = (A·B) ⊗ (C·D)
  = ⎛⎡  3   4 ⎤ · ⎡ −1  3 ⎤⎞ ⊗ ⎛⎡ 10  42 ⎤ · ⎡ 4  5 ⎤⎞
    ⎝⎣ 11  18 ⎦   ⎣  2  4 ⎦⎠   ⎝⎣ −4  25 ⎦   ⎣ 6  2 ⎦⎠
  = ⎡  5   25 ⎤ ⊗ ⎡ 292  134 ⎤
    ⎣ 25  105 ⎦   ⎣ 134   30 ⎦

4.5 Deployment Phase

After the previous phase, we will have four submatrices, out of which two are kept private to the network devices and the server, and two are publicly available to anyone in the network, including intruders.

Matrices A = ⎡  3   4 ⎤ and B = ⎡ −1  3 ⎤ are decomposed from matrix P, and
             ⎣ 11  18 ⎦         ⎣  2  4 ⎦

matrices C = ⎡ 10  42 ⎤ and D = ⎡ 4  5 ⎤ are decomposed from matrix Q.
             ⎣ −4  25 ⎦         ⎣ 6  2 ⎦

Each sensor node will be assigned rows from matrices A and C, which are the private matrices. Let Ni be the ith sensor node in the network, where "i" is the Id number of that sensor node. The rows are represented as Ai,−, which denotes the ith row of the matrix A. Each sensor node needs to keep data from just two rows, one from matrix A and one from matrix C:

Node 1 stores A1,− and C1,−, which is [3, 4] and [10, 42].
Node 2 stores A1,− and C2,−, which is [3, 4] and [−4, 25].
Node 3 stores A2,− and C1,−, which is [11, 18] and [10, 42].
Node 4 stores A2,− and C2,−, which is [11, 18] and [−4, 25].


4.6 Key Calculation

The major operation that enables the authentication algorithm is calculating the key, using which the mutual authentication of sensor nodes and the server can take place. Before calculating the key, the sensor node Ni and the server Sj compute the indexes of the columns of the public matrices B and D:

Ni computes the indexes j/√n and j%√n.
Sj computes the indexes i/√n and i%√n.

where n represents the number of nodes in the network. After this, both communicating parties calculate a key value, which must be the same. The resulting value becomes the secret key between the sensor node and the server. The key value is calculated as follows:

Ni computes (A(i/√n),− · B−,(j/√n)) × (C(i%√n),− · D−,(j%√n))
Sj computes (A(j/√n),− · B−,(i/√n)) × (C(j%√n),− · D−,(i%√n))

The secret key values will be the same for both entities, and this secret key can further be used for communication.
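A small C sketch of this calculation using the example matrices of Sect. 4.5 with 0-based ids (the index convention, ids and function names are our own; each side only ever multiplies its private rows of A and C with the public matrices B and D):

#include <stdio.h>

#define SQ 2  /* sqrt(n) for the n = 4 example of Sect. 4.5 */

static const int A[SQ][SQ] = {{3, 4}, {11, 18}};   /* private */
static const int B[SQ][SQ] = {{-1, 3}, {2, 4}};    /* public  */
static const int C[SQ][SQ] = {{10, 42}, {-4, 25}}; /* private */
static const int D[SQ][SQ] = {{4, 5}, {6, 2}};     /* public  */

/* (row r of X) dot (column c of Y) */
static int row_dot_col(const int X[SQ][SQ], int r,
                       const int Y[SQ][SQ], int c) {
    int sum = 0;
    for (int k = 0; k < SQ; k++)
        sum += X[r][k] * Y[k][c];
    return sum;
}

/* Key computed by the party with id `self` for peer id `peer`:
 * an entry of P = A.B times an entry of Q = C.D, i.e. K[self][peer]. */
static long key(int self, int peer) {
    int p = row_dot_col(A, self / SQ, B, peer / SQ);
    int q = row_dot_col(C, self % SQ, D, peer % SQ);
    return (long)p * q;
}

int main(void) {
    int i = 1, j = 2;  /* device id and server-chosen id (0-based) */
    printf("device computes: %ld\n", key(i, j));
    printf("server computes: %ld\n", key(j, i));
    /* both print 3350, i.e. K[1][2] of the example vault; the
     * equality relies on P and Q being symmetric */
    return 0;
}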

4.7 Authentication Mechanism

The proposed authentication mechanism is a three-way mutual authentication mechanism, which mutually authenticates the IoT device as well as the IoT server. Figure 1 shows the message transmission between the IoT device and the server. The authentication mechanism is triggered by the IoT device. When the server receives a communication request, it responds to the request by sending a test query message Q1. The device replies to the query message sent by the server and generates a new query message Q2 for the server. After receiving the query message from the IoT device, the server authenticates the reply message, and if the server finds it to be valid, it replies back to the device with a new query message Q3. The IoT device again authenticates the server's query message, and if it is found to be valid, then at this point both the server and the device have successfully authenticated each other. Now, the transmission of data can start between the device and the server. The detailed explanation of the authentication steps is given as follows:

Step 1: The communication process is triggered by the IoT device, which wishes to establish communication with the server. Before communication begins, the IoT device must authenticate itself. The device sends a trigger message T1 to the server, which consists of the device's unique Id. This trigger message contains nothing other than the device Id, and no sensitive information is transmitted.


Fig. 1 Three-way mutual authentication query message exchange briefly shows the flow of data during the authentication phase

T1 = [Device Id]

Step 2: The server checks if the device Id sent by the IoT device is a valid Id of a sensor node present in the network. If the device Id is valid, then the server selects a random Id r1 such that {0 < r1 ≤ n and r1 ≠ device Id}. Now the server calculates the secret key using the Kronecker product method [Kronecker(device_Id, r1)], such that the first node Id is the Id sent by the triggering sensor node and the second node Id is r1. The server replies back to the IoT device by sending the first query message Q1, which consists of the random number r1 and the sum of digits (sod1) of the calculated secret key.

Q1 = [r1, sod1]

Step 3: After receiving the query message Q1 from the server, the IoT device extracts the random number r1 and sod1 from the message. The device uses r1


as the second node Id and calculates the secret key using the Kronecker product method (Kronecker(device_Id, r1)). After calculating the secret key, it calculates the sum of digits of the key and compares it with the sod1 received from the server. If the value calculated by the device matches sod1, then the device generates a new random number r2 such that {0 < r2 ≤ n and r2 ≠ r1}. Now the device again calls the method Kronecker(r1, r2) and finds the new secret key. The device replies back to the server with another query message Q2. The query message Q2 consists of the random number r2 generated by the device and the sum of digits of the new secret key (sod2).

Q2 = [r2, sod2]

Step 4: After receiving the query message Q2 from the IoT device, the server extracts the random number r2 and sod2 from the message sent by the IoT device. The server calculates the secret key by calling Kronecker(r1, r2), using r1 as the first node Id and r2 as the second node Id. After calculating the final secret key, the server calculates the sum of digits of the calculated secret key as sod3. It matches sod2 sent by the IoT device against sod3. If both values match, then the server confirms the authenticity of the IoT device. It sends a confirmation message to the IoT device informing it that it has been successfully authenticated and can proceed with further communication.

Checks if sod2 == sod3
Sends confirmation message "Con"

The communication of data can start between the device and the server as soon as both the device and the server authenticate each other. Each authentication phase establishes a session key, which is shared by both the IoT device and the server. This session key helps the server identify the sender of the query message and the trigger message. Each authentication session between a device and the server is different for all IoT devices. The session key is unchanged for a single session but is different across sessions.
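The following C sketch simulates this exchange end to end. The vault values are taken from the Sect. 4.3 example; a real device would derive key(a, b) from its private rows as in Sect. 4.6 rather than from the full vault, and the 0-based id range, random choices and message framing are our own simplifications:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Example secure vault from Sect. 4.3; the server reads it directly,
 * while a device would compute entries from its private rows. */
static const long K[4][4] = {
    {1460,  670,  7300,  3350},
    { 670,  150,  3350,   750},
    {7300, 3350, 30660, 14070},
    {3350,  750, 14070,  3150},
};

static long key(int a, int b) { return K[a][b]; }

static int sod(long k) {              /* sum of decimal digits */
    int s = 0;
    for (k = labs(k); k > 0; k /= 10) s += (int)(k % 10);
    return s;
}

int main(void) {
    int n = 4, dev = 1;               /* T1 carries only dev */
    srand((unsigned)time(NULL));

    int r1 = rand() % n;              /* server picks r1 != dev */
    if (r1 == dev) r1 = (r1 + 1) % n;
    int sod1 = sod(key(dev, r1));     /* Q1 = [r1, sod1] */

    /* device independently recomputes Kronecker(dev, r1), checks sod1 */
    if (sod(key(dev, r1)) != sod1) return 1;
    int r2 = rand() % n;              /* device picks r2 != r1 */
    if (r2 == r1) r2 = (r2 + 1) % n;
    int sod2 = sod(key(r1, r2));      /* Q2 = [r2, sod2] */

    /* server independently recomputes Kronecker(r1, r2), checks sod2 */
    if (sod(key(r1, r2)) != sod2) return 1;
    puts("Con: mutual authentication successful");
    return 0;
}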

4.8 Changing the Secure Vault

The values of the secure vault are changed after a certain amount of time. This depends on the admin of the system. If the admin finds any vulnerability inside the network, or if the admin detects a sudden change in the flow of data, then the admin can completely change the values of the secure vault to prevent a security attack. The new values of the secure vault do not have any relationship with the previous values of the vault. The only requirement when changing the values of the vault is to take


symmetric matrices, as the Kronecker product will generate the same secret key value for symmetric matrices only.

5 Performance Analysis

In this section, we analyse our algorithm in terms of storage cost (the number of keys stored in a single sensor node), communication cost (the amount of extra data that must be transmitted during the generation of the secret key) and computation cost (the number of computations required for a single authentication session).

5.1 Storage Cost

The storage cost is the number of keys that are stored inside a sensor node. In our algorithm, we have decomposed our matrices into sizes √m × √m and √n × √n. The private matrices A of size √m × √m and C of size √n × √n are used to select the data to be stored inside a sensor node. Each sensor node only stores a single row from each of the two matrices, which can be calculated as √m + √n. Since N is the number of sensor nodes present inside the network at the time of deployment, the size of the Kronecker matrix or the secure vault is N × N, and since only a part of the matrix is stored inside the sensor node, the storage cost of each sensor node can be termed as O(√N). Figure 2 shows the storage cost analysis.

Fig. 2 Storage cost analysis of proposed authentication mechanism


5.2 Communication Cost

The communication cost depends upon the number of messages transmitted in a single authentication phase. In step 1 of the authentication process, the device only sends its Id to the server, which counts as O(1). In steps 2 and 3, a single query message is transmitted consisting of only two values each, i.e. a random number and a sum of digits, which counts as O(1). Hence, the total communication cost can be termed as O(1).

5.3 Computation Cost

The computation cost is calculated by considering the number of multiplications required to compute the secret key at the server and device sites. The number of multiplications depends entirely on the sizes of the private matrices A and C. The private matrices A of size √m × √m and C of size √n × √n are used to perform the multiplications. The cost of one Kronecker operation is √m + √n. We analysed the computation cost of our algorithm by assuming that the sizes of matrices A and C are the same, i.e. m = n. In the results, we see that when the size of both matrices is N, the computation cost for one operation to calculate the secret key is √N. Since the Kronecker() method is called four times for a single authentication phase, the computation cost is 4 × √N. Hence, the computation cost is O(√N). Figure 3 shows the computation cost analysis.

Fig. 3 Computation cost analysis of proposed authentication mechanism

6 Security Analysis

Our algorithm is safe from most attacks; some of them are listed below. In this section, we show how our algorithm prevents these attacks.


6.1 Construction of Kronecker Matrix

A problem may arise for the security of the network if the attacker somehow manages to reconstruct the secure vault matrix. Since the two submatrices B and D are publicly available, if the attacker successfully gained the values of matrices A and C, he could construct the secure vault matrix and break the entire security of the system. The attacker can gain access to one of the nodes and can fetch the row values stored in that node for the matrices A and C. Even if he achieves this, it is impossible for the attacker to recreate the entire matrix, because there will be an infinite number of solutions for the equation obtained after tampering with a single node:

A·B ⊗ C·D = P ⊗ Q = K

⎛⎡  3    4  ⎤ · ⎡ −1  3 ⎤⎞ ⊗ ⎛⎡ 10   42  ⎤ · ⎡ 4  5 ⎤⎞ = K
⎝⎣ A21  A22 ⎦   ⎣  2  4 ⎦⎠   ⎝⎣ C21  C22 ⎦   ⎣ 6  2 ⎦⎠

With only the first rows of A and C known (here [3, 4] and [10, 42]), the unknown entries A21, A22, C21 and C22 leave most entries of K undetermined.

6.2 Man-in-the-Middle Attack

In our algorithm, we only transmit a random number and a sum of digits, and the attacker cannot use those values to create the next reply message, either to the server or to the IoT device. If the attacker pretends to be an IoT device, as soon as it sends the Id to the server, the server is going to respond with a random number and a sod1, which give the attacker no usable information for constructing a valid reply.

6.3 Next Password Prediction

In our algorithm, since each session uses different device Id values, it generates different sum-of-digits (sod) values, as each pair of numbers has a different key value. Hence, it is impossible for the attacker to predict the next session key or secret key.


6.4 Side-Channel Attack

Since our algorithm does not utilize any encryption technique, the power consumption due to cryptographic operations is negligible. The memory and energy consumption is minimal, as only a small amount of data is stored inside the IoT device, and only a few multiplications are required to generate the secret keys in each session. Hence, it is infeasible for the attacker to predict or break the system using such limited hardware-level information.

7 Comparison of Authentication Mechanism

In this section, we compare our algorithm with the mechanism proposed by Shah et al. [5]. Their algorithm also uses a secure vault, which stores a set of keys. We compare the two algorithms based on the following criteria.

7.1 Storage of Secure Vault

Shah et al. [5] use a secure vault to store n keys, each of size m. Our approach uses a secure vault that holds an N × N symmetric matrix. If we assume that the values of n and m are the same for Shah's algorithm (n = m = N), then the total size of the secure vault turns out to be N². Similarly, in our algorithm, the size of the secure vault is N². The secure vault in both algorithms is stored completely in the server's secure database. In Shah's [5] approach, the entire secure vault is also stored inside the IoT device; however, in our approach, only two rows are stored inside the IoT device, which decreases the storage cost of IoT devices. The storage complexity of IoT devices for Shah's approach is O(N²), and in the case of our approach, the storage cost is just O(√N). The comparison of the storage of the secure vault in both approaches can be seen in the graphs below (Fig. 4).

Fig. 4 Comparison of storage cost of our approach (a) with Shah et al.'s [5] authentication approach (b)

Table 1 Comparison of our algorithm with the Shah et al. [5] authentication scheme

Algorithm used     Storage cost    Computation cost
Shah et al. [5]    O(N²)           Encryption cost + hashing cost
Our approach       O(√N)           O(1)

7.2 Computation Cost

Shah et al. [5] use encryption and hash methods, which increase the computation cost of each IoT device. In our approach, only simple multiplications are used, which do not overload the computational capability of the device. Moreover, in Shah's approach, the message transmitted between the server and the IoT device consists of an encrypted message concatenated with hash values. In our approach, only two integer numbers are transmitted, which do not require any encryption technique (Table 1).

8 Conclusion and Future Work

We have proposed a new authentication algorithm, which uses a symmetric matrix stored in a secure vault. We use the Kronecker product operation to generate the secret key. The purpose of this mechanism is to reduce the storage cost, computation cost as well as the communication cost for IoT devices, which are resource-constrained. The communication cost is reduced to O(1), as there is minimal communication required. The storage cost is reduced to O(√N), which is a major contribution of this algorithm. The computation cost is also reduced to O(√N), where N is the total number of sensor nodes present in the network. We have also seen that the algorithm is resistant to dictionary and side-channel attacks, and that it protects the network from man-in-the-middle and next password prediction attacks. In future work, we will try to make the algorithm more robust against other attacks, such as node capture attacks.

References

1. El-hajj M, Chamoun M, Fadlallah A, Serhrouchni A (2017) Taxonomy of authentication techniques in internet of things (IoT). In: 2017 IEEE 15th student conference on research and development (SCOReD). IEEE, pp 67–71
2. Albalawi A, Almrshed A, Badhib A, Alshehri S (2019) A survey on authentication techniques for the internet of things. In: 2019 international conference on computer and information sciences (ICCIS). IEEE, pp 1–5


3. Saadeh M, Sleit A, Qatawneh M, Almobaideen W (2016) Authentication techniques for the internet of things: a survey. In: 2016 cybersecurity and cyber forensics conference (CCC). IEEE, pp 28–34
4. Atwady Y, Hammoudeh M (2017) A survey on authentication techniques for the internet of things. In: Proceedings of the international conference on future networks and distributed systems. ACM
5. Shah T, Venkatesan S (2018) Authentication of IoT device and IoT server using secure vaults. In: 2018 17th IEEE international conference on trust, security and privacy in computing and communications/12th IEEE international conference on big data science and engineering (TrustCom/BigDataSE). IEEE, pp 819–824
6. Li X, Yan F, Zuo F, Zeng Q, Luo L (2019) Touch well before use: intuitive and secure authentication for IoT devices. In: The 25th annual international conference on mobile computing and networking, pp 1–17
7. Alizai ZA, Tareen NF, Jadoon I (2018) Improved IoT device authentication scheme using device capability and digital signatures. In: 2018 international conference on applied and engineering mathematics (ICAEM). IEEE, pp 1–5
8. Dammak M, Boudia ORM, Messous MA, Senouci SM, Gransart C (2019) Token-based lightweight authentication to secure IoT networks. In: 2019 16th IEEE annual consumer communications and networking conference (CCNC). IEEE, pp 1–4
9. Gargaro G, Trinchini P (2019) Adaptive enhanced environment-aware authentication for IoT devices. U.S. Patent 10,225,261, issued 5 Mar 2019
10. Agrawal S, Ahlawat P (2020) Key management schemes in internet of things: a matrix approach. In: Handbook of wireless sensor networks: issues and challenges in current scenarios. Springer, Cham, pp 381–400
11. Kamal R, Ahlawat P (2019) Improved matrix based key management scheme for wireless sensor network security. In: 2019 international conference on issues and challenges in intelligent computing techniques (ICICT), vol 1. IEEE, pp 1–5
12. Tsai IC, Yu CM, Yokota H, Kuo SY (2017) Key management in internet of things via Kronecker product. In: 2017 IEEE 22nd Pacific rim international symposium on dependable computing (PRDC). IEEE, pp 118–124
13. El-Hajj M, Chamoun M, Fadlallah A, Serhrouchni A (2017) Analysis of authentication techniques in Internet of Things (IoT). In: 2017 1st cyber security in networking conference (CSNet). IEEE, pp 1–3
14. Lee CH, Kim KH (2018) Implementation of IoT system using block chain with authentication and data protection. In: 2018 international conference on information networking (ICOIN). IEEE

Chapter 7

Secure and Decentralized Crowdfunding Mechanism Based on Blockchain Technology Swati Kumari and Keyur Parmar

1 Introduction

Crowdfunding refers to the idea of raising funds for a project or business venture from investors. To facilitate crowdfunding, there are several crowdfunding platforms that bring investors and entrepreneurs together, through which businesses get early-stage support. There are three types of crowdfunding:

1. Reward crowdfunding: In reward crowdfunding, investors receive gifts or product samples if they pledge a certain amount to a business venture. Crowdfunding platforms such as Kickstarter [1] follow reward crowdfunding.
2. Debt crowdfunding: Debt crowdfunding is similar to a bank loan, where the loan has to be repaid within a specified time. Here, investors provide loans to entrepreneurs. Debt crowdfunding is preferred over bank loans as it does not require manual intervention. Blockchain-based crowdfunding is less time-consuming as compared to the conventional crowdfunding mechanism. LendingClub [2] offers debt crowdfunding.
3. Equity crowdfunding: In equity crowdfunding, investors get the company's shares in return for the investment. Crowdfunding platforms such as Wefunder [3] offer equity crowdfunding.

The conventional mechanism of crowdfunding works as follows:

1. Entrepreneurs share the business ideas on crowdfunding platforms.


2. Entrepreneurs mention a tentative amount of money needed to start the project or business venture.
3. The products and/or business ideas are promoted to attract investments in the projects.
4. Investors analyze the published business ideas and invest in those that appear promising.
5. In return for their investment, investors get rewards from entrepreneurs depending upon the type of crowdfunding.

There are several crowdfunding platforms, such as Kickstarter [1], Indiegogo [4], GoFundMe [5] and LendingClub [2]. Different crowdfunding platforms provide different features to the users, such as crowdfunding perks, social media integration and customization of the crowdfunding page. The facilities provided by crowdfunding platforms help entrepreneurs to showcase their business ideas effectively. Some of the crowdfunding platforms, e.g., Kickstarter, are based on the All or Nothing model, i.e., an entrepreneur gets the fund if and only if the intended amount is raised.

Conventional crowdfunding platforms play a crucial role in the crowdfunding process by being the intermediary between entrepreneurs and investors. However, conventional crowdfunding platforms charge a significant platform fee for their services. There are mainly two types of fees collected by crowdfunding platforms:

1. Platform fee: for example, 4–5% of the total funds raised.
2. Payment processing fee: for example, 3% of each transaction.

Platforms such as Kickstarter [1] charge 5% of the funds raised as a platform fee if a project is successfully funded. In addition to the platform fee, a payment processing fee is also collected, which ranges from 3 to 5%, depending upon the location of the business venture. Indiegogo [4] charges a 5% platform fee on all funds raised if a user is in the 'InDemand program' and ran the campaign on Indiegogo. The 'InDemand program' allows campaigns to raise funds even after the funding goal of the campaign has been reached within a specified time. Along with the platform fee, a transaction fee is also charged by the payment processors for each contribution a project receives. The transaction fee of Indiegogo varies with the location of the business venture and the currency.

In this paper, we propose a secure and decentralized crowdfunding mechanism based on blockchain technology. The proposed approach eliminates the role of conventional crowdfunding platforms that act as a trusted third party between entrepreneurs and investors. The conventional crowdfunding platforms charge a platform fee and a payment processing fee. Elimination of such trusted platforms allows the entrepreneurs to use the total funds raised from the investors. The entrepreneurs are no longer required to pay the platform fee and payment processing fee to intermediary platforms, such as Kickstarter, for providing the services.

The organization of the paper is as follows: Sect. 2 discusses the literature survey related to blockchain. Section 3 provides the preliminaries. In Sect. 4, we propose the blockchain-based crowdfunding mechanism. In Sect. 5, we analyze the proposed approach. In Sect. 6, we conclude the paper with future research directions.


2 Related Works

Satoshi Nakamoto proposed the first application of blockchain technology in electronic cash, known as Bitcoin [6]. Bitcoin eliminates the need for a trusted intermediary (bank) in the transaction process. Bitcoin allows users to be pseudonymous and provides a decentralized, fast and secure way to transfer money. To add transactions to the blockchain, nodes with computational resources, called miners, validate every transaction to avoid illegal transactions and double-spend attacks. A miner receives a reward for appending a block to the blockchain. The proposed approach uses blockchain technology to facilitate fund transfer between entrepreneurs and investors without the need for intermediary crowdfunding platforms.

Zhao and Coffie [7] proposed ways in which blockchain technology can be used to solve the issues related to crowdfunding contracts, such as illegal transactions, investor abuse and the role of crowdfunding platforms. The crowdfunding platforms act as intermediaries and could manipulate the crowdfunding contracts. The authors describe the ways in which features of blockchain technology can be used to securely manage the relationship between creators, backers and crowdfunding platforms, and the ways to modify the role of crowdfunding platforms using blockchain technology. However, the issue of the platform fee and payment processing fee collected by the crowdfunding platforms is left unaddressed. The authors briefly discuss the future possibility of eliminating intermediary crowdfunding platforms using blockchain technology.

Saadat et al. [8] described the issues related to crowdfunding campaigns, such as irregularities in crowdfunding campaigns, frauds and delays in the completion of crowdfunding campaigns. To resolve such issues, the authors proposed Ethereum-based smart contracts in crowdfunding platforms that allow the contracts to be executed automatically when predefined conditions are met. The smart contract prevents fraud and ensures that the projects are delivered within the specified time limit. However, the issue related to the platform fee and payment processing fee collected by the intermediary crowdfunding platforms is again left unaddressed.

Li et al. [9] discuss the issues prevailing in existing crowdsourcing systems that are dependent on central servers. Dependency on a central server exposes a system to threats such as a single point of failure, distributed denial-of-service attacks and Sybil attacks. A high service fee is also one of the issues with existing crowdsourcing platforms. The authors propose and implement a blockchain-based decentralized crowdsourcing framework in which crowdsourcing is performed without any third party's involvement.

3 Preliminaries

In this section, we discuss the details of blockchain technology, Ethereum, smart contracts and the cryptographic techniques used in blockchain.


3.1 Blockchain

Blockchain is an immutable and decentralized ledger that allows transactions to take place without a trusted intermediary. A blockchain is a sequence of blocks that stores valid transactions. Blockchain technology combined with other cryptographic techniques creates a cryptocurrency, namely Bitcoin [6]. Bitcoin eliminates the role of central authorities, such as banks, in the transaction procedure. In Bitcoin, transactions are recorded in a public blockchain and are visible to all the nodes on the Bitcoin network.

Resourceful nodes, called miners, verify each transaction to ensure the authenticity and integrity of transactions. Miners group transactions into a block. Transactions in a block are stored in the form of a Merkle tree [10]. To mine Bitcoins and append transactions to the blockchain, miners solve a cryptographic puzzle. In the cryptographic puzzle, miners generate the hash of a newly created block along with a cryptographic nonce. The nonce is varied until the hash value becomes smaller than or equal to the target value. The target value is a 256-bit number with a few initial zeros (e.g., the initial 40 bits being zero). The cryptographic hash function used in Bitcoin is Secure Hash Algorithm-256 (SHA-256). Once a miner calculates the required hash value, it broadcasts the block along with the hash value and the nonce in the network. All other miners in the network can verify the published block by comparing the hash value given in the block with the target value. The target value is changed after every 2016 blocks [11]. After verification, miners append the block to their blockchain ledger.

The miner whose block gets appended onto the blockchain ledger gets a reward in the form of Bitcoins. The reward value is halved after every 210,000 blocks [11]. While generating a block, the miner includes a reward-generating transaction as the first transaction in the block. The recipient address of the reward-generating transaction is the miner's own address. In addition to the block reward, the miner also receives the transaction fee associated with each transaction in the block. However, the transaction fee is optional in Bitcoin. Including a transaction fee is recommended, as a transaction with a low or no fee could suffer starvation, because miners prefer to include transactions with higher transaction fees in the newly created block.

The process of solving the cryptographic puzzle is referred to as Proof-of-Work (PoW). PoW is necessary in order to build consensus among miners in the blockchain network. In PoW, the miner with more computational resources has a better chance of appending its block to the blockchain ledger by being the first to solve the cryptographic puzzle. Bitcoin uses asymmetric-key cryptography to digitally sign the transactions. The Elliptic Curve Digital Signature Algorithm (ECDSA) [12] is used to sign the transactions to ensure that Bitcoins are spent only by their owner.


3.2 Smart Contract

A smart contract [13] is a computer program that contains and controls an agreement between two parties without any trusted third party. A smart contract self-verifies the terms and conditions of the agreement. When the predefined conditions are satisfied, the smart contract gets executed automatically. Smart contracts can be written in different programming languages, such as Solidity. The use of a smart contract reduces the transaction cost, as the trusted intermediary controlling the negotiation between the two parties is eliminated.

3.3 Ethereum

Ethereum [14] is an open-source public blockchain platform used to implement smart contracts. Smart contracts run on the Ethereum Virtual Machine (EVM). Ethereum has a native cryptocurrency, namely Ether, that is used to pay for computational services on the Ethereum network. Every time a smart contract is executed, the user needs to pay an execution fee, which is termed 'gas'. Every contract specifies a 'gas limit', which is the maximum amount of gas the contract can use for its computation.

3.4 Asymmetric-Key Cryptography

Asymmetric-key cryptography is also known as public-key cryptography. A public key and a private key are used in asymmetric-key cryptography. The private key is kept secret, while the public key is made public. In blockchain, asymmetric-key cryptography is used to ensure the integrity and authenticity of transactions. Each transaction is signed using the sender's private key, and the signatures are verified using the sender's public key.

3.5 Hash Function

A hash function is a function that maps a variable-size input to a fixed-size output. The output of a hash function is termed a message digest. The characteristics of a hash function are:

• Preimage resistant, i.e., given an output, it is computationally infeasible to compute the corresponding input. In blockchain, given the desired hash value that satisfies the cryptographic puzzle, it is computationally infeasible to compute the nonce that produces the desired hash value.

Fig. 1 Merkle tree: leaf nodes H1(T1), H2(T2) and H3(T3); internal nodes H4(H1, H2) and H5(H2, H3); root H6(H4, H5)

• Second preimage resistant, which implies that, given an input, it is computationally infeasible to find another input that produces the same output. In blockchain, given the desired hash value that satisfies the cryptographic puzzle, it is computationally infeasible to compute another nonce that produces the same hash value.
• Collision resistant, i.e., it is computationally infeasible to find two inputs that produce the same output. In blockchain, it is infeasible to find two nonces that produce the same hash value satisfying the cryptographic puzzle.

Blockchain uses the SHA-256 hash function to produce a 256-bit hash code.

3.6 Merkle Tree

A Merkle tree [10] is a data structure in which hash codes of different transactions are grouped together to produce a hash code referred to as the Merkle root. In a Merkle tree, leaf nodes contain transactions and the hash codes of transactions, and internal nodes contain the hash codes of their child nodes. If any transaction at a leaf node of the Merkle tree is altered, the Merkle root changes as well. If an adversary manipulates the transactions available in the blockchain, it will be easily detected by the miners using the Merkle root. Therefore, the Merkle tree provides transaction integrity in Bitcoin. Figure 1 shows the structure of a Merkle tree, where leaf nodes H1, H2, H3 are the hashes of transactions T1, T2, T3, respectively, and internal nodes H4 and H5 are the hashes generated using the hashes of child nodes. Node H6 is the root node.
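The following C sketch mirrors the small tree of Fig. 1 using OpenSSL's SHA256() (compile with -lcrypto). The three string "transactions" and the pairing H5 = H(H2 || H3) follow the figure; real systems serialize transactions and handle odd leaf counts differently:

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* hash of the concatenation of two 32-byte child hashes */
static void hash_pair(const unsigned char *l, const unsigned char *r,
                      unsigned char out[SHA256_DIGEST_LENGTH]) {
    unsigned char buf[2 * SHA256_DIGEST_LENGTH];
    memcpy(buf, l, SHA256_DIGEST_LENGTH);
    memcpy(buf + SHA256_DIGEST_LENGTH, r, SHA256_DIGEST_LENGTH);
    SHA256(buf, sizeof buf, out);
}

int main(void) {
    const char *tx[3] = {"T1", "T2", "T3"};
    unsigned char h[6][SHA256_DIGEST_LENGTH];  /* H1..H6 as in Fig. 1 */

    for (int i = 0; i < 3; i++)                /* leaves H1, H2, H3 */
        SHA256((const unsigned char *)tx[i], strlen(tx[i]), h[i]);
    hash_pair(h[0], h[1], h[3]);               /* H4 = H(H1 || H2) */
    hash_pair(h[1], h[2], h[4]);               /* H5 = H(H2 || H3) */
    hash_pair(h[3], h[4], h[5]);               /* H6 = Merkle root */

    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", h[5][i]);               /* print the root */
    printf("\n");
    return 0;
}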

3.7 Consensus Algorithms

To reach consensus in a decentralized network, the algorithms mentioned below are used in a blockchain network.

• Proof-of-Work (PoW): In PoW, a miner needs to solve a computationally difficult cryptographic puzzle. A miner competes with other miners to solve the puzzle first in order to append a block to the blockchain


ledger. Miners generally try to find a number, called a nonce, such that the hash code of the block, containing the selected transactions and the nonce, becomes smaller than or equal to the given target value (a toy search loop is sketched after this list). The SHA-256 hash algorithm is used to generate the hash code:

H(nonce || transactions) ≤ target value    (1)

• Proof-of-Stake (PoS): In the PoS consensus algorithm, the publisher of the next block is chosen based on the amount of stake the miner has invested in the blockchain network. In PoS, the higher the stake a user has put into the network, the higher the chance that they will behave honestly. Unlike PoW, PoS does not require any computational power. Therefore, there is no block reward system in PoS. The only reward for publishing a new block comes from the transaction fee.
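A toy illustration of the PoW loop of Eq. (1) in C, again using OpenSSL's SHA256() (compile with -lcrypto); for simplicity the target comparison is reduced to counting leading zero bytes, and the transaction string and difficulty are invented for the demo:

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void) {
    const char *transactions = "A pays B 1 BTC; C pays D 2 BTC";
    const int difficulty = 2;   /* required leading zero bytes, kept tiny */
    unsigned char md[SHA256_DIGEST_LENGTH];
    char block[256];

    for (unsigned long nonce = 0;; nonce++) {
        /* hash the candidate block: nonce || transactions */
        int len = snprintf(block, sizeof block, "%lu%s",
                           nonce, transactions);
        SHA256((const unsigned char *)block, (size_t)len, md);

        int ok = 1;             /* check the simplified target */
        for (int i = 0; i < difficulty; i++)
            if (md[i] != 0) { ok = 0; break; }
        if (ok) {               /* hash <= target: block can be published */
            printf("nonce found: %lu\n", nonce);
            return 0;
        }
    }
}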

4 The Proposed Crowdfunding Mechanism Based on Blockchain Technology

The proposed crowdfunding mechanism involves two parties: creators, who are the entrepreneurs with business ideas, and backers, who are the investors. The proposed mechanism eliminates the need for a trusted intermediary between the creators and backers. In the proposed crowdfunding platform, creators and backers register themselves. A creator launches a campaign and provides the details about a project or a product. The creator provides a tentative amount of money that is needed for the project, termed the funding goal. A backer interested in a project pledges to invest in the project. If the funding goal of the project is reached within a specified time (e.g., 30 days), the money pledged by the backers will be given to the creator. If the business venture does not raise the funding goal within the specified time, the fund will be returned to the backers. As a reward for investing in a project, the backers will be given incentives in the form of cryptocurrency.

To append transactions to the blockchain ledger, miners first verify the transactions. To facilitate the transaction verification process, validators are needed in the blockchain network. In the proposed system, the process of transaction validation is performed by the backers on the blockchain network. To build consensus, the proposed mechanism uses the PoS consensus algorithm. In PoS, validators are selected based on the amount of money invested, i.e., the more money a backer has invested, the higher the chance of being selected as a validator. For validating transactions, the validators get a block reward. The following algorithms describe the working of the proposed crowdfunding mechanism based on blockchain technology (Table 1).


Table 1 Symbols used in algorithms

Symbol        Description
IDc           ID of creator
IDb           ID of backer
IDbn          ID of business idea
signpr(bn)    Business idea signed with the creator's private key
fg            Funding goal
t             Timestamp
frIDbn        Fundraised for a particular business idea
fgIDbn        Funding goal of a business idea
AMTescrow     Amount stored in escrow
AMTp          Amount pledged by a backer
IDtx          Transaction ID

Algorithm 1: Creator Node
1. IDc ← Register(details)
2. (IDbn, t) ← DApp(signpr(bn) || fg)
3. if frIDbn ≥ fgIDbn && waitTime ≤ 30 days then
4.     IDc ← AMTescrow
5. else
6.     IDb ← AMTescrow
7. end if

• A creator node registers on DApp via a smart contract and provides the necessary details (e.g., name, contact information, etc.). To write a smart contract, we can use a programming language such as Solidity. The registration process generates a unique ID for the creator of the business idea.
• A creator node signs a business idea using its private key and then publishes it via a smart contract that handles all the interactions between creators and backers. In addition, the creator provides a funding goal. The registration process generates a unique business ID for the business idea, along with the timestamp of publishing the idea.
• If a business idea is suitable for a backer, the fund raised by the creator is greater than or equal to the funding goal and the wait time is less than or equal to 30 days, then the creator receives the pledged fund that is temporarily stored in escrow. Otherwise, the pledged amount is returned to the backers.


Algorithm 2: Backer Node
1. IDb ← Register(details)
2. for IDbn = 1 to n do
3.     Analyse IDbn
4.     if IDbn is suitable then
5.         IDtx ← DApp(IDbn, AMTp)
6.         AMTescrow ← AMTp
7.     end if
8. end for

• A backer registers on DApp via a smart contract and provides the necessary details (e.g., name, contact information, etc.). The registration process generates a unique ID for the backer.
• The backer analyzes business ideas that are published by the creators.
• If a backer finds a business idea convincing, then the backer pledges an amount to the business. The transaction from the backer to escrow generates a unique transaction ID.

Algorithm 3: Decentralized Application (DApp)
1. if frIDbn ≥ fgIDbn && waitTime ≤ 30 days then
2.     Miners(IDb, IDtx)
3.     if IDtx is valid then
4.         IDc ← AMTescrow
5.         Blockchain ← (IDbn, t, IDtx, IDb, IDc)
6.     else
7.         Discard IDtx
8.     end if
9. else if frIDbn < fgIDbn && waitTime < 30 days then
10.     wait
11. else if frIDbn < fgIDbn && waitTime > 30 days then
12.     IDb ← AMTescrow
13. end if

• If the fund raised is greater than or equal to the funding goal of the business and the wait time is less than or equal to 30 days, then the miner validates the transaction from the backer to escrow to make sure that it is not a double spend. The miner also provides its unique ID in order to receive the mining reward.
• If the transaction is valid, then the creator receives the pledged amount from escrow.
• Store the business idea, timestamp, ID of the transaction, ID of the backer and ID of the creator in the blockchain.
• If the transaction is not valid, then the miners discard the transaction.
• If the fund raised by a business is less than the funding goal of the business idea and the wait time is less than 30 days, then wait for the fund.


• If fundraised is less than funding goal and wait time is greater than 30 days, then backers get the pledged amount back from escrow (Fig. 2).

Fig. 2 A diagrammatic representation of the proposed blockchain-based crowdfunding mechanism
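The settlement rule that Algorithm 3 applies can also be written as a small C sketch; in the proposed system this logic would live inside the smart contract, and the struct fields and the way the 30-day window is encoded are our own simplification:

#include <stdio.h>

struct campaign {
    long fr;        /* fund raised so far (frIDbn) */
    long fg;        /* funding goal (fgIDbn) */
    int  wait_days; /* days since the campaign was launched */
};

/* 1: release escrow to the creator, -1: refund the backers,
 * 0: keep waiting for more pledges. */
static int settle(const struct campaign *c) {
    if (c->fr >= c->fg && c->wait_days <= 30)
        return 1;
    if (c->fr < c->fg && c->wait_days > 30)
        return -1;
    return 0;
}

int main(void) {
    struct campaign c = {.fr = 120, .fg = 100, .wait_days = 12};
    printf("%d\n", settle(&c));   /* prints 1: goal reached in time */
    return 0;
}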


5 Discussion

• Security: The proposed blockchain-based crowdfunding mechanism provides security because all the transactions are verified by miners before the transactions are stored in the blockchain. The fund stored in escrow is given to the creator only when miners validate the transaction. Thus, double spends and illegal transactions are prevented. Also, a backer can be sure that, in case the funding goal of a business is not reached, the fund pledged by the backer will be returned, as this is written in the immutable smart contract.
• Elimination of trusted third party: The proposed crowdfunding mechanism is decentralized as it eliminates the role of conventional crowdfunding platforms. All the interactions between creators and backers take place through smart contracts.
• Cost-effective: With the blockchain-based crowdfunding mechanism, creators use the full amount raised from the backers. The creators need not pay platform fees and payment processing fees to intermediary crowdfunding platforms.

6 Conclusion and Future Work

The proposed crowdfunding mechanism based on blockchain technology eliminates the need for trusted intermediary crowdfunding platforms, such as Kickstarter and Indiegogo. Elimination of conventional crowdfunding platforms allows creators to make use of the total funds raised from backers. The proposed mechanism provides a fast and secure way to transfer funds from the backers to the creators. However, there are a few issues that need to be addressed by the proposed crowdfunding mechanism. The first issue is to protect the business ideas from being copied once they are published on the blockchain network. In the proposed crowdfunding mechanism, miners validate every transaction before storing the transactions in the blockchain to avoid double spends and illegal transactions. A secure transaction validation mechanism needs to be formulated so that only valid transactions are stored in the blockchain. A reward mechanism also needs to be designed for miners as well as for backers in such a way that a backer gets a reward proportional to the amount invested.

References

1. Kickstarter. https://www.kickstarter.com/, last accessed: 24 Dec 2019
2. LendingClub. https://www.lendingclub.com/, last accessed: 24 Dec 2019
3. Wefunder. https://wefunder.com/, last accessed: 24 Dec 2019
4. Indiegogo. https://www.indiegogo.com/, last accessed: 24 Dec 2019
5. Gofundme. https://www.gofundme.com/, last accessed: 24 Dec 2019
6. Nakamoto S (2008) Bitcoin: a peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, retrieved 24 Dec 2019
7. Zhao H, Coffie CP (2018) The applications of blockchain technology in crowdfunding contract. Available at SSRN 3133176


8. Saadat MN, Halim SA, Osman H, Nassr RM, Zuhairi MF (2019) Blockchain based crowdfunding systems. Indonesian J Electr Eng Comput Sci 15(1):409–413
9. Li M, Weng J, Yang A, Lu W, Zhang Y, Hou L, Liu JN, Xiang Y, Deng RH (2019) CrowdBC: a blockchain-based decentralized framework for crowdsourcing. IEEE Trans Parallel Distrib Syst 30(6):1251–1266
10. Merkle RC (1988) A digital signature based on a conventional encryption function. In: Pomerance C (ed) Advances in cryptology—CRYPTO '87. Springer, Berlin, pp 369–378
11. Conti M, Kumar ES, Lal C, Ruj S (2018) A survey on security and privacy issues of Bitcoin. IEEE Commun Surv Tutor 20(4):3416–3452
12. Johnson D, Menezes A, Vanstone S (2001) The elliptic curve digital signature algorithm (ECDSA). Int J Inf Secur 1(1):36–63
13. Szabo N (1997) Formalizing and securing relationships on public networks. First Monday 2(9)
14. Buterin V (2014) A next-generation smart contract and decentralized application platform. White paper 3:37

Chapter 8

Efficient Use of Randomisation Algorithms for Probability Prediction in Baccarat Using: Monte Carlo and Las Vegas Method Avani Jindal, Janhvi Joshi, Nikhil Sajwan, Naman Adlakha, and Sandeep Pratap Singh

1 Introduction

The fields of cryptography, load balancing, and parallel and distributed computing make heavy use of randomised algorithms in their implementations. However, these algorithms and their subclasses have not been explored much. A randomised algorithm receives some random values along with the input data, which are used for making random choices, hence affecting the algorithm's behaviour. This way, it ultimately achieves a good performance in the average case. Mostly, the execution time is a random variable, even when the input given is fixed. We usually talk about the expected worst-case performance of such algorithms, that is, the average time taken when the algorithm is given the worst input of a fixed size (Fig. 1).

To give a formal definition of a randomised algorithm [1], it can be thought of as a machine M which calculates M(x, r). Here, x signifies the problem input and r signifies the sequence of random bits. M is a typical random-access machine with a memory space on which read, write and arithmetic operations on integers of up to O(log n) bits each take constant time. We also assume that generating a random integer of size O(log n) takes constant time. The running time of the algorithm and the number of constant-time operations performed depend on the random bits r, so we formally define the running time as a random variable.


Fig. 1 Randomisation algorithm illustration

Let a probability space Ω be defined consisting of all possible sequences r, each having a probability Pr[r]. On some input x, the running time is given as an expected value E_r[time(M(x, r))]. Here, for any X,

E_r[X] = Σ_{r∈Ω} X(r) Pr[r]    (1)

If we assume a deterministic algorithm runs in time O(f(n)), where n = |x|, its run time depends only on the size of the input. Our randomised algorithm instead runs in expected time O(f(n)), meaning that E_r[time(M(x, r))] = O(f(|x|)) for all inputs x. This differs from conventional worst-case analysis, where there is no r and no expectation, and from average-case analysis, where again there is no r and the expectation is taken only over some distribution on x.

1.1 Methods to Generate Randomness

There are typically two methods for generating the random bits r, which are essential inputs for any randomised algorithm. True random numbers are values generated using some type of physical phenomenon, as described in [2–4]. The other method generates pseudo-random numbers from a seed value; the seed itself is chosen at random, oftentimes using physical randomness, and other sources such as quantum random walks [5] are also used. Pseudo-random numbers can be cryptographically secure, but for the purposes of this paper we chose statistical pseudo-randomness to generate the needed random inputs, since the security of the application was not the main focus of the research.
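As a minimal illustration of the seeded pseudo-random approach described above (our own sketch, not code from the paper; the function name rand_card is ours), the following C snippet seeds the standard rand() generator from the current time and draws card indices in [0, 415], one card from an 8-deck shoe of 416 cards:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Draw a pseudo-random card index in [0, 415]. */
    static int rand_card(void)
    {
        return rand() % 416;   /* slight modulo bias, acceptable here */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));   /* seed the PRNG once from the clock */
        for (int i = 0; i < 5; i++)
            printf("card %d\n", rand_card());
        return 0;
    }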

1.2 Classification of Randomisation Algorithms

Monte Carlo. A Monte Carlo algorithm may return the correct result, or an incorrect result with some probability. The resources it uses are bounded, so it has a deterministic, fixed run time, and it gives better probability results when run for a larger number of iterations. This class can also be used, for example, to estimate the value of Pi. Las Vegas algorithms, which we discuss next, are said to be a subset of Monte Carlo algorithms.

Las Vegas. A Las Vegas algorithm always returns the correct or optimal result, and informs the caller when it fails. Its run time differs at each run, even for the same input, since it depends on random values. When László Babai introduced the term, he explained that such an algorithm can be thought of as depending on a series of coin flips to determine its next step. A classic example of this class is randomised quicksort, wherein the pivot p is chosen randomly from the array of elements.
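Since the text names randomised quicksort as the classic Las Vegas example, here is a minimal C sketch (our own illustration, not the authors' code; call srand() once beforehand): the pivot index is drawn at random, so the run time varies between runs, but the output is always correctly sorted.

    #include <stdlib.h>

    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    /* Randomised quicksort on arr[lo..hi]: a Las Vegas algorithm, since the
     * result is always correct while the run time is a random variable. */
    static void rquicksort(int *arr, int lo, int hi)
    {
        if (lo >= hi) return;
        int p = lo + rand() % (hi - lo + 1);   /* random pivot index */
        swap(&arr[p], &arr[hi]);               /* move pivot to the end */
        int store = lo;
        for (int i = lo; i < hi; i++)
            if (arr[i] < arr[hi])
                swap(&arr[i], &arr[store++]);
        swap(&arr[store], &arr[hi]);           /* pivot into final position */
        rquicksort(arr, lo, store - 1);
        rquicksort(arr, store + 1, hi);
    }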

1.3 Pthread Library

The IEEE POSIX 1003.1c standard (1995) defines a standardised programming interface that provides the full capabilities of threads on UNIX systems. POSIX threads, or pthreads, are the thread implementations which adhere to this standard. A pthread implementation is available with the GCC compiler and has been used here for multithreading; we implemented all the algorithms in C and compiled them with GCC.
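For readers unfamiliar with the API, the following minimal pthread sketch (our illustration; compile with gcc -pthread) shows the create/join pattern reused later by the multithreaded Las Vegas variant:

    #include <pthread.h>
    #include <stdio.h>

    /* Each worker receives its id through the void* argument. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("thread %ld running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);   /* wait for all workers to finish */
        return 0;
    }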

1.4 Baccarat

Baccarat is a card game played from a shoe of 8 decks. There are 416 * 415 * 414 * 413 possible cases for the first hand, which makes it almost impossible to follow for patterns or predictions; even calculating the probabilities with a great deal of accuracy takes a large amount of time. Hence, it served as a suitable problem statement to be solved by our algorithms.

The game of baccarat is dealt from a shoe containing 1–8 decks of 52 cards each. Tens and face cards are counted as zero. A hand of two cards is dealt to the banker and to the player alternately, and bets are placed on the banker's hand, on the player's hand, or on a tie. The hand whose sum of card face values is closest to 9 wins; if the hands have the same value, the result is a tie.

The rest of the paper is organised as follows. Section 2 gives a brief outline of recent work related to these algorithms. Section 3 describes the proposed design, implementation, and optimisation of the Las Vegas and Monte Carlo algorithms, together with an explanation of the game of baccarat. Section 4 analyses the results obtained, and Sect. 5 compares all the proposed algorithms with each other.
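To make the scoring rule concrete, here is a small hedged C sketch (ours, not the authors'): card indices 0–415 are mapped to ranks, tens and face cards count as zero, and a two-card total is taken modulo 10, the standard baccarat convention for "closest to 9" (the paper itself does not spell out the modulo step).

    #include <stdio.h>

    /* Face value of a card index in [0, 415]: ranks map to A=1, 2..9 face
     * value, and 10/J/Q/K = 0. */
    static int card_value(int card)
    {
        int v = card % 13 + 1;     /* 1 = ace, ..., 13 = king */
        return (v >= 10) ? 0 : v;  /* tens and face cards count as zero */
    }

    /* Two-card baccarat hand total: sum of face values modulo 10. */
    static int hand_total(int c1, int c2)
    {
        return (card_value(c1) + card_value(c2)) % 10;
    }

    int main(void)
    {
        printf("%d\n", hand_total(4, 20));   /* 5 + 8 -> 3 */
        return 0;
    }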


2 Related Works

Randomised algorithms prove very useful when an attacker intentionally tries to give bad inputs to an algorithm, as in the Prisoner's Dilemma. They also have various applications in cryptography: the numbers chosen in cryptographic applications cannot be pseudo-random, since they could then be predicted by the attacker, so the source of the numbers must be truly random. Quantum computing is another important area that uses the concept of randomness. A randomised algorithm was first used by Michael O. Rabin as a method for the closest-pair problem in computational geometry, and after the discovery of a randomised primality test, the study of these algorithms took a great leap.

Monte Carlo and Las Vegas algorithms have been put to use in different fields. In [6], randomised algorithms are used for computing high-dimensional Gaussian-weighted integrals: the integrals are calculated using Markov chain Monte Carlo and then compared, and the approach can be generalised to integrals with general weight functions other than Gaussian weights. Monte Carlo has also been used to approximate the energy cost of a problem instance [7], a complex scenario because of the impact of the distribution of run times; that work considers the Weibull and Pareto distributions, two common continuous run-time distributions, and demonstrates the interesting and uncommon relationship between parallelism, run time, and energy cost in combinatorial solving. Related to this is [8], where tolerance analysis is formulated mathematically to simulate the effects of geometrical deviations on the geometrical behaviour of a mechanism; two approaches, based on quantified constraint-satisfaction-problem solvers and Monte Carlo simulation, are suggested and tested.

One of the works most closely related to ours is a probabilistic model for parallel execution of Las Vegas algorithms [9], which analyses the run-time distribution of sequential runs of the algorithm; while their approach speeds up the algorithm by analysing gathered data, we use multithreading for this. Another work on Las Vegas evaluates algorithm performance by identifying empirical run-time distributions [10]; the authors demonstrate the approach on stochastic local search (SLS) algorithms for the satisfiability problem (SAT) in propositional logic, and also discuss the pitfalls caused by improper methods and the benefits of their approach. A lucid and universal strategy with the following property is presented in [11]: for any algorithm A,

$$T(A, \mathcal{S}_{\mathrm{univ}}) = O(\ell_A \log \ell_A) \qquad (2)$$

where $\mathcal{S}$ denotes a strategy, $T(A, \mathcal{S})$ is the expected running time of the simulation of A under strategy $\mathcal{S}$, and $\ell_A$ is the expected run time of A under its optimal strategy. Applications of this method are also found in Markov processes, where correct results are obtained after termination of an iterative method [12]; in identifying errors in circuits of logic gates using partitioning algorithms [13]; and in communication networks, where clustering algorithms find optimal routing in a topology to fulfil time constraints [14].

3 Proposed Approach

3.1 Using Monte Carlo Method

A simple example to explain this class is given below:

    Algorithm
      repeat 300 times:
        k = RandInt(n)
        if A[k] == 1
          return k
      return "Failed"

The example above runs for the specified 300 iterations, so it is possible that in all 300 attempts the algorithm never finds a '1' in the array. Monte Carlo therefore wagers on the correctness of the result rather than on the run time. In our application, we checked a given number of chances in baccarat with the given shoe (8 decks), and for each chance we calculated a random outcome using a baccarat simulation:

    Algorithm
      for i in (0, n):
        check(randcard())
        increase counters (player, banker, tie, chances) appropriately
      calculate and show probability of all outcomes to the user

Since we cannot count the same card combination more than once, we create a flag array whose size equals the total number of cards in the shoe (416 in our case). Each time a card A is randomly chosen, we change the value of array[A] from 0 to 1, indicating that this card must not be chosen again. Since we simply access values through the index, the operation does not take long.

Mathematical Representation. Let N be the number of test cases taken before the result is shown. For each iteration 1 to N, let C1, C2, C3, C4 be randomly generated cards from the available shoe, where

shoe = values [0–415] − used set {} (null at the start of the game),

and let a, b, c, d be the face values of C1, C2, C3, C4, respectively. Our operations can be seen in the equation given below:

$$\text{Total Probability} = 1 = \frac{\sum_{\text{Player Wins}}^{N}(a,b,c,d)}{N} + \frac{\sum_{\text{Banker Wins}}^{N}(a,b,c,d)}{N} + \frac{\sum_{\text{Tie}}^{N}(a,b,c,d)}{N}$$

Note: each summation separated by the '+' operator is a probability to be shown in the output.
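A hedged C sketch of the sampling loop just described (our reconstruction under the paper's stated scheme; helper names are ours, and the win check is a simplified two-card comparison, not the full drawing rule): draw four distinct cards per trial using a 416-entry flag array, classify the outcome, and report relative frequencies after n trials.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define SHOE 416

    static int card_value(int c) { int v = c % 13 + 1; return v >= 10 ? 0 : v; }

    /* Draw a card index not yet flagged as used in this trial. */
    static int draw(unsigned char *used)
    {
        int c;
        do { c = rand() % SHOE; } while (used[c]);
        used[c] = 1;
        return c;
    }

    int main(void)
    {
        long n = 1000000, player = 0, banker = 0, tie = 0;   /* n is adjustable */
        srand((unsigned)time(NULL));
        for (long i = 0; i < n; i++) {
            unsigned char used[SHOE] = {0};   /* per-trial flag array */
            int p = (card_value(draw(used)) + card_value(draw(used))) % 10;
            int b = (card_value(draw(used)) + card_value(draw(used))) % 10;
            if (p > b) player++; else if (b > p) banker++; else tie++;
        }
        printf("P(player)=%.4f P(banker)=%.4f P(tie)=%.4f\n",
               (double)player / n, (double)banker / n, (double)tie / n);
        return 0;
    }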

3.2 Using Monte Carlo Method with Data Structures

In the second version of Monte Carlo, with data structures, we used arrays of linked lists connected hierarchically to form a 416-wide tree-like structure. Each node is kept null until needed, and each leaf node represents a unique hand. If, while traversing the nodes for a set of cards, a leaf node is found to be not null, that particular combination of cards has already been discovered in a previous iteration, and the algorithm instead looks for a new combination, thereby increasing the accuracy of the resulting probability. This is done because every combination taken into account for the calculation must be unique.

    Algorithm
      for i in (0, n):
        get 4 randcard()
        traverse node forest for the 4 cards
        if the 4 cards exist in the forest:
          i--; continue
        else:
          initialise nodes
          check(4 cards)
          increase counters (player, banker, tie, chances) appropriately
      calculate and show probability of all outcomes to the user

This algorithm takes more time and memory space than the plain Monte Carlo implementation described above, because creating and then searching a tree is expensive.

Mathematical Representation. Let N be the number of test cases taken before the result is shown. For each iteration 1 to N, let C1, C2, C3, C4 be randomly generated cards from the available shoe that are unique together, where

shoe = values [0–415] − used set {} (null at the start of the game),

and let a, b, c, d be the face values of C1, C2, C3, C4, respectively. Our operations can be seen in the equation given below:

$$\text{Total Probability} = 1 = \frac{\sum_{\text{Player Wins}}^{N}(a,b,c,d)}{N} + \frac{\sum_{\text{Banker Wins}}^{N}(a,b,c,d)}{N} + \frac{\sum_{\text{Tie}}^{N}(a,b,c,d)}{N}$$

Note: each summation separated by the '+' operator is a probability to be shown in the output.
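A hedged C sketch of the uniqueness forest (our reconstruction; the paper's actual node layout may differ): three lazily allocated 416-wide pointer levels lead to a 416-byte leaf table, so a 4-card combination can be tested and marked in four index steps.

    #include <stdio.h>
    #include <stdlib.h>

    #define SHOE 416

    static void *root[SHOE];   /* level 1 of the lazily allocated forest */

    /* Return 1 if (c1,c2,c3,c4) was seen before; otherwise mark it and
     * return 0. Allocation failures are not handled in this sketch. */
    static int seen_before(int c1, int c2, int c3, int c4)
    {
        void **l1 = root;
        if (!l1[c1]) l1[c1] = calloc(SHOE, sizeof(void *));
        void **l2 = l1[c1];
        if (!l2[c2]) l2[c2] = calloc(SHOE, sizeof(void *));
        void **l3 = l2[c2];
        if (!l3[c3]) l3[c3] = calloc(SHOE, 1);   /* leaf: one flag byte per card */
        unsigned char *leaf = l3[c3];
        if (leaf[c4]) return 1;                  /* combination already counted */
        leaf[c4] = 1;
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", seen_before(1, 2, 3, 4), seen_before(1, 2, 3, 4));  /* 0 1 */
        return 0;
    }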

3.3 Using Las Vegas Method

To demonstrate this class of algorithm, we use the following example:

    Algorithm
      repeat:
        k = RandInt(n)
        if A[k] == 1
          return k

An array A is indexed by a randomly generated variable k, and k is returned if that index contains the value 1; otherwise the process is repeated until a 1 is found. This Las Vegas algorithm finds the correct answer on all occasions, but because of the randomisation it has no fixed run time: an arbitrarily long time may elapse before the algorithm terminates.

As noted, Las Vegas runs for an indefinite time span but always gives the correct result. We implemented this algorithm in two ways, using the brute-force technique and using multithreading, both in C. We shifted from brute force to multithreading because of the extremely high time complexity of the former. In the brute-force technique, we iterate sequentially through all the remaining card combinations in the deck and compute the win or loss in each possible case. We then calculate the probabilities of the cases in which the banker won, the player won, and there was a tie. The calculated probabilities are presented to the user, who can plan the next move accordingly. The algorithm we used is as follows:

    Algorithm
      for i in (0, 416):
        if card_is_drawn, continue
        for j in (0, 416):
          if card_is_drawn, continue
          for k in (0, 416):
            if card_is_drawn, continue
            for l in (0, 416):
              if card_is_drawn, continue
              checkWin(i, j, k, l)

Mathematical Representation.
shoe = values [0–415] − used set {} (null at the start of the game)
total = total combinations possible from the available shoe

Let C1, C2, C3, C4 be cards from the available shoe that are unique together, and let a, b, c, d be their face values, respectively. For 1 to total chances, our operations can be seen in the equation given below:

$$\text{Total Probability} = 1 = \frac{\sum_{\text{Player Wins}}^{\text{total}}(a,b,c,d)}{\text{total}} + \frac{\sum_{\text{Banker Wins}}^{\text{total}}(a,b,c,d)}{\text{total}} + \frac{\sum_{\text{Tie}}^{\text{total}}(a,b,c,d)}{\text{total}}$$

Note: each summation separated by the '+' operator is a probability to be shown in the output.

3.4 Using Las Vegas Method with Multithreading

As we can see, plain Las Vegas uses four for loops, which is not an optimal approach and increases the execution time. As a solution to this problem, we chose to switch to multithreading. As discussed in the introduction, a thread is a lightweight process; using threads, we can exploit modern microprocessor architectures and make the programme faster. In C this is possible using the pthread library.

    Algorithm
      create 416 POSIX threads
      for each thread:
        for j in (0, 416):
          if card_is_drawn, continue
          for k in (0, 416):
            if card_is_drawn, continue
            for l in (0, 416):
              if card_is_drawn, continue
              checkWin(tid, j, k, l)

We thus created 416 threads that completely take up the workload of one of the four for loops, reducing the time complexity by a full polynomial degree. The threads work concurrently, which speeds up the processing. All possible card combinations are tested using the three remaining for loops together with the unique thread id assigned to each thread, which serves as the first card.

Mathematical Representation.
shoe = values [0–415] − used set {} (null at the start of the game)
total = total combinations possible from the available shoe
threadload = total/threads

Let C1, C2, C3 be cards from the available shoe that are unique together, and let a, b, c be their face values, respectively (the thread id supplies the remaining face value d). For 1 to threadload chances,

$$\text{thread}_X = \frac{\sum_{\text{Player Wins}}^{\text{threadload}}(a,b,c,d)}{\text{threadload}} + \frac{\sum_{\text{Banker Wins}}^{\text{threadload}}(a,b,c,d)}{\text{threadload}} + \frac{\sum_{\text{Tie}}^{\text{threadload}}(a,b,c,d)}{\text{threadload}}$$

Here X is taken sequentially from the shoe, such that

$$\sum_{x=1}^{\text{shoesize}} \text{thread}_x = 1$$

Note: each summation separated by the '+' operator is a probability to be shown in the output.
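A hedged pthread sketch of this partitioning (our reconstruction, with hypothetical helper names; 416 threads as in the paper, each thread's id fixing the first card, and per-thread tallies so no locking is needed). Note that a full run enumerates tens of billions of hands and takes a long time, consistent with the timings reported later.

    #include <pthread.h>
    #include <stdio.h>

    #define SHOE 416

    static unsigned char drawn[SHOE];   /* cards already dealt in the game */

    static int card_value(int c) { int v = c % 13 + 1; return v >= 10 ? 0 : v; }

    /* Simplified two-card-per-hand comparison: >0 player, <0 banker, 0 tie. */
    static int hand_cmp(int i, int j, int k, int l)
    {
        return (card_value(i) + card_value(j)) % 10
             - (card_value(k) + card_value(l)) % 10;
    }

    typedef struct { long player, banker, tie; } Tally;
    static Tally tally[SHOE];           /* one tally per thread */

    static void *scan_from(void *arg)
    {
        long i = (long)arg;             /* thread id doubles as the fixed first card */
        if (drawn[i]) return NULL;
        for (int j = 0; j < SHOE; j++) { if (drawn[j] || j == i) continue;
        for (int k = 0; k < SHOE; k++) { if (drawn[k] || k == i || k == j) continue;
        for (int l = 0; l < SHOE; l++) { if (drawn[l] || l == i || l == j || l == k) continue;
            int r = hand_cmp(i, j, k, l);
            if (r > 0) tally[i].player++;
            else if (r < 0) tally[i].banker++;
            else tally[i].tie++;
        }}}
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[SHOE];
        for (long i = 0; i < SHOE; i++)
            pthread_create(&tid[i], NULL, scan_from, (void *)i);
        for (int i = 0; i < SHOE; i++)
            pthread_join(tid[i], NULL);   /* tallies can now be summed safely */
        long p = 0, b = 0, t = 0;
        for (int i = 0; i < SHOE; i++) { p += tally[i].player; b += tally[i].banker; t += tally[i].tie; }
        double total = (double)(p + b + t);
        printf("P(player)=%.4f P(banker)=%.4f P(tie)=%.4f\n", p/total, b/total, t/total);
        return 0;
    }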

3.5 Simulation of Baccarat

Our proposed algorithms find the probability of winning in the card game of baccarat; the flowchart explains how the algorithm simulating the game works. The random() function in C is used to generate random card values, as a dealer would draw from a shoe of cards. The face value of each card is computed, and the sums of the cards of the banker's and player's hands are taken; note that in baccarat, a card with a face value greater than 9 is counted as 0. A few rules must be followed regarding the calculation of the hand sums: if the sum of the player's hand is greater than 5 and the banker's hand is less than 6, the banker must draw one more card from the deck; in the opposite case, the player draws an additional card. The face value of this additional card is then added to the final sum. The hand with the higher face-value sum wins the current round; if the sums are equal, the result is a tie. In this paper, we have considered a simulation with a shoe containing 8 decks of cards (Fig. 2).
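A hedged C sketch of one simulated round under the drawing rule exactly as stated above (our code, with the paper's simplified rule rather than full casino baccarat; for brevity, drawn cards are not tracked here, and srand() should be called once beforehand):

    #include <stdlib.h>

    static int card_value(int c) { int v = c % 13 + 1; return v >= 10 ? 0 : v; }
    static int rand_card(void)  { return rand() % 416; }

    /* Play one round under the paper's simplified drawing rule.
     * Returns +1 if the player wins, -1 if the banker wins, 0 for a tie. */
    int play_round(void)
    {
        int player = card_value(rand_card()) + card_value(rand_card());
        int banker = card_value(rand_card()) + card_value(rand_card());
        if (player > 5 && banker < 6)
            banker += card_value(rand_card());   /* banker draws a third card */
        else if (banker > 5 && player < 6)
            player += card_value(rand_card());   /* player draws a third card */
        if (player > banker) return  1;
        if (banker > player) return -1;
        return 0;
    }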

Fig. 2 Flowchart of baccarat simulation: choose the algorithm to be used; place a bet on banker, player, or tie; display the cards of both hands and the winner of that round; display the probability of winning of each hand for the next round; repeat till the end of the game.

4 Result Analysis

To analyse the results and performance of our algorithms, we ran the simulation of the baccarat game. The total number of combinations that the application checks is around 12,000 crores. The application first displays a list of the four algorithms that can be used to calculate the probability of winning in the next round; we choose one of them and obtain the required result. With Monte Carlo, the calculation takes approximately 1 s and considers 20 crore unique combinations chosen at random. Monte Carlo using data structures runs for approximately 4 s; it takes more time than the plain version because of the overhead of allocating linked lists and traversing them in each iteration. The brute-force Las Vegas algorithm takes a long time, around 6 h, to compute all the 12,000 crore combinations sequentially. The optimised Las Vegas using multithreading shows a drastic reduction in execution time, to approximately 45 s, thanks to the parallel threads. The probabilities output by each algorithm are shown in Table 1.


Table 1 Resultant probability predictions

    Algorithm                           Banker (%)   Player (%)   Tie (%)
    Monte Carlo                         43.87        43.85        12.23
    Monte Carlo using data structures   44           43.8         12.16
    Las Vegas                           46.5         45.9         7.6
    Las Vegas using multithreading      46.15        46.01        7.83

To verify our results and ensure their correctness, we looked up the standard values of these probabilities [15] and found them to be as follows:

Probability of Banker ≈ 45.85%
Probability of Player ≈ 44.62%
Probability of Tie ≈ 9.53%

Therefore, the results given by our algorithms are in close accordance with the standard approximate probabilities. The user can further place a bet on the banker, the player, or a tie, and verify the probability of winning in each subsequent round.

5 Comparison of Algorithms

Four algorithms were designed to compare the Monte Carlo and Las Vegas methodologies; two were the vanilla versions, the Monte Carlo algorithmic unit (MCAU) and the Las Vegas algorithmic unit (LVAU). Following their normal protocols, MCAU, as expected, gave high performance, returning results within a second; its time complexity is O(n), where n is the number of randomly chosen cases. The faster speed comes at the cost of some error (2–5%). LVAU, on the other hand, gave dismal performance: going through all cases (≈30B), it needs hours to resolve each hand, with a time complexity of O(n^4), where n is the number of cards left in the shoe. At the cost of run time, however, we get results with no inaccuracy.

It was clear from the vanilla testing that MCAU works better, since the probability distribution in card games appears normalised throughout the deck, making it ideal for applications requiring fast processing, for example, big data analytics. However small the error percentage may seem for one hand, though, over consecutive applications it can compound into erroneous results in mission-critical applications where even such minute errors are unacceptable, for example, pharmaceuticals, molecular training, and deep learning. We therefore designed two more algorithms, each overcoming the drawbacks of the vanilla versions above while still adhering to the protocols of both methods: Monte Carlo with data structures (MCAUDS) and Las Vegas with threading (LVAUTH).


Table 2 Comparison of the algorithms

    Algorithm                                            Best case time   Worst case time   Space complexity
                                                         complexity       complexity        (bytes)
    Monte Carlo algorithmic unit                         Ω(n)             O(n)              877
    Monte Carlo algorithmic unit using data structures   Ω(n)             O(n^2)            2626
    Las Vegas algorithmic unit                           Ω(n^4)           O(n^5)            920
    Las Vegas algorithmic unit using threads             Ω(n^3)           O(n^4)            26658

MCAUDS uses an array of linked lists to form a forest data structure that memorises the card combinations already considered by the algorithm so that they are not repeated, thereby increasing the accuracy of the random choices. It was observed that MCAUDS took time to allocate and create the array of linked lists: building the forest for as few as 20,000 test cases consumed large amounts of memory and time. Since LVAUTH leverages multithreading, its time performance, for the same number of test cases as LVAU, improves manifold; the time taken by this algorithm fell from hours to seconds, although memory consumption increased (Table 2).

6 Conclusion and Future Scope

There are many methods to calculate probability in various fields, but these classes of algorithms have not been explored much to date, as they combine the concepts of randomisation and probability determination; as mentioned in the related works, they have been used in some other fields but not in this one. Using them, we could predict the probabilities for an enormous number of possible cases in less than a minute. The user can see the probabilities before each round and place bets accordingly, increasing the chances of winning in each round; the prediction of the algorithm also improves after each round, as more and more cards are withdrawn from the deck. The implementation of multithreading achieved an execution time approximately 99% lower than the brute-force technique, one of the most remarkable milestones of our research. Having done this, the same approach can be utilised in varied fields of research involving larger numbers of computations, giving optimal results in a few seconds. A similar approach can also be used in other card games such as Blackjack and Poker, and in various other applications such as weather forecasting, sports game prediction, and insurance option prediction.


References

1. Aspnes J (2020) Notes on randomized algorithms. arXiv:2003.01902
2. Cherkaoui A, Fischer V, Fesquet L, Aubert A (2013) A very high speed true random number generator with entropy assessment. In: International workshop on cryptographic hardware and embedded systems. Springer, Berlin, pp 179–196
3. Dejun L, Zhen P (2012) Research of true random number generator based on PLL at FPGA. Proc Eng 29:2432–2437
4. Barak B, Shaltiel R, Tromer E (2003) True random number generators secure in a changing environment. In: International workshop on cryptographic hardware and embedded systems. Springer, Berlin, pp 166–180
5. Yang YG, Zhao Q (2016) Novel pseudo-random number generator based on quantum random walks. Sci Rep 6(1):1–11
6. Zhao Z, Kumar M (2012) A comparative study of randomized algorithms for multidimensional integration. In: 2012 15th international conference on information fusion. IEEE, pp 2236–2242
7. Siala M, O'Sullivan B (2019) Combinatorial search from an energy perspective. Inf Proc Lett 148:23–27
8. Dantan JY, Qureshi AJ (2009) Worst-case and statistical tolerance analysis based on quantified constraint satisfaction problems and Monte Carlo simulation. Comput Aided Des 41(1):1–12
9. Truchet C, Richoux F, Codognet P (2013) Prediction of parallel speed-ups for Las Vegas algorithms. In: 2013 42nd international conference on parallel processing. IEEE, pp 160–169
10. Hoos H, Stützle T (1998) Evaluating Las Vegas algorithms: pitfalls and remedies. In: Proceedings of the 14th conference on uncertainty in artificial intelligence. arXiv:1301.7383
11. Luby M, Sinclair A, Zuckerman D (1993) Optimal speedup of Las Vegas algorithms. Inf Process Lett 47(4):173–180
12. Fujita R, Iwata KI, Yamamoto H (2019) An iterative algorithm to optimize the average performance of Markov chains with finite states. In: 2019 IEEE international symposium on information theory (ISIT). IEEE, pp 1902–1906
13. Scarabottolo I, Ansaloni G, Constantinides GA, Pozzi L (2019) Partition and propagate: an error derivation algorithm for the design of approximate circuits. In: 2019 56th ACM/IEEE design automation conference (DAC). IEEE, pp 1–6
14. Hsu CH, Hung SC, Chen H, Sun FK, Chang YW (2019) A DAG-based algorithm for obstacle-aware topology-matching on-track bus routing. In: 2019 56th ACM/IEEE design automation conference (DAC). IEEE, pp 1–6
15. Best-Baccarat-Online.com homepage. https://www.best-baccarat-online.net/baccarat-oddsplayer.html, last accessed 19 Feb 2020

Chapter 9

Comparative Analysis of Educational Job Performance Parameters for Organizational Success: A Review

Sapna Arora, Manisha Agarwal, and Shweta Mongia

1 Introduction

Data mining (DM) is one of the promising fields of information science, dealing with processes that fetch valid and useful patterns from large databases. These databases are composed of heterogeneous data from multiple data sources [1, 2], and various data mining techniques are applied to this heterogeneous data to support decision making. Data mining can be applied in various fields, such as the educational sector, biological data analysis, banking and finance, and the telecommunication sector. In this paper, the authors' main focus is on the educational sector.

Educational job performance means job performance relating to the educational sector, aimed at achieving organizational success. Data mining techniques are applied to job performance through two main entities, the student and the educationist. In short, educational job performance data mining refers to the application of data mining to job performance [3] related to these two entities. In terms of students, one can relate it to performance parameters such as study time, behavior, background, results, and consistency of records. On the other hand, educationist performance based on parameters such as teaching methods, interaction with students, and work experience is also a part of job performance data mining.

S. Arora (B) · M. Agarwal Banasthali Vidyapith, P.O. Banasthali Vidyapith, Banasthali, Rajasthan 304022, India e-mail: [email protected] S. Mongia University of Petroleum and Energy Studies, Dehradun, India



Fig. 1 Phases of job performance data mining

The paper is organized into the following sections. Section 2 presents the different phases associated with job performance data mining. Section 3 recapitulates the main points of the studies associated with different researches. Section 4 summarizes a comparative study of educational data mining tools, techniques, and parameters. Section 5 presents an analysis of the study, and Sect. 6 concludes the work with future directions in the concerned area.

2 Educational Job Performance Data Mining Phases

Job performance refers to the means of analyzing the work description and the goals an individual is achieving. It can be assessed for different entities in different ways; in terms of education, the student and the educationist are the two entities for which job performance can be analyzed. An employer can estimate the work performance of an educationist through various factors such as results and years of experience; alternatively, one can consider it according to self-assessment factors such as job satisfaction. In short, job performance mining is concerned with the process of fetching useful data from datasets [4] covering different parameters. As shown in Fig. 1, job performance is assessed in the following steps:

Step 1: The primary phase is to collect the data from various sources and to find the relations between the data through the use of data mining techniques.
Step 2: The second phase is to apply data mining techniques to the valid relationships.
Step 3: The final phase is decision making for job performance.

3 Related Studies

Many researchers have used numerous machine learning algorithms with different tools and techniques in various domains. In these papers, the emphasis is given to researches associated with the educational domain. The author of [5] analyzed the significance of the relationship between two factors, job satisfaction and teaching quality, using ANOVA, descriptive statistics, correlation regression analysis, and Pearson correlation analysis to identify and establish relationships, and found a positive correlation between the two factors. Pal et al. [6] evaluated numerous parameters associated with teachers' performance, such as organizational feedback and student feedback. The idea behind the study was to extract useful patterns associated with educational organizations and to uncover unidentified trends in teachers' performance; they used Naive Bayes, ID3, CART, and LAD for performance prediction and found Naïve Bayes to be the best.

In another study, Mythili [7] implemented five different algorithms (J48, random forest, decision tree, IB1, multilayer perceptron) to evaluate students' performance, using a dataset collected from students of different primary schools of Tiruchirappalli (Tamilnadu); random forest was found to be more accurate than the other algorithms. Similarly, a study [8] compared J48 and random tree on student records to predict the performance of third-semester MCA students of GGSIPU and concluded that random tree is more accurate in predicting performance. In [9], the authors used four different models, J48, random classifier, Naïve Bayes, and multilayer perceptron, to analyze student success; of these, the random classifier was found to be the best. Hemaid [10] proposed a model to evaluate performance using data mining techniques that can help improve teachers' performance; the authors used rule induction, Bayesian kernel, K-NN, and decision tree algorithms and compared the results, with KNN achieving an accuracy of 79.92%, higher than the other algorithms.

Jindal and Dutta [11] performed prediction analysis aimed at quality improvement and educational success at all levels, comparing five different algorithms, C4.5-A2, C4.5-A1, C5.0, NN, and CRT, on the same dataset; the accuracy of C5.0 came out to be the best among them. Another study [12] suggested how academic performance is helpful in predicting the placement of a student, emphasizing various parameters associated with students such as CGPA and chosen subjects; the authors used the J48 decision tree algorithm, decision tree, and Naïve Bayes to determine the type of company in which students would be placed, compared the accuracies of the classifiers, and found the J48 decision tree to be the best. Mustafa [13] shows an application of data mining techniques to analyze instructors' teaching performance. According to the author, teaching success depends on a student's interest in the subject and his perception; on this basis, a dataset was designed by selecting parameters covering numerous aspects such as student attendance, students' perception of teachers, assignments given, and interaction between students and teachers. C5.0, CART, SVM, ANN-Q2H, ANN-Q3H, ANN-M, and DA were then applied to this dataset for prediction; the paper analyzes the effectiveness of the models and found C5.0 to be the best in performance.

Another study [14] analyzed the performance of B.Tech-IT students, integrating the use of classifiers with the results of four years of progression in order to predict the students' educational outcomes. The classifiers used were decision tree with gini index (DT-GI), decision tree with information gain (DT-IG), decision tree with accuracy (DT-Acc), rule induction with information gain (RI-IG), 1-nearest neighbor (1-NN), Naive Bayes, neural networks (NN), random forest with gini index (RF-GI), random forest with information gain (RF-IG), and random forest with accuracy (RF-Acc); the results of 1-NN were found to be the best. Pal [15] analyzed primary data collected from MCA students of VBS Purvanchal University, Jaunpur, India, and assisted the students who need special counseling to understand the dangers associated with alcohol; four data mining algorithms were used, sequential minimal optimization, bagging, REP, and decision table, of which bagging had the best accuracy. This approach worked towards improving the academic performance of students prone to alcohol addiction. Bhatnagar and Saxena [16] compared five techniques for evaluating faculty performance in higher educational institutions, with collective student feedback as the evaluation criterion; among logistic regression, decision trees, linear SVM, ANN, and Naive Bayes, the decision tree was identified as the best classifier to evaluate faculty performance. Adekitan [17] implemented a predictive analysis of the final-year CGPA of engineering students using the year of entry, program of study, and grade point average (GPA) of the first three years as variables, using the Konstanz Information Miner (KNIME) application to identify interrelations between features of the dataset. Six data mining algorithms were implemented: probabilistic neural network (PNN), random forest, decision tree, Naïve Bayes, tree ensemble, and logistic regression. Principal component analysis was performed on the data to establish relations, and random forest was found to be the best.

4 Comparative Study of Educational Data Mining Tools, Techniques and Parameters

Educational job performance data mining concerns the job performance of the two main entities, the student and the educationist, with the aim of achieving organizational success. A number of problems persist in the educational domain because emphasis is placed on one entity, the student, but not on the other, the educationist, who builds up the former. The solution to this problem is to study different perspectives [3, 16, 18] in order to achieve organizational success by predicting the performance of both students and educationists. For the futuristic strategic management [19] of an organization, the use of data mining tools [20] is essential to understand the parameters associated with students and educationists and to work on them in order to improve their job performance.

Various studies have used tools such as WEKA, R, KNIME, Rapid Miner, and D2K to analyze the performance of students as well as educationists, but none of them has emphasized both entities (student as well as educationist). In this paper, a comparative analysis has been done of the different tools (WEKA, R, KNIME, Rapid Miner, SPSS, etc.), techniques (decision tree, CART, Naïve Bayes, random forest, etc.), algorithm accuracies, and challenges [21] associated with both entities (Table 1).


Table 1 Comparative study of tools, techniques, and parameters

1. Paper: The impact of engineering students' performance in the first three years on their graduation result using EDM [17]
   Purpose: Predictive analysis of the final CGPA of engineering students from the GPA of the first three years
   Dataset/tool: Dataset of 1841 students obtained from a study by Popoola (2018)
   Algorithm and accuracy: PNN 85.89%; random forest 87.70%; decision tree 87.85%; Naïve Bayes 86.43%; tree ensemble 87.88%; logistic regression 89.15%
   Parameters used: First-year GPA, second-year GPA, third-year GPA, final CGPA
   Advantages and limitations: Advantage: beneficial for students in improving their performance. Limitations: no solution is suggested for handling poor-result/defaulter students; the study could be more effective with a larger sample size.

2. Paper: Analysis of faculty performance evaluation using classification [16]
   Purpose: To provide feedback to faculty in order to assist and enhance their performance
   Dataset/tool: From students of Jaipur National University; R tool
   Algorithm and accuracy: Logistic regression 97%; decision tree 97%; SVM 95.9%; NNET 95.9%; Naïve Bayes 95.9%
   Parameters used: Target variables "Pass and Fail" and "Good Grades" (w.r.t. all factors of faculty performance evaluation); predictors: Agg_time, Agg_subject, Agg_teaching_methods, Agg_helping_attitude, Agg_Laboratory_interaction, Agg_Class_Control
   Advantages and limitations: Advantage: the study gives organizations an insight into evaluating performance. Limitations: parameters such as courses done and interaction with students need to be included for better evaluation; faculty job performance is shown only from the viewpoint of students and results.

3. Paper: Performance analysis of students consuming alcohol using data mining techniques [15]
   Purpose: To improve the efficiency of academic performance in educational institutions for students who consume alcohol
   Dataset/tool: Dataset obtained from 250 students of the 2010–11 to 2015–16 batches of the MCA Department, VBS Purvanchal University, Jaunpur, India; WEKA tool
   Algorithm and accuracy: SMO 73.41%; bagging 80.25%; REP tree 80%; decision table 79.24%
   Parameters used: School, sex, age, address, famsize, p status, Medu, Fedu, Mjob, Fjob, reason, guardian, travel time, study time, failures, schoolsup, famsup, activities, higher, internet, romantic, famrel, freetime, goout, Dalc, Walc, health, absences, G1, G2, G3
   Advantages and limitations: Advantage: helps students who need special counseling to leave alcohol by understanding the dangers associated. Limitation: the judgment level of an alcoholic person is not justified.

4. Paper: Analyzing undergraduate students' performance using educational data mining [14]
   Purpose: To speculate on students' academic achievement using classifiers, relating the result of progression with the prediction
   Dataset/tool: Dataset of 210 samples from students enrolled 2007–2009 at a public-sector engineering university in Pakistan; Rapid Miner software
   Algorithm and accuracy: DT-GI 68.27%; DT-IG 69.23%; DT-Acc 60.58%; RI-IG 55.77%; 1-NN 74.04%; Naive Bayes 83.65%; neural networks (NN) 62.50%; RF-GI 71.15%; RF-IG 69.23%
   Parameters used: Adj_Marks (HSC examination total marks), Maths_Marks (mathematics marks), MPC (PCM marks), HS-205/206 (Islamic studies or ethical behavior), MS-121 (applied physics), CS-251 (logic design and switching theory), CT-255 (assembly language programming), HS-207 (financial accounting and management)
   Advantages and limitations: Advantage: indicators of low performance help weak students improve their results. Limitations: dropouts and failures are not investigated as part of this study; the classifiers are not interpretable for humans, i.e., for predicting the effect of a course on performance.

5. Paper: Predicting instructor performance using data mining techniques in higher education [13]
   Purpose: To analyze the instructor's teaching success based on students' interest in the subject
   Dataset/tool: Dataset created from the students of Marmara University, Istanbul, Turkey
   Algorithm and accuracy: C5.0 92.3%; CART 89.9%; SVM 91.3%; ANN-Q2H 91.2%; ANN-Q3H 90.8%; ANN-M 90.5%; DA 90.5%
   Parameters used: 26 variables related to the course and the instructor were used for dataset design
   Advantages and limitations: Advantage: an effective solution for educational and administration purposes. Limitation: improvement needed in designing measuring metrics for better results related to instructor performance.

6. Paper: Application of data mining in predicting placement of students [12]
   Purpose: Predicting the final placement of students in different company types (core IT/consultancy) on the basis of academic performance
   Dataset/tool: Dataset taken from students of the Department of Computer Science, Thapar University, Patiala (Punjab); WEKA tool
   Algorithm and accuracy: J48 decision tree 95.52%; WEKA J48 82.39%; Naïve Bayes 62.3%
   Parameters used: 27 attributes including CGPA, discrete mathematics, data structures, OOPS, OS, computer system architecture, system analysis, computer networks, principles of programming languages, algorithm design, database systems, software engineering
   Advantages and limitations: Advantages: helps students analyze where they are lacking so they can improve; also beneficial for educators, HODs, and placement officers in overcoming problems faced during the final placement of students. Limitation: more attributes (such as skills and achievements) would need to be emphasized.

7. Paper: Predictive analytics in higher education [11]
   Purpose: To show the ability of predictive analytics to resolve issues such as decision prediction and enrolment management, through a case study at DTU
   Dataset/tool: AIEEE 2007 dataset collected from NIC; WEKA and SPSS Clementine tools; tenfold validation method
   Algorithm and accuracy: C5.0 99.95%; C4.5-A2 67%; C4.5-A1 60%; CRT 98.72%; neural network 98.09%
   Parameters used: AIEEE rank, category rank, category, birth year, gender, family pressure, eligibility counseling, course, branch name, midterm marks, technical participation related to subject, subject–teacher interaction, lectures attended, understanding of theory, practice, subject scope and grade
   Advantages and limitations: Advantages: performance of C5.0 is better than C4.5; C5.0 provides maximum accuracy during training and testing, leading to maximum information gain. Limitation: limited data used during the training phase.

8. Paper: Improving teacher performance using data mining [10]
   Purpose: Development of a data-mining-based model that can evaluate teacher performance
   Dataset/tool: Teacher dataset of 813 records from the Ministry of Education and Higher Education in Gaza City (2010–2013)
   Algorithm and accuracy: Rule induction 76.23%; Bayesian kernel 77.46%; KNN 79.92%
   Parameters used: Teacher_name, Teacher ID, Classification, Qualification, Specific, Course1, Class1, Course2, Class2, Date_Of_Work, Workplace, Upper_Workplace, number of days and hours of training courses 1 and 2, Q1 to Q29 (based on goal courses, trainees, and trainers)
   Advantages and limitations: Advantages: such a model can help in improving teacher performance; better for educational organizations. Limitation: a few more factors (e.g., employment status, instructor's attitude) need to be included to make the research more effective.

9. Paper: Prediction of student's success by applying data mining algorithms [9]
   Purpose: To develop a predictive model to analyze the academic performance of students
   Dataset/tool: Data obtained through a survey of 907 senior secondary school students from the 2011–2012 session in Tuzla canton; WEKA tool
   Algorithm and accuracy: J48 64.68%; random classifier 69.48%; multilayer perceptron 59.21%; Naïve Bayes 56.29%
   Parameters used: Sex, age, type_of_school, address, parents' cohabitation status, mother's education, mother's job, father's education, father's job, family size, reason to choose this school, home-to-school travel time, type of travel from home, monthly scholarship, weekly study time, internet access at home, importance of grades obtained, years of schooling, average income of parents, final grade
   Advantages and limitations: Advantages: the approach can be user-friendly for new users and professors; the methodologies used can be useful in decision making.

10. Paper: Mining students' data for performance prediction [8]
    Purpose: Prediction of the performance of third-semester MCA students
    Dataset/tool: Sample of 250 MCA students from GGSIPU; WEKA tool
    Algorithm and accuracy: J48 88.37%; random tree 94.41%
    Parameters used: Gender, father's education, mother's education, father's occupation, mother's occupation, tenth, twelfth, grad, first_sem, sec_sem, thr_sem, graddegtype, graddegstream, gapyear, acadmichrs, assertion, empathy, decision making, leadership, drive, stress mgmt [22]
    Advantages and limitations: Advantage: the approach emphasizes academic, social, and emotional parameters. Limitation: social parameters have less effect on students' performance.

11. Paper: An analysis of student's performance using classification algorithms [7]
    Purpose: To evaluate school students' performance by applying data mining classification algorithms
    Dataset/tool: 260 samples from students of different primary schools, Tiruchirappalli (Tamilnadu); WEKA tool
    Algorithm and accuracy: Decision tree classifier 86.15%; C4.5 (J48) 86.15%; random forest 89.23%; neural network 87.30%; lazy-based classifier (IB1) 80.38%
    Parameters used: Gender, locality, Paredu, eco, attendance, result
    Advantages and limitations: Advantage: this kind of study can be extremely useful in assisting faculty members to obtain a good result. Limitation: only details of the decision tree are given in the study.

12. Paper: Evaluation of teacher's performance: a data mining approach [6]
    Purpose: To analyze teacher performance on the basis of multiple factors such as student feedback, organizational feedback, and institutional support
    Dataset/tool: From postgraduate students at the Department of College of Engineering, VBS Purvanchal University, Jaunpur (U.P.); WEKA tool
    Algorithm and accuracy: Naïve Bayes 80.35%; ID3 65.17%; CART 72.32%; LAD 75.00%
    Parameters used: Teacher's name, speed of delivery, content arrangement, presentation, communication, knowledge content delivery, explanation power, doubt clearing, discussion of problems, overall completion of course and regularity, students' attendance, result, performance of teacher
    Advantages and limitations: Advantage: an effective way to extract unidentified trends in a teacher's performance. Limitations: some more effective variables need to be introduced; a few attributes did not show a clear effect on performance prediction.

13. Paper: Job satisfaction of educationists: an important antecedent for enhancing service quality in the education sector of Pakistan [5]
    Purpose: To establish a positive relationship between job satisfaction and teaching quality
    Dataset/tool: Sample collected from 206 faculty members of different public- and private-sector universities in Pakistan
    Algorithm and accuracy: One-way analysis of variance was used to check the control variables; descriptive statistics and correlation regression analysis were used to find the relationship between variables. A positive relationship exists between the factors JS and TQ, with a mean of 3.59
    Parameters used: Demographic variables (gender, qualifications, income, work experience, and province); job satisfaction (13 items); teaching quality (9 items)


5 Analysis

Applying data mining techniques in the educational domain is important and effective only if all the factors related to both entities (student and educationist) are analyzed. This paper emphasizes the factors that support students as well as educationists. The counts of studies addressing each entity are given in Fig. 2, which clearly shows that 38.5% of the studies are associated with educationists and 61.5% with students.

Fig. 2 Count of performance w.r.t. study

Numerous parameters, such as student feedback, course completion, teaching methods, and course training, play a major role in assessing faculty performance; however, factors such as the viewpoint of the faculty from the employer's perspective should also be included in the studies. Similarly, parameters such as CGPA, locality, relations with educationists, personal factors, and background play an important role for students, and a few more factors, such as details of defaulters and low scorers, need to be added for precise results.

The algorithms used for educational job performance of students and educationists have mainly targeted two parameters, student–educationist interaction and students' perception. It has been observed across studies that random forest has been used for smaller datasets, while decision trees and logistic regression are best when the sample size is larger. With multiple subsets of a small dataset, a plain decision tree may not be very effective, but a structured ensemble of decision trees, i.e., bagging, may give better results. Moreover, C5.0 has been observed to have better accuracy than C4.5. For better results, parameters must be chosen carefully.

6 Conclusion

This paper draws attention to the numerous factors associated with the job performance of students and educationists. In brief, it can be concluded that data mining algorithms have great potential in the educational field for achieving better organizational performance. The different parameters in the study reveal that the algorithms work best only if the criteria of evaluation are clear. An extension to this work could be to predict the interrelation among the factors which affect the performance of students or educationists using data mining algorithms.

References

1. Arora S, Agarwal M (2018) Empowerment through big data—issues and challenges. Int J Sci Res Comput Sci Eng Inf Technol 3:423–431
2. Arora S (2016) A novel approach to notarize multiple datasets for medical services. Imperial J Interdisc Res 2(7):325–328
3. Quan P, Liu Y, Zing T, Wen Y (2018) A novel data mining approach towards human resource performance appraisal. Springer International Publishing AG, pp 476–488
4. Radaideh A, Qasem N, Eman A (2012) Using data mining techniques to build a classification model for predicting employees performance. Int J Adv Comput Sci Appl 3(2):144–151
5. Waqas M, Qureshi T, Anwar F, Haroon S (2012) Job satisfaction of educationists: an important antecedent for enhancing service quality in the education sector of Pakistan. Arabian J Bus Manage Rev 2(2):33–49
6. Pal A, Pal S (2013) Evaluation of teacher's performance: a data mining approach. Int J Comput Sci Mobile Comput:359–369
7. Mythili SM, Shanavas (2014) An analysis of student's performance using classification techniques. IOSR J Comput Eng 16(1):63–69
8. Mishra T, Kumar D, Gupta S (2014) Mining students' data for performance prediction. In: International conference on advanced computing and communication technologies, pp 255–262
9. Osmanbegovic EE, Agic H, Suljic M (2014) Prediction of student's success by applying data mining algorithms. J Theor Appl Inf Technol:378–388
10. Hemaid RK, Halees M (2015) Improving teacher performance using data mining. Int J Adv Res Comput Commun Eng:407–413
11. Jindal R, Dutta BM (2015) Predictive analytics in higher education. IEEE:24–33
12. Pruthi K, Bhatia P. Application of data mining in predicting placement of students. In: International conference on green computing and Internet of things. IEEE, pp 528–533
13. Mustafa A (2016) Predicting instructor performance using data mining techniques in higher education. IEEE:2379–2387
14. Asif R, Merceron A, Ali A, Haider GN (2017) Analysing undergraduate students' performance using educational data mining. Computers and Education, Elsevier, pp 177–194
15. Pal S, Chaurasia V (2017) Is alcohol affect higher education students performance: searching and predicting pattern using data mining algorithms. Int J Innov Adv Comput Sci:8–17
16. Bhatnagar S, Saxena PS (2018) Analysis of faculty performance evaluation using classification. Int J Adv Res Comput Sci 9(1):115–121
17. Adekitan AI, Salau O (2019) The impact of engineering students' performance in the first three years on their graduation result using educational data mining. Heliyon, Elsevier, pp 1–21
18. Anuradha C, Velmurugan T (2015) A comparative analysis on the evaluation of classification algorithms in the prediction of students performance. Indian J Sci Technol 8:1–12
19. Agarwal S, Pandey GN, Tiwari MD (2012) Data mining in education: data classification and decision tree approach. Int J e-Educ e-Bus e-Manage e-Learn 2(2):140–144
20. Shah D (2017) Towards data science: comprehensive list of data mining tools [Online]. https://towardsdatascience.com/data-mining-toolsf701645e0f4c, 16 Nov 2017
21. Ahmad R, Bujang S (2013) Issues and challenges in the practice of performance appraisal activities in the 21st century. Int J Educ Res 4
22. Mehfooz Q, Haider S (2017) Effect of stress on academic performance of undergraduate medical students. J Commun Med Health Educ 7(6)

Chapter 10

Digital Anthropometry for Health Screening from an Image Using FETTLE App

Roselin Preethi and J. Chandra Priya

1 Introduction

The important scope of the FETTLE app is that it measures the human body accurately and scientifically; its main purpose is to determine human body size, structure, and composition. In fact, FETTLE helps psychologists assess attributes such as body size, height, and arm length in association with other human measurements, including sight (e.g., color, distance, and clarity), touch (e.g., sensitivity, weight, and pain), movement (e.g., rate and reaction time), memory, and mental fatigue. The app serves as a personal physician, monitoring individual health and development.

The combination of augmented reality and deep learning is one of the striking features of the FETTLE solution: AR technology is used to determine human attributes scientifically, and the app is trained to estimate the health risks of an individual. Screening human health through the mobile device's camera is another distinctive feature of FETTLE, as is analyzing an individual's anthropometric results against current health trends. Detecting health risks from a single image captured by the mobile camera, by triaging the anthropometric information of an individual, is the highlight of the novel approach. FETTLE also covers branches such as somatometry, cosmetology, cephalometry, craniometry, and osteometry used in the identification of human remains. In effect, FETTLE turns the mobile device into a personal physician: it guides the user in knowing his health status and the diseases he is at risk of by prompting his anthropometric values, thereby screening his overall health status. FETTLE compares a training set of records containing healthy reports with the captured image of an individual, determining health risks experimentally, like a medical practitioner.

R. Preethi · J. Chandra Priya (B)
Anna University, MIT Campus, Chennai, India
e-mail: [email protected]
R. Preethi
e-mail: [email protected]

2 Related Work

An approach to quantifying human attributes from an image in two-dimensional view is described in [2]. The human body fat estimation method discussed in [4] is useful for creating the dataset for weight prediction, and the significant body point labeling and tracking explained in [13] is useful for labeling body points in a predominant manner. The method for 3-D reconstruction of human body shape from a single commodity depth camera [7] helps in tracking the human body shape by reconstruction, while the model for real-time fully incremental scene understanding on mobile platforms [14] details how AR is rendered in the camera scene. The paper on computer-vision-based human body segmentation and posture estimation [26] explains human body segmentation from different postures and estimates the segmentation results. Another approach, on age synthesis and estimation via faces [22], describes recognition of the human face with respect to age and other factors. The work on 3-D facial landmark localization with asymmetry patterns and shape regression from incomplete local features [19] details landmark localization of human body shapes, triggering it with an asymmetric pattern and applying shape regression. Another approach, scanning 3-D full human bodies using Kinects [15], performs a full body scan and extracts the required details. A typical study on privacy-preserving cloth try-on using mobile augmented reality [16] helps bring out the cloth try-on mode of scene rendering with respect to the size attribute, and the study of 3-D rigid body tracking using vision and depth sensors [18] enhances 3-D body tracking usage and identifiers.

3 System Overview

A detailed system overview is given in the following subsections, which cover the entire concept of delivering a useful app for health screening of an individual.

3.1 Adulthood Space

The adulthood sector covers the digital anthropometry of an adult, screening health risks and giving diagnosis suggestions where applicable. A health screening test from an image is one of the novelties of our approach.


Stature Discoverer: The stature discovery section subjects the person to the AR scene, where his/her photo is captured by the camera and the height is determined. In our experiments, standing barefoot on a hard, flat surface with the back against a wall and the feet spaced just less than shoulder width apart gives accurate height results. Figure 1 shows how to place the control points for height discovery.

Weight Quester: The weight quester module takes front-view and side-view images as input to calculate the abdominal circumference and thereby determine the user's weight from the image captured in the camera scene. A reference picture is displayed to train the user in placing the control points on the captured image.

Fitness Calculators: The following fitness calculators define one's fitness status in terms of body metabolism values.

Body Mass Index (BMI): The body mass index is obtained from the anthropometric values captured in the above sections, that is, the height, weight, and gender attributes. The status of the individual based on their BMI value is then reported.

Fig. 1 Control points placed to track height


The BMI value, computed with the standard formula of weight divided by the square of height, determines the obese, normal, and underweight status.

BMR: The basal metabolic rate is calculated from attributes such as weight, height, and age; it is the number of calories the body burns at rest to maintain normal body functions. The equation below gives the BMR of an individual:

ω = (10 ∗ δ) + (6.25 ∗ λ) − (5 ∗ γ) + 5    (1)

where ω = BMR value, δ = weight, λ = height, and γ = age.

Athlete Space: The athlete space module serves an individual in the athletic world; a fitness report for playing is the motto of this module. FETTLE helps athletes monitor their fitness status through the anthropometric values obtained from their images.

Forensic Anthropometry: The forensic anthropometry module is developed for identifying criminals by comparing images. It helps the forensic department identify criminals by feeding criminal photographs into FETTLE.
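As a concrete illustration of the fitness calculators, the minimal sketch below implements the standard BMI definition and the BMR formula of Eq. (1). The function names and the WHO-style BMI bands are illustrative choices of ours, not taken from the FETTLE code base.

def bmi(weight_kg: float, height_m: float) -> float:
    """Standard body mass index: weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def bmi_status(value: float) -> str:
    """WHO-style BMI bands used to report obese/normal/underweight status (assumed cut-offs)."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    return "overweight/obese"

def bmr(weight_kg: float, height_cm: float, age_years: float) -> float:
    """Eq. (1): basal metabolic rate, calories burned at rest."""
    return (10 * weight_kg) + (6.25 * height_cm) - (5 * age_years) + 5

if __name__ == "__main__":
    w, h_cm, age = 70.0, 175.0, 30
    print(f"BMI = {bmi(w, h_cm / 100):.1f} ({bmi_status(bmi(w, h_cm / 100))})")
    print(f"BMR = {bmr(w, h_cm, age):.0f} kcal/day")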

3.2 Baby Care Space

The baby care space is developed to determine a baby's anthropometry from an image. This section acts as a baby nutritionist, assessing the baby's growth and deficiencies.

Baby Nutritionist: The baby nutritionist space determines the health status of the baby in terms of nutrition and growth. It obtains an image of the baby from the user and studies it analytically. A convolutional neural network model is presented to determine the baby's disease from an image. The CNN model triages the picture obtained from the user by comparing it against the dataset fed into it, which contains malnutrition diseases such as Rickets, Marasmus, and Kwashiorkor. The FETTLE app not only returns the name of the disease but also the probability with which the disease is present in the image.

Ideal Baby Weight: This section calculates the ideal weight of the baby during various stages of its development, from birth up to seven years. The formula below produces the expected results for both male and female babies, where δ denotes the age of the baby and λ its ideal weight:

λ = 9.5 + (2 ∗ (δ − 1))    (2)

Baby Height Predictor: Normally, a child's height is based on parental heights, subject to regression toward the mean. This means that very tall or very short parents are likely to have a taller or shorter child than average, but the child is likely to be closer to the average height than its parents. Here, δ is the father's height and γ is the mother's height in the formula below, which predicts the future height λ of the baby:

λ = ((δ + γ) + 13)/2    (3)
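A minimal sketch of Eqs. (2) and (3) follows. The function names are illustrative, and the interpretation of the +13 cm offset (the boys' variant of the common mid-parental rule) is our reading, not stated in the text.

def ideal_baby_weight(age_years: float) -> float:
    """Eq. (2): ideal weight (kg) for babies from birth to seven years."""
    return 9.5 + 2 * (age_years - 1)

def predicted_height(father_cm: float, mother_cm: float) -> float:
    """Eq. (3): mid-parental height prediction for the child (cm)."""
    return ((father_cm + mother_cm) + 13) / 2

print(ideal_baby_weight(3))        # 13.5 kg at age 3
print(predicted_height(178, 163))  # 177.0 cm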

Baby Milestone Calculator: This section assesses the child's development and growth; it also helps track the developmental landmarks the baby shows as it gets older. The results from this section are useful for identifying whether the baby is developing normally.

3.3 Immobilized Patients Anthropometry

This module is specially developed for bedridden patients, to determine their anthropometry.

Bed Ridden Patients Wing: In this section, hospitalized patients are subjected to anthropometry, making it easier to monitor their health on a regular basis. Measuring height in the critical care unit is necessary for estimating ideal body weight, and FETTLE provides a solution for estimating the height and weight of bedridden patients.

Bed Ridden Patients Height Determination: Here, our approach obtains the height of a bedridden patient from an image using the formulas below, where λ is the height of the patient, γ is the knee height of the patient, and δ is the age of the patient.

For men,

λ = [1.94 ∗ γ] − [0.14 ∗ δ] + 78.31    (4)

For women,

λ = [1.85 ∗ γ] − [0.21 ∗ δ] + 82.21    (5)

Knee height measurement is also illustrated in Fig. 2.

Fig. 2 Knee height measurement from critically ill patients
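A minimal sketch of the knee-height estimates of Eqs. (4) and (5), assuming heights in centimetres and age in years; the function name is illustrative.

def bedridden_height_cm(knee_height_cm: float, age_years: float, male: bool) -> float:
    """Estimate standing height for an immobilized patient from knee height.

    Eq. (4), men:   1.94*knee - 0.14*age + 78.31
    Eq. (5), women: 1.85*knee - 0.21*age + 82.21
    """
    if male:
        return 1.94 * knee_height_cm - 0.14 * age_years + 78.31
    return 1.85 * knee_height_cm - 0.21 * age_years + 82.21

print(round(bedridden_height_cm(52, 70, male=True), 2))  # 169.39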


3.4 Health Trends

Based on the overall anthropometric results obtained, an individual can track his health status against current health trends in this section. A detailed report of an individual user of the app is compared with the current health trends.

4 High Level Implementation Architecture

The high-level system architecture is detailed in Fig. 3. The user logs in to the app (new-user registration is also available) and thereby gets access to FETTLE. Once login is successful, the user enters the health screening section.

5 Methodology

Anthropometric information from the human body, in terms of circumference, skin fold, and width, is obtained in our approach by applying a transfer learning methodology to the values extracted from an image.

Algorithm to Determine the Adulthood Module
Input: Image captured from the camera or picked from the photo library.
Output: Anthropometric values along with health screening results.
Step 1 Capture the camera scene view.
Step 1.1 Scenes are organized in a tree-like structure called the scene graph.
Step 1.2 The center of the graph is a root node, which defines the scene coordinate system.
Step 1.3 Nodes represent positions in the scene to which elements can be attached, such as lighting or geometries, which are the visible elements of the scene.
Step 2 Encapsulate the real-world location.
Step 3 Each node in a scene carries a transformation matrix relative to the root node. It contains information such as the position, rotation, and scale of the node relative to its parent.
Step 4 Observe the physics body in the camera scene.
Step 5 Implement the action.
Step 6 Apply the transfer learning methodology to determine accurate results.

Playing with the Camera Scene

Fig. 3 FETTLE app architecture diagram

Table 1 Tech stack version details

Technical details     | Version
Programming language  | Objective C 2.0, Swift 5.0, Python 3.7.4
Framework             | Core ML 3.0, ARKIT 3.0, Keras 2.3.0, Tensor Flow 2.0.0
Platform              | iOS 13.1
XCode                 | 10.1
Mac OS X              | 10.14.6

Step 1 The location and orientation of the camera are stored in a transform matrix.
Step 2 Transform matrix: a rectangular array of numbers.
Step 3 Scene vector: a new three-component vector created from individual component values.

Finding a Real-World Location
Step 1 The camera's center point is determined from the scene view's point of view.
Step 2 The camera's location takes the three component values cameraTransform.m41, cameraTransform.m42, and cameraTransform.m43.
Step 3 The camera's orientation takes three component values from the camera transform matrix.

Playing with the Physics Body in the Camera Scene
Step 1 Observe the physics body in the camera scene and track the object with respect to six degrees of freedom (6DOF).
Step 2 Hit testing is done to capture any other interesting features caught on the track.
Step 3 The detected feature points are then analyzed to determine the required object in the scene.

The detailed tech stack is summarized in Table 1.
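To make the transform bookkeeping above concrete, the following is an illustrative linear-algebra sketch in plain numpy (not ARKit code, which is Swift/Objective-C only): in a SceneKit-style 4 x 4 transform, the m41-m43 entries hold the translation, so the camera's world position can be read straight out of the matrix. The numbers are made up.

import numpy as np

# 4x4 transform: rotation/scale in the upper-left 3x3, translation in the fourth row
# when stored the way SceneKit exposes it (m41, m42, m43).
camera_transform = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.2, 1.5, -0.3, 1.0],   # m41, m42, m43: world-space position
])

position = camera_transform[3, :3]       # (m41, m42, m43)
orientation = -camera_transform[2, :3]   # the camera looks down its -z axis

print("camera position:", position)
print("camera orientation:", orientation)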

6 Experimental Results

The baby malnutrition detector studies images with the help of an image classifier customized to predict malnutrition diseases. Python code is used to train the CNN model, named "FETTLE malnutrition.h5", on images of malnutrition diseases. The obtained "FETTLE malnutrition.h5" model is then converted to "FETTLE malnutrition.mlmodel" for use in the iOS Core ML code, which predicts the result on a single click of a button on iPad/iPhone devices.

Fig. 4 Turi create visualization results

Figure 4 shows the results from the Turi Create visualization, and Fig. 5 details the training accuracy of the ResNet classifier. The simulator output from an iPhone 8 device is depicted in Figs. 6 and 7, which detect malnutrition diseases such as Marasmus and Kwashiorkor. Other diseases, such as Rickets and other undernutrition diseases of babies, can also be detected by this approach.
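A hedged sketch of this train-and-convert pipeline is given below. The layer sizes, directory layout, and class list are assumptions made for illustration (the paper itself reports a ResNet-50 classifier built through Turi Create), and the Core ML conversion API varies across coremltools versions, so that step is shown only as a comment.

import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["marasmus", "kwashiorkor", "rickets"]  # assumed label set

# Small illustrative CNN (a stand-in for the ResNet-50 backbone).
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: one subfolder per disease class
# (image_dataset_from_directory requires TF 2.3 or later).
train = tf.keras.preprocessing.image_dataset_from_directory(
    "malnutrition_images/", image_size=(224, 224))
model.fit(train, epochs=10)
model.save("FETTLE malnutrition.h5")

# Conversion step (exact API depends on the coremltools version):
# import coremltools as ct
# mlmodel = ct.convert("FETTLE malnutrition.h5")
# mlmodel.save("FETTLE malnutrition.mlmodel")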

7 Performance Analysis

Our approach uses indoor scene understanding that is efficient in terms of both memory and computational time. The following are the factors by which we determine the performance of the FETTLE app.

Numerical Calculations: This test measured the time to calculate a number to ten thousand decimal places, for which our app returns a good result.

Network Support: This test measures the reachability of the network in both online and offline modes of the app. FETTLE yields a good result in this experiment as well.

Image Downloading Speed: This is an important and essential test for our approach, since many of our modules handle images. Since we used a ResNet-50 CNN as the image classifier, the overall app size is quite good compared with other classifiers.

Computational Power: The FETTLE app yields good results in terms of computational power, since it uses many of the iOS native plugins and frameworks; the use of ARKIT and COREML is one such example of obtaining better performance.

Fig. 5 Training accuracy of baby malnutrition detector



Fig. 6 Malnutrition detector determining Marasmus disease

8 Conclusion

We presented a deep convolutional neural network, used in a transfer learning methodology, for identifying human anthropometry scientifically. The main motivation behind the FETTLE app was the need for an efficient algorithm for determining anthropometry, so as to enable health screening with the help of mobile cameras. The result is a digital health guide that helps each of us identify health risks proactively before seeking a medical practitioner.


Fig. 7 Malnutrition detector determining Kwashiorkor disease

References

1. Preethi R, Farhana R (2016) Human attributes quantification from a 2D image using hale CANVAS app. Int J Innov Res Sci Eng Technol 5(3):4101–4105
2. Preethi R, Farhana R (2016) Deracinating deets from an image using FETTLE. In: Proceedings of 35th IRF international conference, pp 6–8
3. Giachetti A, Lovato C, Piscitelli F, Milanese C, Zancanaro C (2015) Robust automatic measurement of 3D scanned models for the human body fat estimation. IEEE J Biomed Health Inform 19(2):660–667
4. Jiang M, Guo G (2019) Body weight analysis from human body images. IEEE Trans Inf Forensics Secur 14(10):2676–2688
5. de Oliveira Rente P, Brites C, Ascenso J, Pereira F (2018) Graph-based static 3D point clouds geometry coding. IEEE Trans Multimedia 21(2):284–299
6. Zhang Y, Luo X, Yang W, Yu J (2019) Fragmentation guided human shape reconstruction. IEEE Access 7:45651–45661
7. Zhao T, Li S, Ngan KN, Wu F (2018) 3-D reconstruction of human body shape from a single commodity depth camera. IEEE Trans Multimedia 21(1):114–123
8. Cheng ZQ, Chen Y, Martin RR, Wu T, Song Z (2018) Parametric modeling of 3D human body shape—a survey. Comput Graph 71:88–100


9. Tsitsoulis A, Bourbakis NG (2015) A methodology for extracting standing human bodies from single images. IEEE Trans Human-Mach Syst 45(3):327–338
10. Li S, Lu H, Shao X (2014) Human body segmentation via data-driven graph cut. IEEE Trans Cybern 44(11):2099–2108
11. Li S, Lu H (2011) Arbitrary body segmentation with a novel graph cuts-based algorithm. IEEE Signal Process Lett 18(12):753–756
12. Prisacariu VA, Kähler O, Murray DW, Reid ID (2014) Real-time 3D tracking and reconstruction on mobile phones. IEEE Trans Visual Comput Graph 21(5):557–570
13. Azhar F, Tjahjadi T (2014) Significant body point labeling and tracking. IEEE Trans Cybern 44(9):1673–1685
14. Wald J, Tateno K, Sturm J, Navab N, Tombari F (2018) Real-time fully incremental scene understanding on mobile platforms. IEEE Robot Autom Lett 3(4):3402–3409
15. Tong J, Zhou J, Liu L, Pan Z, Yan H (2012) Scanning 3D full human bodies using Kinects. IEEE Trans Visual Comput Graph 18(4):643–650
16. Sekhavat YA (2016) Privacy preserving cloth try-on using mobile augmented reality. IEEE Trans Multimedia 19(5):1041–1049
17. Dey A, Jarvis G, Sandor C, Reitmayr G (2012) Tablet versus phone: depth perception in handheld augmented reality. In: 2012 IEEE international symposium on mixed and augmented reality (ISMAR). IEEE, pp 187–196
18. Gedik OS, Alatan AA (2013) 3-D rigid body tracking using vision and depth sensors. IEEE Trans Cybern 43(5):1395–1405
19. Sukno FM, Waddington JL, Whelan PF (2014) 3-D facial landmark localization with asymmetry patterns and shape regression from incomplete local features. IEEE Trans Cybern 45(9):1717–1730
20. Liu Z, Huang J, Bu S, Han J, Tang X, Li X (2016) Template deformation-based 3-D reconstruction of full human body scans from low-cost depth cameras. IEEE Trans Cybern 47(3):695–708
21. Song D, Tong R, Du J, Zhang Y, Jin Y (2018) Data-driven 3-D human body customization with a mobile device. IEEE Access 6:27939–27948
22. Fu Y, Guo G, Huang TS (2010) Age synthesis and estimation via faces: a survey. IEEE Trans Pattern Anal Mach Intell 32(11):1955–1976
23. Chen Y, Cheng ZQ, Lai C, Martin RR, Dang G (2015) Realtime reconstruction of an animating human body from a single depth camera. IEEE Trans Visual Comput Graph 22(8):2000–2011
24. Cui PF, Yu Y, Lu WJ, Liu Y, Zhu HB (2017) Measurement and modeling of wireless off-body propagation characteristics under hospital environment at 6–8.5 GHz. IEEE Access 5:10915–10923
25. Edelman G, Alberink I (2010) Height measurements in images: how to deal with measurement uncertainty correlated to actual height. Law Probab Risk 9(2):91–102
26. Juang CF, Chang CM, Wu JR, Lee D (2008) Computer vision-based human body segmentation and posture estimation. IEEE Trans Syst Man Cybern Part A Syst Humans 39(1):119–133

Chapter 11

Analysis and Redesign of Digital Circuits to Support Green Computing Through Approximation Sisir Kumar Jena, Saurabh Kumar Srivastava, and Arshad Husain

1 Introduction

Computing devices, from smartphones to powerful supercomputers, clusters, and even grids, have changed the way humans work on their problems. These devices now act as essential equipment in our daily routine. But when these technological revolutions are considered in terms of environment-friendliness, we face a detrimental dilemma. The massive power consumption of these devices places a heavy burden on the power grid, and from production to disposal they pose several disadvantages to the environment. The Green Revolution, initially an effort to increase agricultural production in developing countries [1], has in the computing industry become known as Green Computing (GC). GC refers to the research and development of computing devices that do not pose any adverse impact on the environment. Energy consumption is one such issue that comes under GC, and several researchers are trying to develop computing systems that consume less power and energy. Thermal energy consumption poses the biggest challenge in computing devices, whether a device runs on battery power or draws energy from a standard power grid. To employ "greenness," researchers in several fields such as big data [2], many-core systems [3], communications [4], cloud computing [5], and IoT [6] are trying to build methodologies that help reduce energy consumption. There are very few contributions at the logic circuit level, and these are termed low-power designs. We mainly focus on inexact circuit design, where errors are deliberately allowed in exchange for energy minimization [7].

S. K. Jena (B) · S. K. Srivastava · A. Husain
DIT University, Dehradun, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_11


Fig. 1 Proposed method (flowchart: given a gate-level netlist and an error margin (EM), analyze the circuit and identify the candidate gate, store the peak power, remove the candidate gate and redesign to produce Green ICs, perform peak power analysis, and obtain the result for sample inputs; while the obtained result is within the error margin the loop repeats, otherwise it stops)

Fig. 2 a Weight calculation using Rule-1, b weight calculation using Rule-2


Fig. 3 Analysis, identifying CG and redesign after CG removal

This paper contributes an analysis of digital circuits and proposes a technique to redesign them with the intent of supporting green computing. We use a gate-level netlist to carry out the study. The objective is to remove components (in terms of gates) from the primary circuit and propose a new, modified circuit that consumes less power as well as area. In the electronics industry, such modified circuits are referred to as approximate circuits [8]. The only problem with this kind of circuit is that it does not produce the exact result; rather, it produces an approximate result (also known as a good-enough result). Because of the error-resilient property of several applications, these approximate results are quite useful; for instance, applications such as machine learning and image and video processing are error-resilient in nature. The circuit redesigned with our proposed method not only contributes toward green computing but can also be used in devices running error-resilient applications.


Fig. 4 Analysis of C17 circuit and generation of C17 Green ICs


Fig. 5 Peak power analysis of several ISCAS'85 benchmark circuits (bar chart comparing the peak power of each original circuit with that of its Green OIC; the underlying values are listed in Table 3)

Table 1 Result and ES of example circuit (Rg1 and ES_3b refer to the circuit of Fig. 3b; Rg2 and ES_3c to the circuit of Fig. 3c)

x3 x2 x1 x0 | R  | Rg1 | ES_3b | Rg2 | ES_3c
0  1  0  1  |  1 |  1  |  0    |  1  |  0
0  1  1  1  | 15 | 15  |  0    | 15  |  0
1  0  0  1  | 33 | 33  |  0    | 33  |  0
1  0  1  0  | 36 | 36  |  0    | 32  |  4
1  1  0  0  | 48 | 48  |  0    | 48  |  0
1  1  1  0  | 60 | 60  |  0    | 56  |  4


Table 2 Result analysis of C17 circuit and its versions

x4 x3 x2 x1 x0 | IC: R | GIC1: Rg1 ES1 | GIC2: Rg2 ES2 | GIC3: Rg3 ES3
0  0  0  0  0  |  0    |  1  1         |  0  0         |  2  2
0  0  0  0  1  |  2    |  3  1         |  2  0         |  2  0
0  0  0  1  0  |  0    |  1  1         |  0  0         |  2  2
0  0  0  1  1  |  2    |  3  1         |  2  0         |  2  0
0  0  1  0  0  |  0    |  1  1         |  0  0         |  2  2
0  0  1  0  1  |  2    |  3  1         |  2  0         |  2  0
0  0  1  1  0  |  0    |  1  1         |  0  0         |  2  2
0  0  1  1  1  |  0    |  1  1         |  0  0         |  2  2
0  1  0  0  0  |  3    |  3  0         |  2  1         |  0  3
0  1  0  0  1  |  3    |  3  0         |  2  1         |  2  1
0  1  0  1  0  |  3    |  3  0         |  2  1         |  0  3
0  1  0  1  1  |  3    |  3  0         |  2  1         |  2  1
0  1  1  0  0  |  3    |  3  0         |  2  1         |  0  3
0  1  1  0  1  |  3    |  3  0         |  2  1         |  2  1
0  1  1  1  0  |  0    |  1  1         |  0  0         |  0  0
0  1  1  1  1  |  0    |  1  1         |  0  0         |  0  0
1  0  0  0  0  |  0    |  1  1         |  1  1         |  3  3
1  0  0  0  1  |  2    |  3  1         |  3  1         |  3  1
1  0  0  1  0  |  0    |  1  1         |  1  1         |  3  3
1  0  0  1  1  |  2    |  3  1         |  3  1         |  3  1
1  0  1  0  0  |  1    |  0  1         |  1  0         |  3  2
1  0  1  0  1  |  3    |  2  1         |  3  0         |  3  0
1  0  1  1  0  |  1    |  0  1         |  1  0         |  3  2
1  0  1  1  1  |  1    |  0  1         |  1  0         |  3  2
1  1  0  0  0  |  3    |  3  0         |  3  0         |  1  2
1  1  0  0  1  |  3    |  3  0         |  3  0         |  3  0
1  1  0  1  0  |  3    |  3  0         |  3  0         |  1  2
1  1  0  1  1  |  3    |  3  0         |  3  0         |  3  0
1  1  1  0  0  |  3    |  2  1         |  3  0         |  1  2
1  1  1  0  1  |  3    |  2  1         |  3  0         |  3  0
1  1  1  1  0  |  1    |  0  1         |  1  0         |  1  0
1  1  1  1  1  |  1    |  0  1         |  1  0         |  1  0


Table 3 Peak power analysis

Circuit | Total gates | Input lines | Output lines | Vector | PP (mW) | Green OIC PP (mW) | % Gain
C17     | 6           | 5           | 2            | 26     | 0.45    | 0.35              | 22
C432    | 160         | 36          | 7            | 89     | 11.93   | 8.94              | 25
C499    | 202         | 41          | 32           | 143    | 17.83   | 11.41             | 36
C880    | 383         | 60          | 26           | 111    | 17.56   | 11.23             | 36
C1355   | 546         | 41          | 32           | 143    | 17.83   | 11.41             | 36
C1908   | 880         | 33          | 25           | 177    | 22.75   | 14.10             | 38
C2670   | 1193        | 233         | 140          | 95     | 31.20   | 19.03             | 39
C3540   | 1669        | 50          | 22           | 304    | 39.09   | 28.13             | 28
C5315   | 2307        | 178         | 123          | 175    | 82.87   | 58.83             | 29
C6288   | 2406        | 32          | 32           | 82     | 115.97  | 78.85             | 32
C7552   | 3512        | 207         | 108          | 337    | 106.17  | 72.19             | 32

The overall idea of our proposed technique is based on an Error Margin (EM). We first identify the Candidate Gate (CG) to remove from the circuit; the more gates we remove from the original circuit, the more we save in energy as well as area. Removing gates affects the quality of the result (QoR), so our objective is to keep the result within the EM. We assign a weight (a significance value) to each gate of the given netlist and determine the CG based on this weight: the gate with the smallest weight is treated as the CG for removal. After removing the CG, a set of sample test patterns is applied. If the result is within the EM, the peak power consumption is noted and compared with the original circuit's peak power. The removal process continues as long as the result remains within the EM, and on each removal the peak power consumption of the circuit is noted down. A comparison of the entire experiment is shown in the experimental section. According to the experiments, our approach can reduce power consumption by 25–40% at the expense of a negligible loss in QoR.

In summary, the contributions of this paper include the following:
• We propose a novel circuit analysis and design idea that reflects the effectiveness of approximate computing and contributes toward Green Computing.
• The peak energy consumption during testing of the circuit is captured, and a comparison is shown. It has been observed that 25–40% of the energy can be saved using our proposal.

The remainder of the paper is organized as follows. Section 2 describes related work in the fields of green computing and approximate computing. Section 3 presents the proposed methodology for achieving greenness in designing digital circuits. Section 4 shows the experimental results on energy consumption compared with the original circuits, and finally, Sect. 5 concludes the discussion.


2 Previous Works and Rationale of Our Proposal

"Green Computing (GC)," as the name suggests, refers to computing techniques that do not pose any adverse effect on the environment. Energy consumption is one such area that supports GC. Green IT infrastructure refers to the power consumed by data centers and by the technical equipment used on a campus [10]. For cloud computing, a technique such as the energy-efficient cloud is proposed in [11]. Other areas where the green computing concept has been employed include big data, IoT, data centers, communications, and many-core systems, as described in [2–7]. This paper focuses on the design of digital circuits, the basic building blocks of any computing device.

3 Proposed Method

In this section, we discuss the proposed methodology using the flowchart shown in Fig. 1. To analyze a circuit, we need its gate-level representation and the Error Margin (EM) of that circuit. The EM indicates an absolute numeric value up to which the output of the circuit is acceptable. The value of the EM depends on the application in which the proposed circuit is to be used. For example, if the redesigned circuit is used in an image processing application, the EM may be ±5, because adding or subtracting 5 from a pixel value does not deteriorate the image's clarity; even if it slightly modifies the image, it does not affect human perception of the image's clarity.

The next step of our approach is to analyze the circuit and identify the Candidate Gate (CG). This is done by assigning a weight to each gate of the given circuit. Every circuit is assumed to have a set of input lines (X_i) at the top and a set of output lines (Y_i) at the bottom. Weight assignment begins from the output lines, right to left, with the values 2^n, where n varies from 0 to the total number of output lines minus 1. A gate that connects directly to an output line gets the weight of that output line. The gates above Level 0 (Level 1 onwards) get their weight from the weights of the gates below them, using either of the following rules; a short code sketch follows them.

Rule-1 If the output line of a gate at level i (i > 0) is not branched (no fanout) and is connected to only one gate at level j (i > j), then the weight of the gate at level j becomes the weight of the gate at level i.

Rule-2 If the output line of a gate at level i (i > 0) is branched (fanout) and connected to two or more gates at level j (i > j), then the sum of the weights of all those gates at level j becomes the weight of the gate at level i.
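The sketch below is a direct, illustrative reading of these two rules in Python. The netlist representation (a dict mapping each gate to the gates its output feeds) is our own toy structure, not the paper's tool: a gate's weight is the sum of the weights of the gates, or of the primary output, that it drives.

def assign_weights(drives, output_weight):
    """drives: gate -> list of gates it feeds ([] means it feeds a primary output).
    output_weight: gate -> weight (2**n) of the primary output it drives."""
    weights = {}

    def weight(gate):
        if gate in weights:
            return weights[gate]
        if not drives[gate]:                  # level-0: takes its output line's weight
            w = output_weight[gate]
        else:                                 # Rule-1 (one successor) / Rule-2 (fanout)
            w = sum(weight(g) for g in drives[gate])
        weights[gate] = w
        return w

    for g in drives:
        weight(g)
    return weights

# Toy circuit: g2 fans out to g0 and g1, which drive outputs y0 (2^0) and y1 (2^1).
drives = {"g0": [], "g1": [], "g2": ["g0", "g1"]}
weights = assign_weights(drives, {"g0": 1, "g1": 2})
candidate_gate = min(weights, key=weights.get)   # lowest weight -> CG for removal
print(weights, "CG:", candidate_gate)            # {'g0': 1, 'g1': 2, 'g2': 3} CG: g0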


Figure 2 shows how the weight of a gate is calculated using the rules explained above. Figure 3a shows an example circuit for which our approach generates the weight of each gate. Starting with y0, the output lines are assigned the values 2^0, 2^1, and so on, moving toward y5. The level-0 gates (connected directly to the output lines) use these values as their weights. The weight of a higher-level gate is calculated using either Rule-1 or Rule-2; for instance, the weight of the OR gate (shown in black) is 6, obtained by adding the weights of its immediate descendant gates. After the weight assignment is over, we can easily identify the CG: in our approach, the gate with the lowest weight is the CG. For instance, in the original circuit (Fig. 3a), the OR gate marked with the red star has the lowest weight value and is identified as the CG.

The next step is to remove the CG and redesign the circuit. Redesigning is necessary because, after removing the CG, we must decide how to connect the inputs of the CG to its output. This is called the CIO (connect input-output) decision problem, and the example below demonstrates the complications associated with connecting an input line of the CG to its output line.

Example Consider the example circuit shown in Fig. 3a. The gate marked with a red star has the lowest significance and is recognized as the CG to be removed from the circuit. Removal of this gate leads to the CIO decision problem. Notice that there are two input nets (Net1 and Net2) and an output net (Net3) connected to the gate. There are two possible ways to connect an input net to the output net for our example circuit: (i) remove the gate and connect Net1 to Net3, or (ii) remove the gate and connect Net2 to Net3. In our case, we always remove Net2 (the left input net) and connect Net1 (the right input net) to the output net (Net3). The CIO decision is designer-specific and has very little impact on the approach presented in this paper.

Our objective is to remove the maximum number of gates under the constraint that the average error must be within the EM. The constraint ensures that our circuit does not produce an unacceptable result during normal operation. Gate removal is a continuous process that runs as long as the result produced by the circuit stays within the EM. We use an iterative method, removing one gate at a time. For instance, in the original circuit (Fig. 3a), the OR gate marked with the red star has the lowest weight and is pruned in the first iteration. Figure 3b shows the result of iteration 1 after the removal of one gate, redesigned with updated weights for each gate. Similarly, Fig. 3c shows the redesigned circuit after four gates have been removed in iteration 4. The new circuit generated in each iteration consumes less energy than its predecessor; we name these intermediate circuits Green Intermediate Circuits (Green ICs). Several Green ICs are produced during the entire process. The next step is to test the Green ICs with sample input patterns and compare against the golden result: a circuit is acceptable only if the result obtained by the Green IC is within the EM. We use the following notations to define our measures:


C: the given gate-level netlist
EM: error margin
GIC_i: the Green IC obtained after the removal of a CG
T_e: set of exhaustive input patterns; if the circuit has n input lines, then T_e contains 2^n input patterns
T_s: sample input pattern set, where T_s ⊂ T_e
T_s^j: a test pattern in T_s, i.e., T_s^j ∈ T_s
R: result obtained from the IC
R_gi: result obtained from GIC_i

Let x_i and y_i represent an input line and an output line of a gate-level netlist, respectively. The result R (of the given gate-level netlist) and R_gi for a specific input pattern are calculated using Eq. 1:

ϑ = y_{n−1}·2^{n−1} + y_{n−2}·2^{n−2} + · · · + y_0·2^0    (1)

where ϑ is replaced by either R or R_gi. The severity of error is estimated by three well-known measures defined in [9]: (i) Error Significance (ES), the maximum amount by which the output of a circuit deviates from the corresponding error-free output; (ii) Error Rate (ER), the fraction of test patterns that produce erroneous output; and (iii) Error Accumulation (EA), the change in error rate over time. In our case, ES is used as the quantification measure and is calculated using the formula in Eq. 2:

∀T_s^j ∈ T_s:  ES(GIC_i) = |R − R_gi|    (2)

After finding the ES for each input pattern applied to a specific Green IC, the next task is to compute the average error significance ES_avg using Eq. 3 (an example illustrating the calculation of the measures follows):

ES_avg(GIC_i) = ( Σ_{T_s^j ∈ T_s} |R − R_gi| ) / |T_s|    (3)

Example Three circuits are shown in Fig. 3. The circuit in Fig. 3b is obtained from the circuit in Fig. 3a by removing the CG, as discussed above; similarly, circuit (c) is obtained by iterating the same removal process. The output of the circuits shown in Fig. 3a, b is the same, (y5, y4, y3, y2, y1, y0) = (111100), for the input (x3, x2, x1, x0) = (1110). The result R of this circuit is obtained using Eq. 1: R = 1·2^5 + 1·2^4 + 1·2^3 + 1·2^2 + 0·2^1 + 0·2^0 = 60. Similarly, R_gi of the circuit shown in Fig. 3b is obtained using the same equation, and R_gi = 60. Notice that, irrespective of the number of gates, both circuits produce the same output for the input (1110); the ES for this input is 0 (ES = 60 − 60 = 0). The circuit shown in Fig. 3c, however, produces a different output, (y5, y4, y3, y2, y1, y0) = (111000), with R_gi = 56; hence the ES in this case is 60 − 56 = 4. Likewise, we tested all the circuits with a set of inputs, and Table 1 summarizes the results along with the calculated ES. The average error significance of our example circuit can then be calculated from the results in Table 1 using Eq. 3.


As discussed, ES_avg is calculated for each Green IC separately. So, for our example circuit, we calculate ES_avg of the circuits shown in Fig. 3b, c separately:
• ES_avg of Fig. 3b: since the ES of this circuit is 0 for every input, ES_avg is also 0.
• ES_avg of Fig. 3c: ES_avg = (0 + 0 + 0 + 4 + 0 + 4)/6 = 1.33.

The next step of our approach is to obtain the Optimal Green Circuit (Green OIC). A Green IC is called optimal if it contains the minimum number of gates compared with the other Green ICs and the result produced by the circuit is within the EM. In summary, a Green IC is optimal if it satisfies the following conditions:

Condition 1 It must contain the minimum number of gates compared with the other Green ICs generated during the entire process.

Condition 2 The result produced by the circuit must be within the error margin (EM) supplied by the designer of the circuit.

Obviously, a circuit containing fewer gates consumes less power than one containing more gates. We know that a circuit consumes more power during testing than during normal operation; hence, we stored the peak power consumption during testing of these circuits and compared it with the original one. We found that the Green OIC consumes 25–40% less energy.
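A minimal sketch of Eqs. (1)–(3): decode output bit vectors into integers and average the absolute deviations over the sample patterns. The bit-vector representation of the circuit outputs is our own illustrative choice.

def decode(bits):
    """Eq. (1): value = y_{n-1}*2^(n-1) + ... + y_0*2^0, bits given as (y_{n-1}, ..., y_0)."""
    return sum(b << i for i, b in enumerate(reversed(bits)))

def es_avg(exact_outputs, green_outputs):
    """Eqs. (2)-(3): mean of |R - Rg| over all sample patterns."""
    errors = [abs(decode(r) - decode(rg))
              for r, rg in zip(exact_outputs, green_outputs)]
    return sum(errors) / len(errors)

# The (1110) row of the worked example: R = 111100 -> 60, Rg = 111000 -> 56.
print(decode([1, 1, 1, 1, 0, 0]))                          # 60
print(es_avg([[1, 1, 1, 1, 0, 0]], [[1, 1, 1, 0, 0, 0]]))  # 4.0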

4 Experimental Evaluation

In this section, we apply our technique to some benchmark circuits, starting with a case study. In the case study, we took the C17 circuit from the ISCAS'85 benchmark. This circuit has five input lines, two output lines, and six NAND gates. Figure 4 shows the Green IC generation process, in which three versions of the Green ICs are formed, each having one gate fewer than its previous version. Table 2 summarizes the detailed input, output, ES, and ES_avg for all the circuits. In this case, we define the EM value as 1.0. In summary, the average error significance of the Green ICs is as follows:
• ES_avg(GIC1) = 0.68
• ES_avg(GIC2) = 0.3125
• ES_avg(GIC3) = 1.3125


After analyzing the results summarized in Table 2, we found that the Green IC GIC2 is our optimal circuit, as it satisfies the two conditions described in Sect. 3. Further, we analyzed and recorded the peak power consumption of several ISCAS'85 benchmark circuits. The peak power (PP) is measured in mW. To record the PP, we coded each circuit in Verilog in the Vivado Xilinx environment, applied the test vectors to each circuit, and noted the PP for both the original circuit and the generated optimal Green IC. We found that the redesigned circuit requires 25–40% less power than the original circuit. Figure 5 shows the comparison graph for both categories of circuit.

5 Conclusion

In this paper, we propose a novel technique to analyze existing fundamental circuits and redesign them with a smaller number of components. While still producing acceptable results, the technique reduces energy consumption by 25–40%. As Green Computing demands reduced energy consumption, our proposal satisfies that demand. The technique of this paper is applied to combinational circuits; in future work, we will apply it to sequential circuits. Our approach uses the concept of approximate computing, where QoR is a trade-off between accuracy and power consumption.

References

1. Gaud WS (1968) The green revolution: accomplishments and apprehensions. No. REP-11061. CIMMYT
2. Wu J, Guo S, Li J, Zeng D (2016) Big data meet green challenges: greening big data. IEEE Syst J 10(3):873–887
3. Kudithipudi D, Qu Q, Coskun AK (2013) Thermal management in many core systems. In: Evolutionary based solutions for green computing. Springer, Berlin, Heidelberg, pp 161–185
4. Mowla M, Munjure Md, Ahmad I, Habibi D, Phung QV (2017) A green communication model for 5G systems. IEEE Trans Green Commun Netw 1(3):264–280
5. Qiu C, Shen H, Chen L (2018) Towards green cloud computing: demand allocation and pricing policies for cloud service brokerage. IEEE Trans Big Data 5(2):238–251
6. Arshad R, Zahoor S, Shah MA, Wahid A, Yu H (2017) Green IoT: an investigation on energy saving practices for 2020 and beyond. IEEE Access 5:15667–15681
7. Xu W, Sapatnekar SS, Hu J (2018) A simple yet efficient accuracy-configurable adder design. IEEE Trans Very Large Scale Integr (VLSI) Syst 26(6):1112–1125
8. Jena SK, Biswas S, Deka JK (2019) Systematic design of approximate adder using significance based gate-level pruning (SGLP) for image processing application. In: International conference on pattern recognition and machine intelligence. Springer, Cham, pp 561–570
9. Breuer MA (2004) Intelligible test techniques to support error-tolerance. In: 13th Asian test symposium. IEEE, pp 386–393
10. Murugesan S (2008) Harnessing green IT: principles and practices. IT Prof 10(1):24–33
11. Berl A, Gelenbe E, Di Girolamo M, Giuliani G, De Meer H, Dang MQ, Pentikousis K (2010) Energy-efficient cloud computing. Comput J 53(7):1045–1051

Chapter 12

Similarity-Based Data-Fusion Schemes for Missing Data Imputation in Univariate Time Series Data S. Nickolas and K. Shobha

1 Introduction

Missing values are a serious problem when further analysis and processing depend on a complete data set, so they must be handled properly whenever a complete data set is necessary for forecasting or prediction. Most real-time data sets from various domains, such as climate observation [1], the energy industry [2], sensor data, and finance, are time series data. A close look at the data shows that missing data are present in the majority of applications where data is measured and logged. The reasons for missing values are many: human data entry errors, non-measured data, transmission errors, equipment errors, sensor malfunction, device malfunction, communication errors, and markets being shut for a day. Numerous research works on missing value treatment have been in progress for many years [3]. Based on the literature, missing data can be handled in two ways. (a) Ignoring attributes: this is a good option when the percentage of missing values is small, but it fails if ignoring attributes introduces a large bias or the entities have a high impact on the analysis. (b) Imputation: the process of replacing missing values with credible values, which overcomes the drawback of ignoring attributes. 'Mean' and 'Mode' imputation are the simplest methods for imputing numerical and qualitative missing values, respectively. Their drawback is that they ignore the interrelationships among the elements of a data set, while the majority of forecasting algorithms rely on the existence of such interrelationships. It is therefore important to consider the interrelationships while imputing missing values, since doing so introduces less bias into the imputed data.

S. Nickolas (B) · K. Shobha
Department of Computer Applications, National Institute of Technology, Tiruchirappalli, Tamilnadu 620015, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_12


Some of the popular non-time-series imputation techniques considered in the literature are hot deck [4], multiple imputation [3], Expectation Maximization (EM) [5], nearest neighbor methods, clustering, and a particle swarm optimization-based imputation algorithm [6]. Rahman and Islam [7] proposed an algorithm for the imputation of both categorical and numerical missing values based on Random Tree and Random Forest. Rahman et al. [8] proposed an imputation algorithm based on co-appearances, correlations, and similarity of attributes. References [9, 10] proposed imputation algorithms based on weighted nearest neighbors and neural networks, respectively.

Imputation in time series data needs special consideration of time dependencies to be effective, unlike the consideration of covariates in multivariate data sets. Numerous research works on time series and longitudinal data focus solely on multivariate data sets [11–13]. Junninen et al. [14] consider algorithms for univariate imputation but do not consider the time series aspect. Many other single- and multiple-imputation techniques, such as imputation based on Random Forest, maximum likelihood estimation, Expectation Maximization, and predictive mean matching, do not consider the time series characteristic of the data set and hence are not suitable for univariate time series data. Algorithms such as last observation carried forward, next observation carried forward [15], arithmetic smoothing, and linear interpolation [16] do consider the time series, but their imputed values deviate considerably from the original data series. Considering the drawbacks of the existing imputation algorithms and the importance of imputation in univariate time series data, this paper proposes an imputation method for univariate time series data.

2 Related Work

The demand for complete data in time series prediction has paved the way for numerous research efforts in time series data imputation. Existing time series imputation algorithms, such as splines, moving averages, and interpolation methods, estimate and fill a missing value using forward or backward observations; hence, these algorithms achieve less accurate results when the data set has a large number of missing values. The authors of [5] proposed a time series imputation algorithm based on the Expectation Maximization (EM) algorithm. The authors of [17, 18] proposed iterative imputation algorithms combining EM with a Kalman filter, and EM with PCA and variational Bayes methods. The authors of [19] proposed an algorithm based on the combination of residual short paths with graph-based temporal dependencies.

Numerous works on multivariate time series imputation are available in the literature. The authors of [20] proposed the multivariate imputation algorithm DynaMMo, which models hidden patterns of observation sequences. The authors of [21] proposed MICE, a sequential linear regression method for multivariate imputation; this algorithm regresses a variable with missing values on the other available variables and draws values from the posterior predictive distribution to replace the missing value. The authors of [22] proposed an autoregressive (AR) algorithm to address time series prediction with missing values, under the assumption that missing values can be represented in autoregressive form from previous non-missing and missing values. Some of the existing univariate imputation techniques are interpolation [16], last observation carried forward (Locf) [15], Kalman smoothing [15], and Mean [15].

All the multivariate algorithms listed above work significantly well by considering inter-element relationships. In the case of univariate time series data, however, no additional attribute elements are present, so it is difficult to employ the existing methods directly. Effective univariate algorithms need to use the time series itself as one of the characteristics; univariate time series data therefore has to be treated sensibly, with imputation algorithms designed and developed specifically for univariate characteristics.

Missing Type Pattern
Understanding the causes of missing data and the distribution of the missing data type helps in selecting an appropriate imputation algorithm. Missing data types fall into three categories: (a) missing completely at random (MCAR), (b) missing at random (MAR), and (c) not missing at random (NMAR) [23, 24]. Given a data set with missing values, identifying and assigning missing values to a particular category is difficult, since the underlying missing mechanisms are unknown. Identifying MAR and NMAR patterns needs domain knowledge and manual analysis, whereas the t-test and chi-square test can be used to detect MCAR patterns [25]. Univariate time series imputation algorithms treat MAR and MCAR patterns in a similar manner [26].


[22] have proposed an autoregressive (AR) algorithm to address time series prediction with missing values, with the assumption that, missing values can be represented as autoregressive form of previous non-missing values and missing values. Some of the existing univariate imputation techniques are Interpolation [16], Last Observation Carried Forward (LOCF) [15], Kalman smoothing [15] and Mean [15] methods. All the above listed multivariate algorithms works significantly well by considering the inter elements relationship. But, in case of univariate time series data, no additional attribute elements are present. Hence it is difficult to employ the existing methods directly on univariate data set. Effective univariate algorithms need to make use of the time series as one of the characteristics. So it is in much need to treat univariate time series data sensibly, and to design and develop imputation algorithms that are much tailored to univariate characteristics. Missing Type Pattern Understanding causes of missing data and the distribution of missing data type helps in selecting appropriate imputation algorithm. Missing data type is of three categories (a) Missing completely at random (MCAR), (b) Missing at random (MAR) and, (c) Not missing at random (NMAR) [23, 24]. Given a data set with missing values, identifying and assigning missing values to particular category are difficult, since the underlying missing mechanism are unknown. MAR and NMAR missing pattern identification needs domain knowledge and manual analysis of the patterns, whereas t-test and chi-square test can be used for MCAR missing pattern finding [25]. Univariate time series imputation algorithms treat MAR and MCAR patterns in similar manner [26].

3 Proposed Model The main aspect of the proposed imputation method for univariate time series data is the consideration of seasonality, trend and residual that exists in almost all time series data. This consideration plays an important role in imputation, since there is no guarantee that system behaves identical on two different days hence these decomposed property consideration plays major role in time series imputation. A univariate time series data is represented as n × d matrix, X, with pattern xt = (x1 , . . . , xd ) ∈ Rd , as a d-dimensional feature vector at time t ∈ {1, . . . , n}. To distinguish between the observed and missing data inside clustered data, an n × d missing indicator matrix, M, with m t = (m 1 , . . . , m d ) ∈ {0, 1} is maintained, whereby ‘1’ indicates missing data and ‘0’ as existence of data. With help the of an indexing operator , all features of the pattern xt marked with ‘1’ in m t can be extracted as shown in Eq. 1:   xt θ m t = x j : ∀j ∈ {1, . . . , d}|m j = 1

(1)


The data in the cluster can be separated into two sets, a set of completely observed features and a set of missing features, as shown in Eqs. 2 and 3:

X_obs = { x_1 θ ¬m_1, ..., x_n θ ¬m_n }    (2)

X_mis = { x_1 θ m_1, ..., x_n θ m_n }    (3)

The goal of the approximating model f : X_obs → X_mis is to minimize the sum of errors for a given vector z_t from X_obs with respect to the missing vector y_t from X_mis, whereby the vector y_t would be known for evaluation:

min Σ_{t=1}^{n} Err(y_t, f(z_t))    (4)
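As an illustration of Eqs. (1)–(3), a boolean missing-indicator mask in numpy selects a pattern's missing features and, negated, its observed ones. The array values below are made up for the demo.

import numpy as np

x_t = np.array([21.4, 19.8, 23.1, 20.5])   # pattern at time t (d = 4)
m_t = np.array([0, 1, 0, 1], dtype=bool)   # 1 = missing, 0 = observed

x_mis = x_t[m_t]     # Eq. (1)/(3): features flagged as missing
x_obs = x_t[~m_t]    # Eq. (2): completely observed features

print(x_obs, x_mis)  # [21.4 23.1] [19.8 20.5]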

The proposed algorithm works in the following phases: (a) data preparation for the imputation algorithm, (b) clustering using an unsupervised neural network, and (c) imputing missing values in each cluster using a similarity approach. The sequence of steps involved in data preparation, imputation, and performance evaluation is shown in Fig. 1 and explained further below.

Fig. 1 Different phases in proposed imputation method

Phase 1: Data Preparation for the Imputation Algorithm
This is the first phase of the proposed imputation method. It involves simulation of missing data and decomposition of the time series data.

Simulating missing data (applicable only to a complete data set, to build the model): Performance evaluation of the proposed imputation method has the difficulty that real missing data cannot be used, because the actual values are truly missing; it is then impossible to measure how much the imputed values deviate from the real ones. Hence, the performance of the imputation algorithm has to be evaluated on simulated missing data, generated by artificially removing data points. The performance evaluation is later carried out by comparing the imputed values with the real values. In this paper, the data sets are simulated with a varying percentage of missing values throughout the data set.

Decomposition of time series data: The time series X is decomposed into Seasonal (S), Trend (T), Cyclical (C), and Residual (R) components. This is done by considering two important factors: decomposition based on rate of change, and predictability. In predictability-based decomposition, the time series is decomposed into deterministic and non-deterministic (predictable or unpredictable) components. Rate-of-change-based decomposition plays a major role in all types of time series analysis, as it constructs several component series (which can be used to reconstruct the original by additions or multiplications) from the observed time series, where each component has certain characteristics. The results of this decomposition are:
(a) Trend (T): reflects the long-term progression of the series.
(b) Seasonality (S): exists when the time series is influenced by seasonal factors; it occurs over a fixed and known period (day, month, year).
(c) Cyclical (C): represents repeated but non-periodic fluctuations.
(d) Residual (R): the irregular component; represents the remainder of the time series after the other components are separated.

The additive model is represented as shown in Eq. 5:

x_t = Trend + Seasonality + Residual + Cyclical    (5)


The multiplicative model is represented as shown in Eq. 6:

x_t = Trend × Seasonality × Residual × Cyclical    (6)
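To make Eqs. (5) and (6) concrete, below is a minimal decomposition sketch using the Python statsmodels library that this work relies on (named later in this section). Note that seasonal_decompose returns trend, seasonal, and residual parts only, without a separate cyclical component, and the synthetic monthly series here is purely illustrative.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly series: linear trend + yearly seasonality + noise.
idx = pd.date_range("2000-01-01", periods=72, freq="M")
t = np.arange(72)
series = pd.Series(100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
                   + np.random.normal(0, 1, 72), index=idx)

additive = seasonal_decompose(series, model="additive")              # Eq. (5)
multiplicative = seasonal_decompose(series, model="multiplicative")  # Eq. (6)
print(additive.trend.dropna().head())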

Choosing the right model for decomposition depends on the variation of the trend in the time series: if the variation around the trend does not vary with the level of the series, the additive model can be used; otherwise, the multiplicative model applies. In this work, univariate time series are decomposed using the Python statsmodels library.

Phase 2: Clustering of Decomposed Data (Working of ART2)
To demonstrate the proposed imputation method on real-world data sets, each data set is plotted in two-dimensional space using ART2. ART2 belongs to the family of Adaptive Resonance Theory (ART) networks developed by Carpenter and Grossberg [27, 28]. It is an unsupervised competitive neural network that accepts analogous continuous input data. ART2 works in a dual way: (a) it maps a high-dimensional data space into a low-dimensional space, and (b) it acts as a clustering algorithm by mapping similar data samples to the best matching cluster. In this paper, it is chosen as the clustering algorithm because it is quite robust to parameter selection and produces natural clustering results. ART2 is similar to many other clustering algorithms in that each pattern is processed by finding the nearest cluster and updating that cluster if the input is close enough. Its distinguishing capability is that it determines the number of clusters through adaptation: a new element is added to an existing cluster (modifying that cluster) only if the cluster is adequately close to the input data; otherwise a new cluster is formed to handle the input. This process of adding an input to an existing cluster, or forming a new cluster based on similarity, is known as cluster resonating. The formation of a new cluster is governed by the vigilance parameter, a threshold for the similarity match between the input pattern and the clusters. A simplified sketch of this resonance behaviour is given below.
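The following is a deliberately simplified, illustrative sketch of the resonance loop, not a full ART2 implementation (real ART2 maintains separate short- and long-term memory fields): a pattern joins and updates the nearest prototype when the similarity exceeds the vigilance threshold, and otherwise seeds a new cluster. All parameter values are assumptions.

import numpy as np

def art_like_cluster(patterns, vigilance=0.9, lr=0.5):
    prototypes, labels = [], []
    for x in patterns:
        if prototypes:
            sims = [1.0 / (1.0 + np.linalg.norm(x - p)) for p in prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= vigilance:            # resonance: adapt the winner
                prototypes[best] += lr * (x - prototypes[best])
                labels.append(best)
                continue
        prototypes.append(x.astype(float))         # mismatch: new cluster
        labels.append(len(prototypes) - 1)
    return prototypes, labels

data = np.array([[0.1, 0.2], [0.12, 0.19], [0.9, 0.8], [0.88, 0.82]])
print(art_like_cluster(data)[1])   # [0, 0, 1, 1]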


Phase 3: Missing Values Imputation
Each cluster formed by the competitive neural network represents a best cluster with highly correlated intra-cluster elements. For univariate time series imputation, a similarity-based nearest neighbor method is used to impute the missing values in each cluster. The proposed method considers the order of time for decomposition, and the clusters are formed using the feature space of the decomposed data.

Steps involved in imputation of missing values in each cluster:
Input: Clusters with missing instances.
Output: Data set with no missing values.

Step 1: Impute missing values in each cluster based on similar nearest neighbors. The kNN algorithm fills the missing values with the average of similar patterns in the cluster by comparing the decomposed features that are not missing. For missing values in x_t, kNN imputation predicts the value x_j with j ∈ {1, ..., d} by averaging the k-nearest patterns [29–31]. kNN works on the principle of a distance measure Δ to obtain the set of k-nearest patterns; in this work, the Euclidean distance in the clustered data is used as Δ. Only patterns compatible with the missing vectors are considered for the Euclidean distance calculation. The proposed algorithm imputes by considering the feature space rather than the time space:

f_kNN(x_t, m_t, j) = (1/k) · Σ_{x_t′ ∈ N_k(x_t, m_t, j)} (x_t′ θ m_t)_j    (8)

The function N_k returns the k-nearest patterns (k-argmin) from a set L ⊂ X of patterns that have the same complete features ¬m_t plus the j-th feature available (M(t*, j) = 0):

N_k(x_t, m_t, j) = k-argmin_{x_t′ ∈ L} Δ(x_t′ θ ¬m_t, x_t θ ¬m_t)    (9)

with

L = { x_t* : ∀t* ∈ {1, ..., n} | ¬m_t* → ¬m_t ∧ M(t*, j) = 0 }    (10)

Step 2: Evaluate the outcome of Step 1 using MAE, MSE, and RMSE; if the results are acceptable, reconstruct the decomposed data back into its original form, otherwise re-compute the values for imputation with a different set of neighbors.

Note: In the proposed imputation method, the value of k for kNN imputation is chosen as k = √N / 2, where N is the number of complete samples in the cluster.
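A hedged sketch of Step 1 and Eqs. (8)–(10): within a cluster, candidates that share x_t's observed features and have feature j present are ranked by Euclidean distance on the shared features, and the k nearest are averaged. The cluster contents below are invented, and k = √N / 2 follows the note above.

import numpy as np

def knn_impute(cluster, mask, t, j):
    """Impute feature j of pattern t. mask[t, j] == 1 means missing."""
    obs = ~mask[t].astype(bool)                # features observed in x_t
    # Eq. (10): candidates having x_t's observed features plus feature j present
    cand = [i for i in range(len(cluster))
            if i != t and not mask[i, j] and not mask[i][obs].any()]
    n_complete = (~mask.astype(bool)).all(axis=1).sum()
    k = max(1, int(np.sqrt(n_complete) / 2))   # k = sqrt(N) / 2
    # Eq. (9): k-argmin of the Euclidean distance on the shared observed features
    ranked = sorted(cand, key=lambda i: np.linalg.norm(cluster[i, obs] - cluster[t, obs]))
    neighbours = ranked[:k]
    return cluster[neighbours, j].mean()       # Eq. (8): average the k neighbours

cluster = np.array([[1.0, 2.0, 3.0],
                    [1.1, 2.1, 2.9],
                    [0.9, 1.9, 3.2],
                    [1.0, 2.0, np.nan]])
mask = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 1]])
print(knn_impute(cluster, mask, t=3, j=2))     # 3.0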

4 Results and Discussions

In this paper, seven data sets are analyzed in order to evaluate the performance of the proposed technique and to compare the results with existing methods. These are benchmark univariate time series data sets used in the literature, including the Air passenger, Champagne sales, Milk production, Google search, Daily births in Quebec, and Electricity-MT124 data sets from the Data Market repository. The performance of the proposed method is evaluated on simulated missing values (simulated on the complete data sets) with the missing percentage varied from 5 to 30%, since evaluation cannot be done on real missing data due to the unavailability of the true values. The performance of the proposed method is compared with existing methods, namely Residual IMPutation LSTM (RIMP-LSTM) [19], Interpolation [16], last observation carried forward (Locf) [15], Kalman smoothing [15], and Mean [15]. The imputed data sets are evaluated using MAE, MSE, and RMSE, and also using one-step-ahead prediction and correlation results.


Tables 1, 2, 3, 4, 5 and 6 show the average imputation errors of the proposed method and of the Mean, Kalman, Locf, and Interpolation methods on the seven data sets using three indicators: MAE, MSE, and RMSE. The air passenger, champagne, milk production, and Google search data sets have both trend and seasonality components. The results in Tables 1–6 indicate that, across the different missing percentages, the proposed method has the lowest MAE, MSE, and RMSE; after the proposed method, Interpolation and the Kalman filter show good performance. Mean substitution does not perform well on any data set at any missing percentage, as it replaces missing values with the overall mean; this shows that Mean is neither suitable nor effective for data series having trend and seasonal components. The daily minimum temperature data set has only seasonal variation, without trend variations. For this data set, the proposed method outperforms the others at the different missing percentages, resulting in lower error rates, whereas Mean, Locf, Kalman, and Interpolation perform almost equally at all missing percentages.

Evaluation metrics

5% missing Mean

Air passengers

Kalman

Locf

Interpolation

Proposed

MAE

1.81

0.41

0.69

0.36

0.02

MSE

0.58

0.06

0.15

0.05

0.05

RMSE

0.76

0.24

0.38

0.24

0.13

Daily female births

MAE

11.44

11.34

11.20

11.03

0.03

MSE

7.40

8.69

7.94

8.14

0.02

RMSE

2.72

2.94

2.81

2.85

0.14

Google search

MAE

0.94

0.50

0.66

0.63

0.03

MSE

0.37

0.13

0.33

0.10

0.06

RMSE

0.63

0.31

0.53

0.30

0.10

MAE

0.79

0.62

0.75

0.62

0.02

MSE

0.26

0.21

0.39

0.21

0.01

Champagne sales

0.51

0.46

0.63

0.46

0.11

Minimum daily temperature

RMSE MAE

371.12

373.95

371.15

374.43

0.01

MSE

194.54

220.98

226.54

224.47

0.007

RMSE

13.09

14.86

15.05

14.98

0.12

Sunspot

MAE

53.93

53.52

53.81

53.47

0.02

MSE

23.00

25.52

30.26

27.57

0.01

4.79

5.05

5.50

5.25

0.10

Milk production MAE

RMSE

1.36

0.23

0.40

0.20

0.04

MSE

0.44

0.04

0.062

0.040

0.02

RMSE

0.66

0.20

0.25

0.20

0.15


Table 2 Result of evaluation metrics of different data sets for 10% simulated missing values

Data set | Metric | Mean | Kalman | Locf | Interpolation | Proposed
Air passengers | MAE | 3.66 | 1.19 | 2.00 | 1.14 | 0.02
Air passengers | MSE | 1.35 | 0.31 | 0.92 | 0.31 | 0.06
Air passengers | RMSE | 1.16 | 0.56 | 0.96 | 0.56 | 0.19
Daily female births | MAE | 15.25 | 13.13 | 13.74 | 13.10 | 0.04
Daily female births | MSE | 8.64 | 8.44 | 9.87 | 8.81 | 0.02
Daily female births | RMSE | 2.93 | 2.90 | 3.14 | 2.96 | 0.21
Google search | MAE | 1.77 | 0.65 | 1.07 | 0.72 | 0.05
Google search | MSE | 0.60 | 0.16 | 0.40 | 0.19 | 0.03
Google search | RMSE | 0.77 | 0.41 | 0.63 | 0.44 | 0.18
Champagne sales | MAE | 1.83 | 1.18 | 1.67 | 1.18 | 0.06
Champagne sales | MSE | 0.64 | 0.46 | 0.99 | 0.46 | 0.03
Champagne sales | RMSE | 0.80 | 0.67 | 0.99 | 0.68 | 0.18
Minimum daily temperature | MAE | 372.08 | 373.56 | 372.83 | 374.04 | 0.02
Minimum daily temperature | MSE | 200.19 | 222.77 | 227.91 | 224.66 | 0.009
Minimum daily temperature | RMSE | 14.14 | 14.92 | 15.06 | 14.98 | 0.17
Sunspot | MAE | 79.03 | 67.88 | 74.11 | 68.92 | 0.03
Sunspot | MSE | 30.41 | 30.06 | 39.72 | 34.53 | 0.01
Sunspot | RMSE | 5.51 | 5.48 | 6.30 | 5.87 | 0.1
Milk production | MAE | 3.58 | 1.08 | 1.37 | 0.97 | 0.03
Milk production | MSE | 1.38 | 0.35 | 0.50 | 0.33 | 0.06
Milk production | RMSE | 1.17 | 0.59 | 0.71 | 0.58 | 0.23

Compared to all other existing methods, the Mean method shows low MSE values at the different missing percentages of the daily temperature data set. The monthly sunspot data set has no trend variation but does have seasonal variation in its values. The values in Tables 1, 2, 3, 4, 5 and 6 show that the proposed method performs well at all missing percentages, and that the MAE and MSE values of the other imputation techniques are almost similar until the missing percentage reaches 10%. Once the percentage of missing values increases, the MSE and MAE values also increase. The daily female birth data set has neither trend nor seasonal variation. For this data set, Mean imputation shows the highest MAE at all missing percentages, while Kalman, Interpolation and Locf show almost similar error rates at all missing percentages. In the MSE evaluations, Interpolation and Kalman behave the same, whereas Locf shows higher error rates than these two methods. Mean imputation behaved similarly to Kalman and Interpolation up to 15% missing values, but its MSE and RMSE error rates increased once the missing percentage grew.


Table 3 Result of evaluation metrics of different data sets for 15% simulated missing values

Data set | Metric | Mean | Kalman | Locf | Interpolation | Proposed
Air passengers | MAE | 5.10 | 2.19 | 2.009 | 2.17 | 0.03
Air passengers | MSE | 1.62 | 0.92 | 0.84 | 0.91 | 0.07
Air passengers | RMSE | 1.27 | 0.95 | 0.91 | 0.95 | 0.23
Daily female births | MAE | 18.42 | 14.78 | 15.55 | 14.44 | 0.05
Daily female births | MSE | 9.63 | 8.83 | 10.37 | 9.07 | 0.02
Daily female births | RMSE | 3.10 | 2.97 | 3.22 | 3.01 | 0.25
Google search | MAE | 2.66 | 1.01 | 1.45 | 1.06 | 0.05
Google search | MSE | 0.94 | 0.29 | 0.49 | 0.29 | 0.08
Google search | RMSE | 0.97 | 0.54 | 0.70 | 0.54 | 0.24
Champagne sales | MAE | 3.05 | 1.77 | 2.25 | 1.74 | 0.07
Champagne sales | MSE | 1.11 | 0.65 | 1.21 | 0.64 | 0.03
Champagne sales | RMSE | 1.05 | 0.80 | 1.10 | 0.80 | 0.22
Minimum daily temperature | MAE | 374.02 | 373.43 | 374.9 | 375.94 | 0.03
Minimum daily temperature | MSE | 203.63 | 228.61 | 222.97 | 226.66 | 0.01
Minimum daily temperature | RMSE | 14.27 | 15.12 | 14.93 | 15.05 | 0.21
Sunspot | MAE | 124.38 | 95.41 | 110.69 | 96.89 | 0.04
Sunspot | MSE | 49.90 | 42.46 | 57.02 | 48.44 | 0.01
Sunspot | RMSE | 7.06 | 6.51 | 7.55 | 6.95 | 0.17
Milk production | MAE | 4.88 | 1.60 | 2.58 | 1.58 | 0.03
Milk production | MSE | 1.91 | 0.54 | 1.26 | 0.54 | 0.06
Milk production | RMSE | 1.38 | 0.74 | 1.12 | 0.74 | 0.28

Comparison of the proposed imputation method with the results of [19]

The proposed imputation method is also compared with a recent imputation technique, RIMP-LSTM, proposed in [19]. The imputation and one-step-ahead prediction results are taken from [19]. Tables 7 and 8 present the RMSE of the proposed and existing univariate time series imputation methods and of the one-step-ahead prediction tasks, respectively. Imputation results (RMSE) of the proposed method and the other techniques on the Electricity-MT124 and Daily births data sets are shown in Table 7. The proposed technique outperforms the other imputation techniques in almost all cases by retaining the dependency of the decomposed data. The daily female birth data set of Quebec has neither trend nor seasonal variation, whereas the Electricity-MT124 data set has seasonal variation. In the randomly simulated data set, with varying percentages of missing values,


Table 4 Result of evaluation metrics of different data sets for 20% simulated missing values

Data set | Metric | Mean | Kalman | Locf | Interpolation | Proposed
Air passengers | MAE | 6.61 | 2.51 | 2.66 | 2.42 | 0.03
Air passengers | MSE | 2.12 | 0.95 | 0.96 | 0.95 | 0.08
Air passengers | RMSE | 1.45 | 0.97 | 0.98 | 0.97 | 0.28
Daily female births | MAE | 21.86 | 16.38 | 17.65 | 16.18 | 0.06
Daily female births | MSE | 10.91 | 9.35 | 11.50 | 9.71 | 0.02
Daily female births | RMSE | 3.30 | 3.05 | 3.39 | 3.11 | 0.29
Google search | MAE | 3.82 | 1.53 | 2.09 | 1.56 | 0.07
Google search | MSE | 1.45 | 0.43 | 0.76 | 0.47 | 0.10
Google search | RMSE | 1.20 | 0.66 | 0.87 | 0.68 | 0.27
Champagne sales | MAE | 4.13 | 2.25 | 3.26 | 2.22 | 0.08
Champagne sales | MSE | 1.49 | 0.82 | 1.79 | 0.72 | 0.03
Champagne sales | RMSE | 1.22 | 0.91 | 1.34 | 0.91 | 0.25
Minimum daily temperature | MAE | 401.91 | 385.95 | 383.42 | 383.11 | 0.04
Minimum daily temperature | MSE | 210.23 | 232.16 | 225.39 | 227.34 | 0.01
Minimum daily temperature | RMSE | 14.49 | 15.23 | 15.01 | 15.07 | 0.25
Sunspot | MAE | 149.77 | 112.88 | 135.79 | 115.53 | 0.05
Sunspot | MSE | 57.36 | 48.59 | 70.04 | 57.37 | 0.02
Sunspot | RMSE | 7.57 | 6.97 | 8.36 | 7.57 | 0.20
Milk production | MAE | 6.71 | 2.37 | 3.85 | 2.35 | 0.03
Milk production | MSE | 2.75 | 0.89 | 2.12 | 0.89 | 0.07
Milk production | RMSE | 1.65 | 0.94 | 1.45 | 0.94 | 0.32

the proposed method outperforms, with lower RMSE for both the imputation and prediction tasks. For missing percentages of 10–30%, the RMSE of the proposed imputation method is considerably low and negligible compared to the other existing methods; the same holds for the prediction RMSE, which is low and negligible for 10–20% missing rates. As the missing percentage increases, the RMSE of the imputation and prediction results increases slightly. The proposed imputation method is further evaluated by comparing the one-step-ahead prediction results on the data sets after imputing with the proposed method and with the other imputation methods. Since the proposed and existing imputation methods cannot carry out prediction by themselves, they need to be combined with prediction algorithms after imputation. For comparison of prediction accuracy, results are taken from [19] for two data sets. For one-step-ahead prediction, an LSTM is used as the predictor due to the non-linear nature of the data sets. The results of the proposed imputation method are promising compared to the rest of the methods. The error rates of one-step-ahead prediction for the two data sets (Daily births in Quebec and Electricity-MT124) are shown in Table 8.
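The sketch below shows one plausible way to run such a one-step-ahead evaluation with a Keras LSTM on an imputed series; the window length, network size and train/test split are illustrative assumptions, not the settings of [19].

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def one_step_ahead_rmse(imputed, window=12, epochs=20):
    # build (window -> next value) training pairs from the imputed series
    X = np.stack([imputed[i:i + window] for i in range(imputed.size - window)])
    y = imputed[window:]
    split = int(0.8 * len(X))
    model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:split, :, None], y[:split], epochs=epochs, verbose=0)
    pred = model.predict(X[split:, :, None], verbose=0).ravel()
    return float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
```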


Table 5 Result of evaluation metrics of different data sets for 25% simulated missing values

Data set | Metric | Mean | Kalman | Locf | Interpolation | Proposed
Air passengers | MAE | 7.90 | 3.03 | 3.61 | 2.92 | 0.04
Air passengers | MSE | 2.52 | 1.09 | 1.42 | 1.08 | 0.09
Air passengers | RMSE | 1.58 | 1.04 | 1.19 | 1.04 | 0.32
Daily female births | MAE | 27.11 | 18.55 | 19.70 | 18.65 | 0.07
Daily female births | MSE | 13.14 | 10.21 | 11.62 | 10.44 | 0.03
Daily female births | RMSE | 3.62 | 3.19 | 3.40 | 3.23 | 0.34
Google search | MAE | 4.38 | 1.77 | 2.47 | 1.97 | 0.09
Google search | MSE | 1.62 | 0.47 | 0.85 | 0.55 | 0.13
Google search | RMSE | 1.27 | 0.68 | 0.92 | 0.74 | 0.30
Champagne sales | MAE | 4.74 | 2.17 | 2.91 | 1.87 | 0.03
Champagne sales | MSE | 1.76 | 0.73 | 1.008 | 0.895 | 0.08
Champagne sales | RMSE | 1.32 | 0.85 | 1.004 | 0.92 | 0.30
Minimum daily temperature | MAE | 406.38 | 391.25 | 389.93 | 388.77 | 0.05
Minimum daily temperature | MSE | 213.10 | 234.56 | 225.61 | 229.95 | 0.01
Minimum daily temperature | RMSE | 14.59 | 15.31 | 15.02 | 15.16 | 0.27
Sunspot | MAE | 175.58 | 130.78 | 162.32 | 133.48 | 0.06
Sunspot | MSE | 65.52 | 54.63 | 83.94 | 65.54 | 0.02
Sunspot | RMSE | 8.09 | 7.39 | 9.16 | 8.09 | 0.22
Milk production | MAE | 8.44 | 2.80 | 4.99 | 2.77 | 0.04
Milk production | MSE | 3.31 | 0.99 | 2.51 | 0.98 | 0.09
Milk production | RMSE | 1.82 | 0.99 | 1.58 | 0.99 | 0.35

To conclude, putting all the observations together, the proposed imputation method for time series data outperforms all other existing imputation methods with low error rates, producing values close to the real data. This performance is due to the fact that the cluster elements retain the dependency of each decomposed component, and choosing similar values from within a cluster for imputation leads to values approximately the same as the original ones.

5 Validation of Imputation Technique

The results of the proposed imputation algorithm are also statistically evaluated using the Concordance Correlation Coefficient (CCC) test, which measures the agreement between two variables. [32] proposes strength-of-agreement criteria for CCC to assess the degree of equivalence between original and imputed values. In this work, to assess


Table 6 Result of evaluation metrics of different data sets for 30% simulated missing values

Data set | Metric | Mean | Kalman | Locf | Interpolation | Proposed
Air passengers | MAE | 1.81 | 3.90 | 4.72 | 3.79 | 0.042
Air passengers | MSE | 0.58 | 1.42 | 1.85 | 1.41 | 0.10
Air passengers | RMSE | 0.76 | 1.19 | 1.36 | 1.18 | 0.35
Daily female births | MAE | 30.68 | 20.32 | 21.84 | 20.70 | 0.09
Daily female births | MSE | 14.71 | 10.71 | 12.62 | 11.20 | 0.03
Daily female births | RMSE | 3.83 | 3.27 | 3.55 | 3.34 | 0.37
Google search | MAE | 5.52 | 2.52 | 3.70 | 2.24 | 0.16
Google search | MSE | 2.01 | 0.86 | 1.78 | 0.77 | 0.12
Google search | RMSE | 1.41 | 0.93 | 1.33 | 0.88 | 0.34
Champagne sales | MAE | 5.56 | 2.94 | 3.44 | 2.66 | 0.09
Champagne sales | MSE | 1.93 | 1.07 | 1.25 | 0.96 | 0.03
Champagne sales | RMSE | 1.39 | 1.03 | 1.11 | 0.99 | 0.33
Minimum daily temperature | MAE | 439.91 | 389.93 | 406.35 | 400.85 | 0.06
Minimum daily temperature | MSE | 235.70 | 230.77 | 238.22 | 230.69 | 0.02
Minimum daily temperature | RMSE | 15.35 | 15.19 | 15.4 | 15.18 | 0.30
Sunspot | MAE | 201.69 | 201.69 | 185.23 | 151.33 | 0.06
Sunspot | MSE | 73.37 | 73.37 | 94.48 | 72.92 | 0.02
Sunspot | RMSE | 8.56 | 8.56 | 9.72 | 8.53 | 0.25
Milk production | MAE | 10.01 | 3.21 | 5.59 | 3.15 | 0.04
Milk production | MSE | 3.83 | 1.09 | 2.7 | 1.08 | 0.09
Milk production | RMSE | 1.95 | 1.04 | 1.64 | 1.04 | 0.39

the degree of agreement, evaluations are performed on the ‘Minimum Daily Temperatures’ data set with a two-sided 95% confidence interval. The bias correction factor measures how far the best-fit line deviates from the 45-degree line; an outcome of ‘1’ means there is no deviation from the 45-degree line, indicating the best test outcome. The experimental outcomes in Table 9 show that the proposed method has low bias compared to the other existing methods. As per the criteria given by [32], the bias of the proposed method lies in the ‘almost perfect’ range up to 20% of imputed data; at 25–40% imputed data, the bias lies in the ‘substantial’ range. Hence, the proposed imputation method was evaluated only up to 30% missing values, so that the imputed values have high correlation and low bias with respect to the original data set.
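For reference, Lin's CCC can be computed directly from its definition; the sketch below is a plain implementation, not the authors' code.

```python
import numpy as np

def concordance_ccc(original, imputed):
    # Lin's concordance correlation coefficient between two value vectors
    x, y = np.asarray(original, float), np.asarray(imputed, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))        # population covariance
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```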

Table 7 Imputation results (RMSE) for univariate time series

Data set | Missing % | Forward | Indicator | Spline | MA | EM | Kalman | IMP LSTM | RIMP LSTM | Proposed
Electricity-MT124 | 10 | 0.35 | 0.27 | 0.33 | 0.28 | 0.29 | 0.26 | 0.20 | 0.18 | 0.01
Electricity-MT124 | 20 | 0.36 | 0.27 | 0.36 | 0.29 | 0.30 | 0.27 | 0.22 | 0.21 | 0.04
Electricity-MT124 | 30 | 0.38 | 0.30 | 0.40 | 0.30 | 0.31 | 0.27 | 0.23 | 0.22 | 0.08
Electricity-MT124 | 40 | 0.38 | 0.32 | 0.41 | 0.31 | 0.32 | 0.28 | 0.24 | 0.22 | 0.15
Electricity-MT124 | 50 | 0.39 | 0.33 | 0.46 | 0.31 | 0.33 | 0.27 | 0.25 | 0.24 | 0.21
Daily births | 10 | 0.37 | 0.34 | 0.28 | 0.32 | 0.28 | 0.32 | 0.25 | 0.20 | 0.01
Daily births | 20 | 0.41 | 0.36 | 0.34 | 0.33 | 0.29 | 0.31 | 0.25 | 0.21 | 0.05
Daily births | 30 | 0.41 | 0.37 | 0.36 | 0.34 | 0.32 | 0.32 | 0.26 | 0.22 | 0.13
Daily births | 40 | 0.43 | 0.35 | 0.39 | 0.35 | 0.33 | 0.32 | 0.26 | 0.22 | 0.18
Daily births | 50 | 0.45 | 0.36 | 0.53 | 0.37 | 0.36 | 0.32 | 0.30 | 0.23 | 0.20


Table 8 One-step-ahead prediction results (RMSE) for univariate time series

Data set | Missing % | Forward | Indicator | Spline | MA | EM | Kalman | IMP LSTM | RIMP LSTM | Proposed
Electricity-MT124 | 10 | 0.25 | 0.29 | 0.26 | 0.23 | 0.24 | 0.24 | 0.22 | 0.21 | 0.01
Electricity-MT124 | 20 | 0.26 | 0.29 | 0.26 | 0.25 | 0.25 | 0.25 | 0.22 | 0.21 | 0.05
Electricity-MT124 | 30 | 0.28 | 0.31 | 0.32 | 0.27 | 0.26 | 0.26 | 0.23 | 0.22 | 0.11
Electricity-MT124 | 40 | 0.29 | 0.32 | 0.35 | 0.28 | 0.27 | 0.27 | 0.23 | 0.23 | 0.15
Electricity-MT124 | 50 | 0.31 | 0.33 | 0.36 | 0.29 | 0.27 | 0.27 | 0.24 | 0.24 | 0.18
Daily births | 10 | 0.27 | 0.37 | 0.25 | 0.27 | 0.26 | 0.26 | 0.27 | 0.24 | 0.01
Daily births | 20 | 0.30 | 0.38 | 0.30 | 0.28 | 0.27 | 0.27 | 0.26 | 0.22 | 0.05
Daily births | 30 | 0.34 | 0.38 | 0.29 | 0.32 | 0.28 | 0.29 | 0.28 | 0.24 | 0.11
Daily births | 40 | 0.35 | 0.38 | 0.32 | 0.32 | 0.29 | 0.31 | 0.27 | 0.25 | 0.15
Daily births | 50 | 0.45 | 0.36 | 0.53 | 0.37 | 0.36 | 0.32 | 0.30 | 0.23 | 0.20


Table 9 Results of Concordance Correlation Coefficient of proposed and existing methods

Missing percentage (%) | Proposed | Mean | LOCF | Kalman | Interpolation
5 | 0.99 | 0.89 | 0.90 | 0.90 | 0.90
10 | 0.99 | 0.89 | 0.90 | 0.90 | 0.90
15 | 0.99 | 0.88 | 0.90 | 0.89 | 0.89
20 | 0.99 | 0.87 | 0.90 | 0.89 | 0.89
25 | 0.98 | 0.86 | 0.90 | 0.89 | 0.89
30 | 0.98 | 0.85 | 0.90 | 0.88 | 0.89
40 | 0.96 | 0.82 | 0.90 | 0.87 | 0.89

6 Conclusion and Future Work

In this paper, a similarity-based imputation method is proposed for missing value imputation in univariate time series data sets. The proposed method is evaluated on publicly available data sets from the Data Market and UCI repositories. Missing values were randomly simulated on these data sets with different missing percentages. The proposed method works in three phases: (a) decomposition of the univariate series, (b) clustering of the decomposed data and (c) imputing missing values in each cluster using a similarity-based nearest neighbor approach. Decomposition of the time series data helps retain the dependency among the components. The imputed data sets were evaluated using MSE, MAE and RMSE metrics and prediction results. The proposed clustering-based imputation method shows improvement over existing methods for missing percentages varying from 5 to 30%. The method is also statistically evaluated using CCC to show the correlation between imputed and original data sets. Comparison of the evaluation metric results of the proposed and other existing methods clearly shows that the proposed method outperforms the existing methods. As future work, the proposed method will be evaluated on online and large-sized univariate and multivariate time series data sets from different applications.

References

1. Ghil M, Vautard R (1991) Interdecadal oscillations and the warming trend in global temperature time series. Nature 350(6316):324
2. Billinton R, Chen H, Ghajar R (1996) Time-series models for reliability evaluation of power systems including wind energy. Microelectron Reliab 36(9):1253–1261
3. Rubin DB (2004) Multiple imputation for nonresponse in surveys, vol 81. Wiley
4. Ford B (1983) An overview of hot-deck procedures: incomplete data in sample surveys 2
5. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc Ser B (Methodol) 39(1):1–22
6. Gautam C, Ravi V (2015) Data imputation via evolutionary computation, clustering and a neural network. Neurocomputing 156:134–142
7. Rahman MG, Islam MZ (2013) Missing value imputation using decision trees and decision forests by splitting and merging records: two novel techniques. Knowl Based Syst 53:51–65
8. Rahman MG, Islam MZ (2014) FIMUS: a framework for imputing missing values using co-appearance, correlation and similarity analysis. Knowl Based Syst 56:311–327
9. Tutz G, Ramzan S (2015) Improved methods for the imputation of missing data by nearest neighbor methods. Comput Stat Data Anal 90:84–99
10. Gheyas IA, Smith LS (2010) A neural network-based framework for the reconstruction of incomplete data sets. Neurocomputing 73(16–18):3039–3065
11. Engels JM, Diehr P (2003) Imputation of missing longitudinal data: a comparison of methods. J Clin Epidemiol 56(10):968–976
12. Spratt M, Carpenter J, Sterne JA, Carlin JB, Heron J, Henderson J, Tilling K (2010) Strategies for multiple imputation in longitudinal studies. Am J Epidemiol 172(4):478–487
13. Twisk J, de Vente W (2002) Attrition in longitudinal studies: how to deal with missing data. J Clin Epidemiol 55(4):329–337
14. Junninen H, Niska H, Tuppurainen K, Ruuskanen J, Kolehmainen M (2004) Methods for imputation of missing values in air quality data sets. Atmos Environ 38(18):2895–2907
15. Zeileis A, Grothendieck G (2005) Zoo: S3 infrastructure for regular and irregular time series. arXiv preprint math/0505527. https://doi.org/10.18637/jss.v014.i06
16. Hyndman RJ, Shang HL (2009) Forecasting functional time series. J Korean Stat Soc 38(3):199–211
17. Sinopoli B, Schenato L, Franceschetti M, Poolla K, Jordan MI, Sastry SS (2004) Kalman filtering with intermittent observations. IEEE Trans Autom Control 49(9):1453–1464
18. Oba S, Sato M, Takemasa I, Monden M, Matsubara K, Ishii S (2003) A Bayesian missing value estimation method for gene expression profile data. Bioinformatics 19(16):2088–2096
19. Shen L, Ma Q, Li S (2018) End-to-end time series imputation via residual short paths. In: Asian conference on machine learning, pp 248–263
20. Li L, McCann J, Pollard NS, Faloutsos C (2009) DynaMMo: mining and summarization of coevolving sequences with missing values. In: Proceedings of the 15th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 507–516
21. White IR, Royston P, Wood AM (2011) Multiple imputation using chained equations: issues and guidance for practice. Stat Med 30(4):377–399
22. Anava O, Hazan E, Zeevi A (2015) Online time series prediction with missing data. In: International conference on machine learning, pp 2191–2199
23. Little RJ, Rubin DB (2019) Statistical analysis with missing data, vol 793. Wiley
24. Zhu B, He C, Liatsis P (2012) A robust missing value imputation method for noisy data. Appl Intell 36(1):61–74
25. Little RJ (1988) A test of missing completely at random for multivariate data with missing values. J Am Stat Assoc 83(404):1198–1202
26. Moritz S, Sardá A, Bartz-Beielstein T, Zaefferer M, Stork J (2015) Comparison of different methods for univariate time series imputation in R. arXiv preprint arXiv:1510.03924
27. Luo J, Chen D (2008) An enhanced ART2 neural network for clustering analysis. In: First international workshop on knowledge discovery and data mining (WKDD 2008). IEEE, pp 81–85
28. Carpenter GA, Grossberg S (2017) Adaptive resonance theory. Springer
29. García S, Luengo J, Herrera F (2015) Data preprocessing in data mining. Springer
30. Friedman J, Hastie T, Tibshirani R (2001) The elements of statistical learning, vol 1. Springer Series in Statistics, New York
31. Oehmcke S, Zielinski O, Kramer O (2016) kNN ensembles with penalized DTW for multivariate time series imputation. In: 2016 international joint conference on neural networks (IJCNN). IEEE, pp 2774–2781
32. McBride G (2005) A proposal for strength-of-agreement criteria for Lin's concordance correlation coefficient. NIWA Client Report HAM2005-062

Chapter 13

Computational Study on Electronic Properties of Pd and Ni Doped Graphene

Mehak Singla and Neena Jaggi

1 Introduction

Graphene is a two-dimensional material with a variety of superior properties, due to which it has been widely utilized in fields such as Field Effect Transistors (FETs), capacitors and sensors. Graphene has emerged as a material with great potential [1, 2]. It has a large surface area (2630 m2/g), high electron mobility and thermal conductivity, high mechanical strength and low Johnson noise [3]. All these properties have been shown to be beneficial for future applications [4], and they can be further improved by doping according to the necessity [5–7]. Graphene is an sp2-hybridized honeycomb-shaped lattice, and substitutional doping leads to the fracture of the ideal sp2 hybridization of its structure. Many DFT studies have concentrated on applications based on graphene incorporated with foreign atoms [8]. Huang [9] studied the electronic properties of graphene nanoribbons after doping with boron and nitrogen for applications in graphene electronics. The optical properties of B-doped, N-doped, and BN co-doped graphene have also been investigated by Rani et al. [10]. The presence of graphene edges also affects the electronic properties of nanoscale graphene [11]. Transition metal atom doping is expected to yield interesting properties [12, 13]. The incorporation of an appropriate transition metal atom is an effective way to tune the properties of pure graphene; moreover, the transition metals act as catalytic metal particles [14]. Experimental studies have shown that incorporation of Pd atoms in graphene results in enhanced


gas sensing ability due to the density of Pd nanoparticles and increased electrical conductivity [15]. Most gas sensors operate on the principle of an electrical conductivity transition. According to a previous study, Pd has polarizable bands and is considered a natural candidate for the creation of magnetism in coated graphene [16]. Among the transition metals, the Ni atom has unusual interactions with the graphene network. Wei et al. [17] investigated the use of Ni-doped graphene as a flexible sorbent in water purification. Motivated by these experimental and theoretical investigations, we here discuss the effects of Pd and Ni doping on the properties of graphene. The binding energies of pure, Pd-doped and Ni-doped graphene have been evaluated, and the electronic properties have also been studied. The results of this study could be helpful in understanding the electronic behavior of doped graphene for various applications.

2 Computational Methods

In this study, all the geometries were optimized using the GAUSSIAN 09 [18] software. The geometry consists of 45 carbon atoms with 17 hydrogen atoms attached at the terminals for saturation. The calculations were done using the B3LYP [19] method. The basis set used for the Ni-doped graphene sheet was 6-31G (d, p) [20], and for the Pd-doped graphene sheet LANL2DZ [21]. In the LANL2DZ basis set, the electrons close to the nucleus are treated approximately through an effective core potential. The binding energies per atom [22] for PG (pure graphene), Pd-G (palladium-doped graphene) and Ni-G (nickel-doped graphene) are calculated as follows:

Eb(PG) = [E(C45H17) − 45E(C) − 17E(H)] / 62   (1)

Eb(Pd) = [E(C44PdH17) − 44E(C) − E(Pd) − 17E(H)] / 62   (2)

Eb(Ni) = [E(C44NiH17) − 44E(C) − E(Ni) − 17E(H)] / 62   (3)

where EPG, EPd−G, ENi−G are the energies of the optimized structures of pure, palladium-doped and nickel-doped graphene, and EPd, ENi are the corresponding energies of the palladium and nickel atoms. The band gap energies were calculated from the observed values of the Lowest Unoccupied Molecular Orbital (LUMO) energy and Highest Occupied Molecular Orbital (HOMO) energy as follows:

Eg = ELUMO − EHOMO   (4)
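Equations (1)–(4) reduce to simple arithmetic once the total energies are known; the helper below illustrates this, and the example reproduces the Pd-G band gap from Table 2 up to rounding. The atomic-energy arguments are placeholders to be filled from the calculations, not values given in the chapter.

```python
def binding_energy_per_atom(e_total, n_c, e_c, e_h, n_h=17, e_dopant=0.0):
    # Eqs. (1)-(3): all energies in eV; every structure here has 62 atoms
    return (e_total - n_c * e_c - e_dopant - n_h * e_h) / 62.0

def band_gap(e_lumo, e_homo):
    # Eq. (4): band gap from the frontier orbital energies
    return e_lumo - e_homo

print(band_gap(-2.452, -3.971))  # Pd-G values from Table 2 -> about 1.52 eV
```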


The Density of States (DOS) spectra were obtained using GaussSum 3.0 [23] with a Full Width at Half Maximum (FWHM) of 0.3 eV [19]. Frontier molecular orbitals, i.e., the orbitals at the outer edges of the molecule, have been plotted for the complexes. The transferred charges were analyzed with the help of Mulliken charge analysis and NBO analysis, performed with the previously mentioned basis sets.

3 Results and Discussion

3.1 Geometry Optimization and Binding Energies

All the calculations were done with the B3LYP method using density functional theory. One carbon atom in the graphene sheet was replaced by a palladium or nickel atom, as shown in Fig. 1. The Pd and Ni atoms bulge out from the graphene surface [24], as their atomic radii are larger than that of carbon. The C–C bond length in the pure graphene sheet was observed to be 1.424 Å, which changed to 1.972 Å for Pd–C in Pd-doped graphene, very similar to previous studies [21], and to 1.803 Å for Ni–C in Ni-doped graphene, in accordance with the previous result [24]. The binding energies per atom for the optimized geometries of PG, Pd-G and Ni-G were estimated as −7.409 eV, −6.863 eV and −7.289 eV, respectively (Table 1).

3.2 Electronic Properties

The charge distributions of the systems were observed using Mulliken charge analysis (Fig. 2) and Natural Bond Orbital analysis (Fig. 3). The charges observed on Pd and Ni were 0.199 e and 0.645 e according to Mulliken charges, and 0.575 e and 0.820 e according to NBO charges, respectively, as listed in Table 2. From the charge distribution of the C atoms around the dopant atoms, we infer that these atoms display more electron affinity, leading to a reduction of electron density around the Pd and Ni atoms. This shows the increased reactivity of doped graphene. The band gap energies were calculated according to Eq. (4). The Eg value was observed to be 1.518 eV for Pd-G and 1.438 eV for Ni-G (Table 2). In addition, the density of states plots for Pd-G and Ni-G are presented in Fig. 4. The electric dipole moment is an important quantity that represents asymmetry in the molecular charge distribution; the observed dipole moments of the complexes are given in Table 3.


Fig. 1 Top and side views of Pd-G and Ni-G

Table 1 Binding energy per atom in eV and bonding distances in Å

System | Eb (eV) | d (Å)
PG | −7.409 | 1.424
Pd-G | −6.863 | 1.972
Ni-G | −7.289 | 1.803

4 Conclusions

In the present work, we compared the electronic properties of a pure graphene surface with palladium- and nickel-doped graphene surfaces. The change in binding energy per atom for Pd-G and Ni-G indicates lower stability for these complexes and hence higher reactivity. All the electronic properties of doped graphene change significantly. Thus, the properties of pure graphene could be tailored by incorporating dopant atoms according to the necessity of the application.


Fig. 2 Mulliken and NBO charge distribution of Pd-G

Fig. 3 Mulliken and NBO charge distribution of Ni-G

Table 2 Mulliken and NBO charges, HOMO and LUMO energies and band gap values of Pd-G and Ni-G

System | Eg (eV) | QMulliken (e) | QNBO (e) | EHOMO (eV) | EFL (eV) | ELUMO (eV)
PG | 2.253 | – | – | −4.247 | −3.120 | −1.994
Pd-G | 1.518 | 0.199 | 0.575 | −3.971 | −3.212 | −2.452
Ni-G | 1.438 | 0.645 | 0.820 | −4.241 | −3.522 | −2.803

Fig. 4 Density of States spectrum for PG (i), Pd-G (ii) and Ni-G (iii)

Table 3 Electric dipole moment in Debye

System | Dipole moment (Debye)
PG | 0.091
Pd-G | 1.806
Ni-G | 1.429

Acknowledgements The authors acknowledge Director, NIT Kurukshetra for providing necessary facilities and fellowship to carry out the present research work.

References

1. Phuc HV et al (2018) First principle study on the electronic properties and Schottky contact of graphene adsorbed on MoS2 monolayer under applied out-plane strain. Surface Sci 668:23–28
2. Das P et al (2019) Graphene based emergent nanolights: a short review on the synthesis, properties and application. Res Chem Intermed 45(7):3823–3853
3. Geim AK, Novoselov KS (2019) The rise of graphene. Nanosci Technol Collect Rev Nat J 11–19
4. Saxena S et al (2011) Investigation of structural and electronic properties of graphene oxide. Appl Phys Lett 99(1):013104
5. Alzahrani AZ (2011) Structural and electronic properties of graphene upon molecular adsorption: DFT comparative analysis. Graphene Simul 1:21–38
6. Zhao M et al (2014) A time dependent DFT study of the absorption and fluorescence properties of graphene quantum dots. ChemPhysChem 15(5):950–957
7. Johari P, Shenoy VB (2011) Modulating optical properties of graphene oxide: role of prominent functional groups. ACS Nano 5(9):7640–7647
8. Ajeel FN, Mohammed MH, Khudhair AM (2019) Energy bandgap engineering of graphene nanoribbon by doping phosphorous impurities to create nano-heterostructures: a DFT study. Physica E Low-dimensional Syst Nanostruct 105:105–115
9. Huang B (2011) Electronic properties of boron and nitrogen doped graphene nanoribbons and its application for graphene electronics. Phys Lett A 375(4):845–848
10. Rani P, Dubey GS, Jindal VK (2014) DFT study of optical properties of pure and doped graphene. Physica E Low-dimensional Syst Nanostruct 62:28–35
11. Wakabayashi K, Sudipta D (2012) Nanoscale and edge effect on electronic properties of graphene. Solid State Commun 152(15):1420–1430
12. Krasheninnikov AV, Nieminen RM (2011) Attractive interaction between transition-metal atom impurities and vacancies in graphene: a first-principles study. Theor Chem Acc 129(3–5):625–630
13. Anithaa VS, Shankar R, Vijayakumar S (2017) Adsorption of Mn atom on pristine and defected graphene: a density functional theory study. J Mol Model 23(4):132
14. Zhou S et al (2017) Nitrogen-doped graphene on transition metal substrates as efficient bifunctional catalysts for oxygen reduction and oxygen evolution reactions. ACS Appl Mater Interfaces 9(27):22578–22587
15. Tang X et al (2019) Chemically deposited palladium nanoparticles on graphene for hydrogen sensor applications. Sci Rep 9(1):1–11
16. Uchoa B, Lin C-Y, Castro Neto AH (2008) Tailoring graphene with metals on top. Phys Rev B 77(3):035420
17. Wei G et al (2013) Ni-doped graphene/carbon cryogels and their applications as versatile sorbents for water purification. ACS Appl Mater Interfaces 5(15):7584–7591
18. Rad AS (2015) First principles study of Al-doped graphene as nanostructure adsorbent for NO2 and N2O: DFT calculations. Appl Surface Sci 357:1217–1224
19. Gholami S et al (2016) Adsorption of adenine on the surface of nickel-decorated graphene; a DFT study. J Alloys Compd 686:662–668
20. Düzenli D (2016) A comparative density functional study of hydrogen peroxide adsorption and activation on the graphene surface doped with N, B, S, Pd, Pt, Au, Ag, and Cu atoms. J Phys Chem C 120(36):20149–20157
21. Velázquez-López L-F et al (2019) DFT study of CO adsorption on nitrogen/boron doped graphene for sensor applications. J Mol Model 25(4):91
22. O'Boyle NM, Tenderholt AL, Langner KM (2008) J Comp Chem 29:839–845
23. Shukri MSM et al (2019) Structural and electronic properties of CO and NO gas molecules on Pd-doped vacancy graphene: a first principles study. Appl Surface Sci 494:817–828
24. Jaiswal NK, Srivastava P (2011) Structural stability and electronic properties of Ni-doped armchair graphene nanoribbons. Solid State Commun 151(20):1490–1495

Chapter 14

Design of an Automatic Reader for the Visually Impaired Using Raspberry Pi

Nabendu Bhui, Dusayanta Prasad, Avishek Sinha, and Pratyay Kuila

1 Introduction

Visually impaired people are incapable of doing most visual tasks, of which reading is the most important. They face many troubles due to inaccessible infrastructure and social challenges. Visually impaired people usually read documents through the Braille system, where text is converted to Braille literature. Braille technology uses combinations of raised dots to spell letters and numbers; Braille is not a language but a system of writing. Transferring any paper or book into Braille is time-consuming and complicated. It is not feasible to transfer daily information into Braille literature, and for that reason visually impaired readers are not kept up to date with the community [1]. The most valuable thing for a disabled person is gaining independence. A blind person can lead an independent life with some specifically designed adaptive aids. There is a lot of adaptive equipment that can enable a blind person to live independently, but it is not easily available in the local


shops or markets. A blind person needs to hunt and put in much effort to get each piece of equipment that can take them one step closer toward independence. The main objective of this device is to read a printed paper or book written in multiple languages (English, Bengali and Hindi). All the processing is done on a low-power single-board computer, the Raspberry Pi. Using this reader, blind users feel independent because they do not need any third person. They can recognize most things by touch, and for that reason we built our design around switches: by pressing the push buttons, any person can control the reader machine.

2 Related Works

In [1], the authors proposed a prototype which helps a blind person listen to text images in English and Tamil. Text is read by taking images of the text and converting each image to audio output in the above-mentioned languages. This was done with the help of a Raspberry Pi 3 Model B, a web camera, the Tesseract OCR (Optical Character Recognition) engine and the Google Speech API (Application Program Interface), which is the text-to-speech engine. The disadvantage of the system is that it produces unclear output with an incorrect regional accent, along with speech problems for Tamil. In [2], the authors proposed a smart book reader for the visually challenged based on optical character recognition; a Raspberry Pi 3 kit and the Raspberry Pi Camera Module are used, with Google Tesseract for OCR and Pico for text-to-speech. In the pre-processing stage, this method uses binarization, de-noising, deskewing and segmentation techniques for image clarity. Sometimes a mobile application allows blind people to “read” text using a photo-to-speech application: a combination of OCR and a Text-to-Speech (TTS) framework is integrated, and with a smartphone the user takes a picture and hears the text that exists in it. A drawback is that it does not provide any automatic system for capturing images [3]. Generally, optical character recognition recognizes the text in captured image data; the scanned or photographed document is converted into an electronic transcript. The digital text is synthesized into voice using speech synthesis (TTS) technology and played through an audio system. Such a system is constructed using a Raspberry Pi, an HD camera and a Bluetooth headset [4]. In [5], the authors proposed a model which enables a user to hear any text in real time without the pain of reading. The whole process is established with the help of OCR and TTS frameworks on a Raspberry Pi v2. The disadvantage of the system is that captured images were blurred, so the OCR sometimes gives wrong results. The system proposed in [6] reads text present anywhere to assist blind persons; its disadvantage is spelling problems in the OCR output. In [7], the authors proposed a camera-based label reader for blind persons. A camera captures the image of the text or board; the image is pre-processed, and the label is separated from the processed image with the help of the OpenCV library. After


identifying the text, it is pronounced through voice. A motion-based method is applied to detect the object or the text written on a board, hoarding or other surface. Vasanth K et al. [8] proposed a self-assistive device where live streaming speech is sent to the Google API; after conversion of speech to text, the speech is played via a speaker and the result is displayed on an LCD screen, but a good internet connection is needed for this method. In [9], the authors designed a voice-based navigation system for blind people in a hospital environment: with ultrasonic sensors and an RFID reader interfaced with a Raspberry Pi 3, an obstacle avoidance system locates the exact place in the hospital. Most of these models depend on good internet access, and most have OCR problems; owing to these shortcomings, our proposed method requires no internet, and several processing steps are added for a better OCR result.

3 System Design

3.1 Working Principle

There are three push buttons in our proposed model. The first push button chooses the particular language, the second captures images and the third reads the text. The flowchart of the proposed model is shown in Fig. 1, and the working principle of the proposed system is as follows.

• First, we put a paper or book under the Raspberry Pi camera.
• Press the first button; the number of presses is counted, because each count corresponds to a language.

Fig. 1 Flowchart of the approach


• Then, by pressing the second button, an image is captured with the help of the Raspberry Pi Camera: the camera module takes an image of the paper or book.
• The image is converted to grayscale in the pre-processing stage, converted to text using OCR (Tesseract), and the recognized text is then read out using TTS (eSpeak); these three processes run on the Raspberry Pi.
• At last, the device asks for the paper or book to be read; pressing the third button makes it read the recognized text through a speaker. (A minimal end-to-end sketch of this pipeline is given below.)
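The sketch drives the camera, Tesseract and eSpeak through subprocess calls; the exact command-line options are our assumptions, chosen to match the tools named in the chapter.

```python
import subprocess

LANG = {1: "eng", 2: "ben", 3: "hin"}  # Tesseract language codes

def capture_and_read(clicks):
    lang = LANG.get(clicks, "eng")
    # capture a still image with the Raspberry Pi camera
    subprocess.run(["raspistill", "-o", "img.jpg"], check=True)
    # OCR the image into OutputLanguage.txt for the chosen language
    subprocess.run(["tesseract", "img.jpg", "OutputLanguage", "-l", lang],
                   check=True)
    # speak the recognized text through eSpeak
    subprocess.run(["espeak", "-f", "OutputLanguage.txt"], check=True)
```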

3.2 Raspberry Pi 4 Model B

The Raspberry Pi 4 Model B is the latest product in the popular Raspberry Pi computer line. Its processor speed, memory, connectivity and multimedia performance are better than those of previously released Raspberry Pi versions. The Raspberry Pi [10] Foundation provides Raspbian, a Debian-based Linux distribution, for download. The board has a Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit 1.5 GHz processor; 1 GB, 2 GB or 4 GB LPDDR4 memory (depending on the model); LAN, Bluetooth 5.0, Gigabit Ethernet, 2 × USB 3.0 and 2 × USB 2.0 ports for connectivity; 40 general-purpose input/output (GPIO) pins; and a Micro SD card slot for loading the operating system and for data storage.

3.3 Used Components in the Reader

The Raspberry Pi Camera v2 is used to capture images of the text. A high-quality 8-megapixel Sony IMX219 image sensor with a fixed-focus lens is present in Camera v2. On the upper side of the Raspberry Pi board, a small camera socket is present, built for camera interfacing [10]. A USB mini speaker is used to play the audio generated by the speech synthesizer; it has a sleek surface, USB power input with a 3.5 mm jack, and a USB 2.0 plug-and-play power connection. A breadboard is used to build temporary circuits for testing or trying out an idea: no soldering is required, so it is easy to change connections and replace components, and parts are not damaged and can be reused afterward. A push button is a simple type of switch that controls an action in a machine or some type of process. Push buttons are often part of a bigger system connected through a mechanical linkage; through this linkage we connect our push buttons, placed on the breadboard, with the Raspberry Pi system. Jumper wires connect the GPIO pins with the push buttons: they are wires with connector pins at each end, allowing two points to be connected without soldering. Jumper wires typically come in three versions: male-to-male, male-to-female and female-to-female. In our case, we have used male-to-female jumper wires to create the connection between the breadboard and the Raspberry Pi.


3.4 Proposed Hardware Model

Using the Raspberry Pi and all the components, we have built the following model. The diagrams show the whole model from different views: full frontal, near and front views in Fig. 2, and side, corner and top views in Fig. 3. We created a box-type model, where the paper or book is placed under the Raspberry Pi camera module, after which all processing is done by the Raspberry Pi. We created this box to fix the position of the paper or book; this fixed position lets us capture a correct image of the given text. The Raspberry Pi Camera is interfaced to the Raspberry Pi via its customized slot, the CSI camera connector. Speakers are interfaced via the 3.5 mm audio-out socket, and power is supplied through a power adapter. The front side of the camera module faces downward, and the pages or book are placed under it for taking images.

Fig. 2 Proposed Model (Full frontal view, Near view and Front view)

Fig. 3 Proposed Model (Side view, Corner view and Top view)

Table 1 GPIO interfacing

Pin No. | Pin name | Description
6 | Ground | Push Button 2 (Side 1)
9 | Ground | Push Button 3 (Side 1)
11 | GPIO 17 | Push Button 3 (Side 2)
12 | GPIO 18 | Push Button 2 (Side 2)
13 | GPIO 27 | Push Button 1 (Side 2)
14 | Ground | Push Button 1 (Side 1)

3.5 GPIO Interfacing

General-purpose input/output (GPIO) [11] pins act as input or output and are controllable by the user at run time. Which physical pin serves which push button is described in Table 1. We use three push buttons, connected to the GPIO header with jumper wires. Physical pins 13 and 14 are used for the first push button, pins 6 and 12 for the second, and pins 9 and 11 for the third. Pins 6, 9 and 14 are ground pins, while pins 11, 12 and 13 (GPIO 17, 18 and 27) are used for input-output purposes.
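A minimal sketch of this wiring in Python with the RPi.GPIO library follows; since each button connects a GPIO pin to ground, the internal pull-up resistors are enabled and a LOW reading means "pressed".

```python
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)                       # BCM numbering: GPIO 17/18/27
BUTTONS = {"language": 27, "capture": 18, "read": 17}

for pin in BUTTONS.values():
    # buttons are wired to ground, so enable the internal pull-ups
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def is_pressed(name):
    return GPIO.input(BUTTONS[name]) == GPIO.LOW
```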

3.6 Proposed Approach

Blind persons can recognize materials by touch, so we use three push buttons to control the reader device: the first selects the language, the second captures the image and the third triggers reading. Every push button is connected to the Raspberry Pi GPIO pins with jumper wires, and the controlling pseudo code (Algorithm 1) is written in Python. The proposed model supports the English, Bengali and Hindi languages. After capturing an image, we first process it for a better result: we convert the RGB (red, green, blue) image into a grayscale image (Algorithm 2). Due to lighting or coloring effects, the original RGB image may often give wrong results, which is why we convert it to grayscale [12, 13]. Grayscale images measure the intensity of light using the equation Y = 0.299R + 0.587G + 0.114B, where Y is the luma of the image; luma represents the achromatic image (a gray image, an intermediate color between black and white) [14, 15]. After that, the image is converted to a 256-gray-level black-and-white image. The image has two backgrounds, a dark background on the top and bottom and a brighter background between them; binarization preserves the two backgrounds but deletes the unwanted pattern behind the text. Conversion of the grayscale image into a binary image is done by thresholding: a threshold value is selected, and each pixel is classified as 0 (black) if its gray level is below the threshold and as 1 (white) if it is equal to or above it [16–18]. A minimal sketch of this pre-processing step is shown below.
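The chapter does not state how its threshold value is selected, so Otsu's automatic threshold is used here as an assumption.

```python
import cv2

img = cv2.imread("img.jpg")                    # BGR image from the camera
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Y = 0.299R + 0.587G + 0.114B
# classify pixels as 0 (black) or 255 (white) around the chosen threshold
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("img.jpg", binary)                 # overwrite with the clean image
```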


After pre-processing, we write the “ocr.sh” shell program (Algorithm 3), which converts the image to text using Tesseract OCR, and the “audio.sh” shell program (Algorithm 4), which speaks the generated text.

Algorithm 1: Embedding actions to GPIO inputs.
Input: C = clicks of the push button used to choose the language (English, Bengali, Hindi); B1 = push button 1 on GPIO pin 27; B2 = push button 2 on GPIO pin 18; B3 = push button 3 on GPIO pin 17; C = 0, B1press = 0, B2press = 0 (initially no button pressed)
Output: audio prompts about pressing the push buttons; subprocess calls.
1: if (B1 = Pressed) then
2:   for every click: C = C + 1
3:   B1press = 1            # button 1 has been pressed
4: end if
5: if (B2 = Pressed) then
6:   if (B1press = 0) then prompt “Press the first button to choose your language.”
7:   if (B1press = 1) then
8:     announce “Scan button pressed.”
9:     call subprocess (“ocr.sh” shell script)
10:    B2press = 1          # button 2 has been pressed
11:  end if
12: end if
13: if (B3 = Pressed) then
14:  if (B1press = 0) then
15:    if (B2press = 0) then prompt “Press the second button to capture the image.”
16:    prompt “Press the first button to choose your language.”
17:  end if
18:  if (B1press = 1 and B2press = 1) then
19:    announce “Read button pressed.”
20:    call subprocess (“audio.sh” shell script)
21:  end if
22: end if

Algorithm 2: Python program for converting the original image to grayscale with binarization.
Input: I = image file taken using the Raspberry Pi camera
Output: I = image file generated after binarization
1: if (I exists) then
2:   convert the original image to grayscale
3:   weighted average of each RGB pixel: 0.299*pixel[0] + 0.587*pixel[1] + 0.114*pixel[2]
4:   binarize the grayscale image by thresholding
5:   save the image
6: end if

Algorithm 3: Shell script for OCR (ocr.sh).
Input: (1) C = total clicks, generated by the “switch.py” Python program; (2) I = image file generated after binarization; (3) language codes cl1 = 1 (English), cl2 = 2 (Bengali), cl3 = 3 (Hindi)
Output: generated text file for the selected language (“OutputLanguage.txt”)
1: if (C = 0) then prompt “Press the first button to choose your language.”; exit
2: if (C > 3) then prompt “Maximum press must be three.”; exit
3: capture the image using the Raspberry Pi camera
4: if (I exists) then convert the image
5: run the Tesseract OCR engine:
6:   if (C = cl1) then Tesseract converts “img.jpg” to “EnglishOutput.txt”
7:   if (C = cl2) then Tesseract converts “img.jpg” to “BengaliOutput.txt”
8:   if (C = cl3) then Tesseract converts “img.jpg” to “HindiOutput.txt”

Algorithm 4: Shell script for playing audio (audio.sh).
Input: O = generated text file (“OutputLanguage.txt”)
Output: audio speaking the output text
1: if (O exists) then speak the “OutputLanguage.txt” text file using eSpeak


Fig. 4 Accessing File

Here, the first algorithm shows how the push-button procedure works; the second shows how the captured image is converted to a binarized grayscale image; the third shows how the original text is generated from the image with the help of the OCR framework for a particular language; and the fourth shows how the text is converted into audio.

4 Result and Discussion

4.1 Getting Input Images and Output Text

To show the outputs of this work, we connected a Linux system over a common Wi-Fi network using the Secure File Transfer Protocol (SFTP). After following all the steps, we obtain outputs in the form of audio. Since the generated audio cannot be shown directly, all outputs were taken out on the Linux system via SFTP; the extraction procedure is shown in Fig. 4.

4.2 Extracted Images and Generated Output

Since it is not possible to display the audio files directly, we have taken screenshots of the generated text outputs. For the supported languages, we show some captured images with their output text files. All screenshots, taken from the Linux system, are shown below.


After applying OCR (Tesseract), the captured original image is converted into text. The captured original images and screenshots of the generated text output in the English, Bengali and Hindi languages are shown in Figs. 5, 6 and 7, respectively.

5 Conclusion and Future Work

This work improves step by step over some similar projects. The model is built from different parts: the image is captured using a Raspberry Pi camera, the text is recognized using the Tesseract OCR framework, and the text is then read out through eSpeak TTS. To create a more efficient model with good outcomes, processing and optimization are important. Sometimes OCR gives incorrect text due to processing problems, and the final result then becomes meaningless text; for this reason, pre-processing is an important part of the whole system. If the characters are clear and large there is no problem, but lighting issues, small characters, or images between the text give somewhat unexpected results. In our proposed method, binarization gives better results for the English, Bengali and Hindi languages, and no internet is needed. The proposed model is already in a feasible state of use due to the fixed box-type design, but some future work remains:

• The recognition of large texts is sometimes slow, so introducing a distributed technique is reserved for future work.
• The model currently reads only English, Bengali and Hindi; in the future it should support many other languages.
• Menu options could be added that allow the user to play and pause the audio containing the synthesized text.
• If an image or diagram is present between the text, recognizing it is an important task, which is also reserved for future work.

By extending these future works, many more people will benefit from this reader.


Fig. 5 Extracted Image and Generated Output for English Language: (a) Captured Image and (b) Output Text


Fig. 6 Extracted image and generated output for Bengali language: captured image and output text

Fig. 7 Extracted image and generated output for Hindi language: captured image and output text


References

1. Akila IS, Akshaya B, Deepthi S, Sivadharshini P (2018) A text reader for the visually impaired using Raspberry Pi. In: Second international conference on computing methodologies and communication (ICCMC). IEEE, pp 778–782
2. Sonth S, Kallimani JS (2017) OCR based facilitator for the visually challenged. In: International conference on electrical, electronics, communication, computer, and optimization techniques (ICEECCOT). IEEE, pp 1–7
3. Neto R, Fonseca N (2014) Camera reading for blind people. Procedia Technol 16:1200–1209
4. Thiyagarajan S, Kumar GS, Kumar EP, Sakana G (2018) Implementation of optical character recognition using Raspberry Pi for visually challenged person. Int J Eng Technol 7(3.34):65–67
5. Balaramakrishna JN, Geetha MJ. Smart text reader from image using OCR and OpenCV with Raspberry Pi 3
6. Nagaraja L, Nagarjun RS, Nishanth MA, Nithin D, Veena SM (2015) Vision based text recognition using Raspberry Pi. Int J Comput Appl 975:8887
7. Raja A, Reddy KD (2015) Compact camera based assistive text product label reading and voice output for visually challenged people. IJITECH
8. Vasanth K, Macharla M, Varatharajan R (2019) A self assistive device for deaf and blind people using IoT. J Med Syst 43(4):88
9. Reshma A, Rajathi GM (2018) Voice based navigation system for visually impaired people using RFID tag. In: International conference on intelligent data communication technologies and Internet of Things. Springer, Cham, pp 1557–1564
10. Teach, learn, and make with Raspberry Pi. https://www.raspberrypi.org/. Last accessed 07 Dec 2019
11. GPIO—Raspberry Pi documentation. https://www.raspberrypi.org/documentation/usage/gpio/. Last accessed 07 Dec 2019
12. Blum R, Bresnahan C. Python programming for Raspberry Pi, Sams teach yourself in 24 hours. https://books.google.co.in/books/about/Python_Programming_for_Raspberry_Pi_Sams.html?id=RYGsAQAAQBAJ&redir_esc=y. Last accessed 07 Dec 2019
13. Camera—Processing for Pi. https://pi.processing.org/tutorial/camera/. Last accessed 07 Dec 2019
14. In image processing applications, why do we convert from RGB to grayscale?—Quora. https://www.quora.com/In-image-processing-applications-why-do-we-convert-from-RGB-to-Grayscale. Last accessed 07 Dec 2019
15. Grayscale—Wikipedia. https://en.wikipedia.org/wiki/Grayscale. Last accessed 07 Dec 2019
16. Glossary—pixel values. https://homepages.inf.ed.ac.uk/rbf/HIPR2/value.htm. Last accessed 07 Dec 2019
17. https://www.research.ibm.com/haifa/projects/image/glt/binar1.html. Last accessed 07 Dec 2019
18. https://www.programcreek.com/python/example/89427/cv2.threshold. Last accessed 07 Dec 2019

Chapter 15

Power Maximization Under Partial Shading Conditions Using Advanced Sudoku Configuration

Gunjan Bharti, Venkata Madhava Ram Tatabhatla, and Tirupathiraju Kanumuri

1 Introduction

Electricity is a basic need, and most base-load electricity is generated from non-renewable energy sources such as coal and nuclear power; burning coal produces CO2, which causes global warming and pollution. To overcome this, solar energy is used. Solar energy, radiation from the sun, is capable of producing electricity with fewer environmental impacts [1]. PV is noiseless and has zero emissions, and its maintenance and operation are simple [2]. The major factors that affect a PV system are variations of irradiance, temperature, and aging, due to which its efficiency and performance decrease; higher temperature also creates bubbles, corrosion, etc. [3]. A bypass diode is installed to reduce thermal stress due to hotspots, but it reduces power generation by slightly lowering the voltage output of the PV system [4]. To reduce the effect of the bypass diode, different interconnection topologies such as series–parallel (SP), bridge-link (BL), honey-comb (HC) and total cross-tied (TCT) are used [5]. The TCT topology is superior to the other interconnection topologies under various shading conditions [6]. Temporary shading, shading from buildings, shading from the location, self-shading and direct shading are examples of partial shading, in which one or more PV cells of an array are exposed to less radiation than the rest of the array [7]. The PV characteristics become more complex, with multiple peaks, leading to hotspots and difficulty in tracking the maximum power under partial shading conditions [8]. To maintain uniform row currents, there are two relocation techniques: electrical array reconfiguration (EAR) and static reconfiguration [9]. In EAR techniques, various sensors and switches are used to


detect the shading, which increases the cost and complexity of the system [10]. The static reconfiguration technique with fixed TCT interconnection is based on a permanent arrangement of modules [11]. For example, in the Sudoku arrangement, the positions of one complete column remain unchanged, so the shadow remains undistributed, the output power is reduced and multiple peaks appear in the PV characteristics [12]. Two-phase array reconfiguration has a highly complex arrangement [13]. For large PV arrays, simple, sensor-less, fixed reconfiguration schemes become more complicated when modules are relocated [14]. The main drawback of the competence-square-based PV array reconfiguration technique is the laborious task of physical relocation [15]. Based on the static reconfiguration technique, an Advanced Sudoku reconfiguration is proposed for generating maximum power during shading. In this paper, TCT and Advanced Sudoku are compared for five types of shading conditions. The organization of the paper is as follows: Sect. 2 presents the PV system modeling, which gives the PV module current through a practical single-diode model; the total cross-tied connection is explained in Sect. 3, and the proposed Advanced Sudoku reconfiguration is explained in detail in Sect. 4. In Sect. 5, the performance of the proposed reconfiguration and the TCT arrangement is compared. The paper is concluded in Sect. 6.

2 PV System Modeling

The mathematical modeling of a PV cell can be done using the equation

I = I_ph − I_0 [exp((V + I R_S)/(n_s V_T)) − 1] − (V + I R_S)/R_P    (1)

where I_ph is the photocurrent (A); I_0 is the diode saturation current (A); R_S is the series resistance (Ω); R_P is the shunt resistance (Ω); n_s is the number of cells in series; V_T is the thermal voltage equivalent (V); V is the PV module voltage (V); and I is the PV module current (A) [16]. A practical single-diode model is used and is shown in Fig. 1.
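Equation (1) is implicit in I and must be solved numerically at each operating voltage. A minimal sketch of one way to do this, using bracketed root finding; the parameter values are assumed purely for demonstration and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single-diode parameters (assumed values, not from the paper)
I_ph = 0.63      # photocurrent (A)
I_0 = 1e-9       # diode saturation current (A)
R_s = 0.1        # series resistance (ohm)
R_p = 100.0      # shunt resistance (ohm)
n_s = 5          # cells in series
V_T = 0.026      # thermal voltage equivalent (V)

def module_current(V):
    """Solve Eq. (1) for I at a given module voltage V."""
    def f(I):
        return (I_ph
                - I_0 * (np.exp((V + I * R_s) / (n_s * V_T)) - 1.0)
                - (V + I * R_s) / R_p
                - I)
    # f is strictly decreasing in I, so a single root lies in this bracket
    return brentq(f, -5.0, I_ph + 1.0)

# I-V sweep for one module; the maximum of V*I is the module's MPP
voltages = np.linspace(0.0, 2.99, 50)
currents = [module_current(v) for v in voltages]
powers = [v * i for v, i in zip(voltages, currents)]
print(f"Peak module power ~ {max(powers):.2f} W")
```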

3 TCT Connection

In Fig. 2, the TCT configuration is considered for analysis. There are 81 modules arranged in nine rows and nine columns; r denotes the row and c the column. Applying KVL,

V_m = Σ_{r=1}^{9} V_{mr}    (2)


Fig. 1 Practical single-diode model


Fig. 2 TCT configuration

where V_m is the PV array voltage and V_{mr} denotes the panel voltage at the rth row. The current produced by a PV module is given by

I = K I_max    (3)


K = G/G_0    (4)

where I_max is the current produced by the PV module at the standard irradiance G_0 = 1000 W/m², and G is the actual irradiance of the PV module. Applying KCL, the total array current I_m satisfies the node balance

Σ_{r=1}^{9} (I_{cr} − I_{(c+1)r}) = 0,  c = 1, 2, 3, …, 8    (5)
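Because the modules of a TCT row are tied in parallel, Eqs. (3)–(5) make each row current simply the sum of the irradiance-scaled module currents K·I_m. A short sketch (with I_m = 1 per unit and an assumed placement of the shaded modules, since the exact columns are not stated) that reproduces the row currents of Eqs. (9)–(10) below:

```python
import numpy as np

G0 = 1000.0  # standard irradiance (W/m^2)

def row_currents(G, I_m=1.0):
    """Row currents of a 9x9 TCT array in multiples of I_m (Eqs. 3-4).

    G is a 9x9 array of module irradiances; modules in a row are tied
    in parallel, so their scaled currents K*I_m simply add.
    """
    K = np.asarray(G) / G0
    return K.sum(axis=1) * I_m

# Shading pattern 1: rows 6-7 end with 2x600 + 2x400 W/m^2,
# rows 8-9 end with 4x600 W/m^2, everything else at 1000 W/m^2.
G = np.full((9, 9), 1000.0)
G[5:7, 5:7] = 600.0   # assumed placement of the shaded modules
G[5:7, 7:9] = 400.0
G[7:9, 5:9] = 600.0
print(row_currents(G))  # rows 6-7 -> 7.0 Im, rows 8-9 -> 7.4 Im (Eqs. 9-10)
```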

4 Advanced Sudoku A logic-based puzzle is introduced in this paper. It has nine squares, and each square contains nine smaller squares; such a pattern is called a Sudoku pattern. In the panel labels, the first digit denotes the row and the second digit the column of the 9 × 9 array. The Advanced Sudoku puzzle and the pattern arrangement are shown in Fig. 3a, b, respectively. The electrical connection of the PV modules is the same as in TCT, but the physical positions of the modules are arranged in the Sudoku manner, so the shading is dispersed and maximum power is obtained. Panel 52 is physically moved to the third row, second column, but its electrical connection remains in the fifth row. Similarly, panel 87, which belongs to the eighth row, seventh column, is shifted to the third row, seventh column, as shown in Fig. 4. In this way, the physical locations of the panels are changed without changing the electrical connections in the array. To analyze the proposed technique, five different shading patterns are considered, and the PV characteristics are obtained for each shading condition for both the TCT and Advanced Sudoku arrangements, as sketched below.

Fig. 3 a Advanced Sudoku puzzle, b pattern arrangement
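The relocation rule of Fig. 3 can be read as a mapping: the digit at a physical position of the puzzle gives the electrical row of the panel placed there, while the column is unchanged. A sketch that rebuilds the arrangement of Fig. 3b (and the panel 52 and 87 moves described above) from the puzzle of Fig. 3a:

```python
# Columns of the Advanced Sudoku puzzle (Fig. 3a): sudoku_cols[c][p] is the
# electrical row of the panel physically placed at row p+1, column c+1.
sudoku_cols = [
    [9, 6, 1, 4, 5, 2, 3, 8, 7],
    [3, 8, 5, 9, 1, 7, 4, 6, 2],
    [7, 4, 2, 6, 8, 3, 9, 5, 1],
    [8, 5, 7, 1, 3, 6, 2, 4, 9],
    [4, 3, 9, 2, 7, 8, 5, 1, 6],
    [1, 2, 6, 5, 4, 9, 7, 3, 8],
    [5, 7, 8, 3, 2, 1, 6, 9, 4],
    [2, 9, 4, 8, 6, 5, 1, 7, 3],
    [6, 1, 3, 7, 9, 4, 8, 2, 5],
]

def arrangement(p, c):
    """Panel label 'rc' placed at physical row p, column c (1-indexed)."""
    electrical_row = sudoku_cols[c - 1][p - 1]
    return 10 * electrical_row + c

# Panel 52 (electrical row 5, column 2) moves to physical row 3:
assert arrangement(3, 2) == 52
# Panel 87 (electrical row 8, column 7) moves to physical row 3:
assert arrangement(3, 7) == 87
print([arrangement(1, c) for c in range(1, 10)])  # first physical row of Fig. 3b
```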



Fig. 4 Advanced Sudoku schematic arrangements

5 Performance Evaluation

The performance of the proposed system is evaluated on a 9 × 9 PV array, shown in Fig. 3. Five shading patterns are analyzed on the TCT and Advanced Sudoku arrangements. The solar panel specifications at standard test conditions are given in Table 1.

Table 1 PV module specifications at STC

Parameters                  Ratings
Output power                1.5 W
Open-circuit voltage        2.99 V
Short-circuit current       0.63 A
Current at maximum power    0.5945 A
Voltage at maximum power    2.556 V

Case 1 (Shading pattern 1) In shading pattern 1, we consider three different irradiation levels. Group one receives an irradiance of 1000 W/m², group two receives 600 W/m², and group three receives 400 W/m², as shown in Fig. 5. The location of the global maximum power point for the TCT and Advanced Sudoku arrangements is calculated and presented in Table 2.


Table 2 Location of global peak in TCT and Advanced Sudoku

TCT arrangement                          | Advanced Sudoku
Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im) | Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im)
IR6 = 7      9   63    | IR1 = 8      9   72
IR7 = 7      8   56    | IR8 = 8      8   64
IR8 = 7.4    7   51.8  | IR4 = 8      7   56
IR9 = 7.4    6   44.4  | IR5 = 8      6   48
IR1 = 9      5   45    | IR3 = 8.2    5   41
IR2 = 9      4   36    | IR7 = 8.2    4   32.8
IR3 = 9      3   27    | IR9 = 8.2    3   24.6
IR4 = 9      2   18    | IR2 = 8.6    2   17.2
IR5 = 9      1   9     | IR6 = 8.6    1   8.6


Fig. 5 a TCT PV arrangement, b Advanced Sudoku arrangement, c shading dispersion of Advanced Sudoku arrangement of shading pattern 1

I_{R1} = K_{11} I_{11} + K_{12} I_{12} + K_{13} I_{13} + … + K_{19} I_{19}    (6)

where K_{11} = G_{11}/G_0 = 1, G_{11} is the solar irradiance of module 11, and I_{11} is the current generated by module 11. Assume that each module generates a current I_m at standard test conditions. The current generated by row-1 is then

I_{R1} = 9 I_m    (7)


All PV panels in row-2, row-3, row-4, and row-5 receive a uniform irradiance of 1000 W/m². The current generated by these rows is

I_{R2} = I_{R3} = I_{R4} = I_{R5} = 9 I_m    (8)

In row-6 and row-7, the first five PV modules receive 1000 W/m²; of the remaining four modules, two receive 600 W/m² and two receive 400 W/m². The current generated by row-6 and row-7 is

I_{R6} = I_{R7} = 5 I_m + 2 × 0.6 I_m + 2 × 0.4 I_m = 7 I_m    (9)

In row-8 and row-9, the last four PV modules receive 600 W/m², and the rest receive 1000 W/m². The current generated by row-8 and row-9 is

I_{R8} = I_{R9} = 5 I_m + 4 × 0.6 I_m = 7.4 I_m    (10)

Since the currents generated in the first five rows are equal but those in the last four rows differ, multiple peaks appear in the PV characteristics. The power generated by the PV array when no row is bypassed is

P_a = 9 V_m I_m    (11)

For the Advanced Sudoku arrangement, the current in each row is calculated as

I_{R1} = I_{R4} = I_{R5} = I_{R8} = 7 I_m + 0.6 I_m + 0.4 I_m = 8 I_m    (12)

I_{R2} = I_{R6} = 8 I_m + 0.6 I_m = 8.6 I_m    (13)

I_{R3} = I_{R7} = I_{R9} = 7 I_m + 2 × 0.6 I_m = 8.2 I_m    (14)

The powers of the proposed method and TCT are 110.9 W and 102.7 W, respectively, so the proposed method yields 8.2 W more than TCT. The PV characteristics are shown in Fig. 6. The Advanced Sudoku arrangement produces the highest GMPP, with an enhancement of 7.98% over TCT, as shown in Fig. 15.

Case 2 (Shading pattern 2) In this case, the array has four irradiation levels, 1000 W/m², 300 W/m², 400 W/m², and 700 W/m², as shown in Fig. 7. The location of the global maximum power point for the TCT and Advanced Sudoku arrangements is calculated and presented in Table 3.
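The global-peak search summarized in Tables 2–6 follows one recipe: operating at a given row's current bypasses every weaker row, so each row current paired with the count of still-conducting rows gives a candidate power in V_m·I_m units. A sketch that reproduces the Table 2 peaks (63 V_m·I_m for TCT, 72 V_m·I_m for Advanced Sudoku):

```python
def global_peak(row_currents):
    """Global peak of the Vm*Im products as in Tables 2-6.

    Operating at the current of a given row bypasses every row with a
    smaller current; with k rows left conducting, the array voltage is
    k*Vm, so the candidate power is (row current) * k in Vm*Im units.
    """
    ordered = sorted(row_currents)                 # weakest row first
    candidates = [(i, 9 - j) for j, i in enumerate(ordered)]
    powers = [i * v for i, v in candidates]
    return max(powers), candidates

tct = [9, 9, 9, 9, 9, 7, 7, 7.4, 7.4]             # Eqs. (7)-(10)
sudoku = [8, 8.6, 8.2, 8, 8, 8.6, 8.2, 8, 8.2]    # Eqs. (12)-(14)
print(global_peak(tct)[0], global_peak(sudoku)[0])  # 63.0 vs 72.0 (Table 2)
```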


Fig. 6 P-V characteristics of shading pattern 1


Fig. 7 a TCT PV arrangement, b Advanced Sudoku arrangement, c shading dispersion of Advanced Sudoku arrangement of shading pattern 2

Fig. 8 P-V characteristics of shading pattern 2

In the TCT configuration, all PV panels in row-1 through row-5 receive a uniform irradiance of 1000 W/m². The current generated by these rows is

I_{R1} = I_{R2} = I_{R3} = I_{R4} = I_{R5} = 9 I_m    (15)

In row-6 and row-7, the first two modules receive 400 W/m², the next two modules receive 700 W/m², and the remaining five modules receive 1000 W/m² irradiance.


Fig. 9 a TCT PV arrangement, b Advanced Sudoku arrangement, c shading dispersion of Advanced Sudoku arrangement of shading pattern 3

Fig. 10 P-V characteristics of shading pattern 3


Fig. 11 a TCT PV arrangement, b Advanced Sudoku arrangement, c shading dispersion of Advanced Sudoku arrangement of shading pattern 4


Fig. 12 P-V characteristics of shading pattern 4


Fig. 13 a TCT PV arrangement, b Advanced Sudoku arrangement, c shading dispersion of Advanced Sudoku arrangement of shading pattern 5

Fig. 14 P-V characteristics of shading pattern 5

I_{R6} = I_{R7} = 2 × 0.4 I_m + 2 × 0.7 I_m + 5 I_m = 7.2 I_m    (16)

In row-8 and row-9, the first two modules receive 400 W/m², the next two modules receive 300 W/m², and the remaining five modules receive 1000 W/m². The current generated by these rows is

I_{R8} = I_{R9} = 2 × 0.4 I_m + 2 × 0.3 I_m + 5 I_m = 6.4 I_m    (17)

Fig. 15 Comparison of maximum power for Advanced Sudoku and TCT arrangements under different shading patterns

Table 3 Location of global peak in TCT and Advanced Sudoku

TCT arrangement                          | Advanced Sudoku
Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im) | Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im)
IR9 = 6.4    9   57.6  | IR2 = 7.5    9   67.5
IR8 = 6.4    8   51.2  | IR4 = 7.7    8   61.6
IR7 = 7.2    7   50.4  | IR7 = 7.8    7   54.6
IR6 = 7.2    6   43.2  | IR9 = 8      6   48
IR1 = 9      5   45    | IR3 = 8.1    5   40.5
IR2 = 9      4   36    | IR6 = 8.1    4   32.4
IR3 = 9      3   27    | IR1 = 8.3    3   24.9
IR4 = 9      2   18    | IR5 = 8.3    2   16.6
IR5 = 9      1   9     | IR8 = 8.4    1   8.4

For the Advanced Sudoku arrangement, the current in each row is calculated as

I_{R1} = I_{R5} = 0.3 I_m + 8 I_m = 8.3 I_m    (18)

I_{R2} = 2 × 0.4 I_m + 0.7 I_m + 6 I_m = 7.5 I_m    (19)

I_{R3} = I_{R6} = 0.4 I_m + 0.7 I_m + 7 I_m = 8.1 I_m    (20)

I_{R4} = 0.4 I_m + 0.3 I_m + 7 I_m = 7.7 I_m    (21)

I_{R7} = 2 × 0.4 I_m + 7 I_m = 7.8 I_m    (22)

I_{R8} = 0.4 I_m + 8 I_m = 8.4 I_m    (23)

I_{R9} = 0.7 I_m + 0.3 I_m + 7 I_m = 8 I_m    (24)

The powers of the proposed method and TCT are 103.9 W and 92.22 W, respectively, so the proposed method yields 11.68 W more than TCT. The PV characteristics are shown in Fig. 8. The Advanced Sudoku arrangement produces the highest GMPP, with an enhancement of 12.66% over TCT, as shown in Fig. 15.

Case 3 (Shading pattern 3) In this case, the array has four irradiation levels, 1000 W/m², 200 W/m², 400 W/m², and 600 W/m², as shown in Fig. 9. The location of the global maximum power point for the TCT and Advanced Sudoku arrangements is calculated and presented in Table 4. In the TCT configuration, in row-1 and row-2, the first five modules receive 1000 W/m², the next two modules receive 200 W/m², and the last two modules receive 600 W/m² irradiance. The current generated by row-1 and row-2 is

I_{R1} = I_{R2} = 5 I_m + 2 × 0.2 I_m + 2 × 0.6 I_m = 6.6 I_m    (25)

In row-3 and row-4, the first five modules receive 1000 W/m², the next two modules receive 200 W/m², and the last two modules receive 400 W/m². The current generated by row-3 and row-4 is

I_{R3} = I_{R4} = 5 I_m + 2 × 0.2 I_m + 2 × 0.4 I_m = 6.2 I_m    (26)

Table 4 Location of global peak in TCT and Advanced Sudoku

TCT arrangement                          | Advanced Sudoku
Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im) | Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im)
IR4 = 6.2    9   55.8  | IR5 = 7.4    9   66.6
IR3 = 6.2    8   49.6  | IR3 = 7.6    8   60.8
IR2 = 6.6    7   46.2  | IR7 = 7.6    7   53.2
IR1 = 6.6    6   39.6  | IR8 = 7.6    6   45.6
IR9 = 9      5   45    | IR1 = 7.8    5   39
IR8 = 9      4   36    | IR2 = 7.8    4   31.2
IR7 = 9      3   27    | IR6 = 7.8    3   23.4
IR6 = 9      2   18    | IR4 = 8.4    2   16.8
IR5 = 9      1   9     | IR9 = 8.6    1   8.6


All PV panels in row-5, row-6, row-7, row-8, and row-9 receive a uniform irradiance of 1000 W/m². The current generated by these rows is

I_{R5} = I_{R6} = I_{R7} = I_{R8} = I_{R9} = 9 I_m    (27)

For the Advanced Sudoku arrangement, the current in each row is calculated as (Fig. 9)

I_{R1} = I_{R2} = I_{R6} = 0.2 I_m + 0.6 I_m + 7 I_m = 7.8 I_m    (28)

I_{R3} = I_{R7} = I_{R8} = 0.2 I_m + 0.4 I_m + 7 I_m = 7.6 I_m    (29)

I_{R4} = 0.4 I_m + 8 I_m = 8.4 I_m    (30)

I_{R5} = 2 × 0.2 I_m + 7 I_m = 7.4 I_m    (31)

I_{R6} = 0.2 I_m + 0.6 I_m + 7 I_m = 7.8 I_m    (32)

I_{R9} = 0.6 I_m + 8 I_m = 8.6 I_m    (33)

The powers of the proposed method and TCT are 104.9 W and 91.65 W, respectively, so the proposed method yields 13.25 W more than TCT. The PV characteristics are shown in Fig. 10. The Advanced Sudoku arrangement produces the highest GMPP, with an enhancement of 14.45% over TCT, as shown in Fig. 15.

Case 4 (Shading pattern 4) In this case, the array has five irradiation levels, 1000 W/m², 100 W/m², 200 W/m², 300 W/m², and 500 W/m², as shown in Fig. 11. The location of the global maximum power point for the TCT and Advanced Sudoku arrangements is calculated and presented in Table 5. In the TCT configuration, in row-1 and row-2, the first two modules receive 100 W/m², the next two modules receive 500 W/m², and the remaining modules receive 1000 W/m². The current generated by row-1 and row-2 is

I_{R1} = I_{R2} = 5 I_m + 2 × 0.1 I_m + 2 × 0.5 I_m = 6.2 I_m    (34)

In row-3 and row-4, the first two modules receive 200 W/m², the next two modules receive 300 W/m², and the remaining modules receive 1000 W/m². The current generated by row-3 and row-4 is


Table 5 Location of global peak in TCT and Advanced Sudoku

TCT arrangement                          | Advanced Sudoku
Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im) | Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im)
IR4 = 6      9   54    | IR9 = 7.3    9   65.7
IR3 = 6      8   48    | IR6 = 7.4    8   59.2
IR2 = 6.2    7   43.4  | IR1 = 7.5    7   52.5
IR1 = 6.2    6   37.2  | IR8 = 7.6    6   45.6
IR9 = 9      5   45    | IR5 = 7.7    5   38.5
IR8 = 9      4   36    | IR4 = 7.7    4   30.8
IR7 = 9      3   27    | IR7 = 7.8    3   23.4
IR6 = 9      2   18    | IR3 = 8.1    2   16.2
IR5 = 9      1   9     | IR2 = 8.3    1   8.3

I_{R3} = I_{R4} = 5 I_m + 2 × 0.2 I_m + 2 × 0.3 I_m = 6 I_m    (35)

All PV modules in row-5, row-6, row-7, row-8, and row-9 receive a uniform irradiance of 1000 W/m². The current generated by these rows is

I_{R5} = I_{R6} = I_{R7} = I_{R8} = I_{R9} = 9 I_m    (36)

For the Advanced Sudoku arrangement, the current in each row is calculated as

I_{R1} = 0.2 I_m + 0.3 I_m + 7 I_m = 7.5 I_m    (37)

I_{R2} = 0.3 I_m + 8 I_m = 8.3 I_m    (38)

I_{R3} = 0.1 I_m + 8 I_m = 8.1 I_m    (39)

I_{R4} = I_{R5} = 0.2 I_m + 0.5 I_m + 7 I_m = 7.7 I_m    (40)

I_{R6} = 0.1 I_m + 0.3 I_m + 7 I_m = 7.4 I_m    (41)

I_{R7} = 0.5 I_m + 0.3 I_m + 7 I_m = 7.8 I_m    (42)

I_{R8} = 0.1 I_m + 0.5 I_m + 7 I_m = 7.6 I_m    (43)


I_{R9} = 0.1 I_m + 0.2 I_m + 7 I_m = 7.3 I_m    (44)

The powers of the proposed method and TCT are 103.3 W and 88.26 W, respectively, so the proposed method yields 15.04 W more than TCT. The PV characteristics are shown in Fig. 12. The Advanced Sudoku arrangement produces the highest GMPP, with an enhancement of 17.04% over TCT, as shown in Fig. 15.

Case 5 (Shading pattern 5) In shading pattern 5, the 9 × 9 PV array is partially shaded at the center with various irradiance levels, as shown in Fig. 13. The PV array has five irradiation levels: 1000 W/m², 200 W/m², 300 W/m², 400 W/m², and 600 W/m². The location of the global maximum power point for the TCT and Advanced Sudoku arrangements is calculated and presented in Table 6. In the TCT configuration, row-1, row-2, row-7, row-8, and row-9 receive a uniform irradiance of 1000 W/m². The current generated by these rows is

I_{R1} = I_{R2} = I_{R7} = I_{R8} = I_{R9} = 9 I_m    (45)

The current generated in row-3 and row-4 is

I_{R3} = I_{R4} = 5 I_m + 2 × 0.4 I_m + 2 × 0.2 I_m = 6.2 I_m    (46)

Table 6 Location of global peak in TCT and Advanced Sudoku

TCT arrangement                          | Advanced Sudoku
Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im) | Row current (Im), bypass order  Va (Vm)  Power Pa (Vm·Im)
IR4 = 6.2    9   55.8  | IR3 = 7.5    9   67.5
IR3 = 6.2    8   49.6  | IR6 = 7.5    8   60
IR6 = 6.8    7   47.6  | IR8 = 7.5    7   52.5
IR5 = 6.8    6   40.8  | IR7 = 7.7    6   46.2
IR9 = 9      5   45    | IR1 = 8      5   40
IR8 = 9      4   36    | IR2 = 8      4   32
IR7 = 9      3   27    | IR9 = 8      3   24
IR6 = 9      2   18    | IR5 = 8.2    2   16.4
IR5 = 9      1   9     | IR4 = 8.6    1   8.6

The current generated in row-5 and row-6 is


I_{R5} = I_{R6} = 5 I_m + 2 × 0.3 I_m + 2 × 0.6 I_m = 6.8 I_m    (47)

In the Advanced Sudoku arrangement, the current in each row is calculated as

I_{R1} = I_{R2} = I_{R9} = 0.4 I_m + 0.6 I_m + 7 I_m = 8 I_m    (48)

I_{R3} = I_{R6} = I_{R8} = 0.3 I_m + 0.2 I_m + 7 I_m = 7.5 I_m    (49)

I_{R4} = 0.6 I_m + 8 I_m = 8.6 I_m    (50)

I_{R5} = 0.2 I_m + 8 I_m = 8.2 I_m    (51)

I_{R7} = 0.4 I_m + 0.3 I_m + 7 I_m = 7.7 I_m    (52)

The powers of the proposed method and TCT are 105.3 W and 92.04 W, respectively, so the proposed method yields 13.26 W more than TCT. The PV characteristics are shown in Fig. 14. The Advanced Sudoku arrangement produces the highest GMPP, with an enhancement of 14.40% over TCT, as shown in Fig. 15.

6 Conclusion Partial shading decreases the maximum power of a PV array, and the loss depends on the shading pattern. In this paper, an Advanced Sudoku arrangement is proposed, and its maximum power is obtained for different shading patterns; it improves PV power generation under different shading conditions. Five different shading patterns were analyzed, and the Advanced Sudoku arrangement gave the maximum power output for every pattern. In shading pattern 1, the maximum power is increased by 7.98%; in shading patterns 2, 3, 4, and 5, it is increased by 12.66%, 14.45%, 17.04%, and 14.40%, respectively, compared with the TCT arrangement. The method is cost effective and reduces complexity.

References 1. Sahoo SK (2016) Renewable and sustainable energy reviews solar photovoltaic energy progress in India: a review. Renew Sustain Energ Rev 59:927–939 2. Moosavian SM, Rahim NA, Selvaraj J, Solangi KH (2013) Energy policy to promote photovoltaic generation. Renew Sustain Energ Rev 25:44–58 3. Manganiello P, Balato M, Vitelli M (2015) A survey on mismatching and aging of PV modules: the closed loop. IEEE Trans Industr Electron 62(11):7276–7286


4. Silvestre S, Boronat A, Chouder A (2009) Study of bypass diodes configuration on PV modules. Appl Energ 86(9):1632-1640 5. Sahu HS, Nayak SK, Mishra S (2015) Maximizing the power generation of a partially shaded PV array. IEEE J Emerg Sel Top Power Electron 4(2):626–637 6. Villa LFL, Picault D, Raison B, Bacha S, Labonne A (2012) Maximizing the power output of partially shaded photovoltaic plants through optimization of the interconnections among its modules. IEEE J Photovoltaics 2(2):154–163 7. Pendem SR, Mikkili S (2018) Modeling, simulation and performance analysis of solar PV array configurations (Series, Series–Parallel and Honey-Comb) to extract maximum power under partial shading conditions. Energ Rep 4:274–287 8. Bosco MJ, Mabel MC (2017) A novel cross diagonal view configuration of a PV system under partial shading condition. Sol Energ 158:760–773 9. Tatabhatla VMR, Agarwal A, Kanumuri T (2019) Improved power generation by dispersing the uniform and non-uniform partial shades in solar photovoltaic array. Energ Convers Manag 197:111825 10. Satpathy PR, Sharma R (2019) Power and mismatch losses mitigation by a fixed electrical reconfiguration technique for partially shaded photovoltaic arrays. Energ Convers Manag 192:52–70 11. Wang YJ, Hsu PC (2011) An investigation on partial shading of PV modules with different connection configurations of PV cells. Energy 36(5):3069–3078 12. Tatabhatla VMR, Agarwal A, Kanumuri T (2019) Performance enhancement by shade dispersion of solar photo-voltaic array under continuous dynamic partial shading conditions. J Clean Prod 213:462–479 13. Pillai DS, Rajasekar N, Ram JP, Chinnaiyan VK (2018) Design and testing of two phase array reconfiguration procedure for maximizing power in solar PV systems under partial shade conditions (PSC). Energ Convers Manag 178:92–110 14. Pillai DS, Ram JP, Nihanth MSS, Rajasekar N (2018) A simple, sensorless and fixed reconfiguration scheme for maximum power enhancement in PV systems. Energ Convers Manag 172:402–417 15. Dhanalakshmi B, Rajasekar N (2018) A novel competence square based PV array reconfiguration technique for solar PV maximum power extraction. Energ Convers Manag 174:897–912 16. Tamrakar V, Gupta SC, Sawle Y (2015) Single-diode PV cell modeling and study of characteristics of single and two-diode equivalent circuit. Electr Electron Eng Int J (ELELIJ) 4(3):12

Chapter 16

Investigations on Performance Indices Based Controller Design for AVR System Using HHO Algorithm R. Puneeth Reddy and J. Ravi Kumar

1 Introduction In power systems, it is necessary to maintain the output voltage of a synchronous generator at a constant level, which is done by an AVR system. In this paper, to improve efficiency, the AVR system is controlled by a PID controller; designing a PID controller for an AVR system, however, is always a challenging task. Previously, various optimization algorithms have been applied to this problem, such as particle swarm optimization (PSO) [1], a chaotic optimization algorithm [2], the artificial bee colony algorithm (ABC) [3], and the pattern search algorithm (PS) [4]. Recently, the PID controller of the AVR system has been optimized using the HHO algorithm [5]. This paper, however, compares the various performance indices used for optimizing the PID controller and deduces the best performance index for the AVR system. The paper is organized as follows: Sect. 2 explains the model of the Automatic Voltage Regulator system; the Harris Hawks optimization algorithm is described in Sect. 3; the HHO-based AVR system and the standard performance criteria are introduced in Sect. 4; and in Sect. 5 the results of the various performance criteria are compared and the best criterion is proposed, along with a comparison of the HHO-PID-based AVR with a GA-PID-based AVR system.

R. Puneeth Reddy (B) · J. Ravi Kumar U.G Student, Department of Electronics and Communication Engineering, NIT, Warangal, Telangana, India e-mail: [email protected] J. Ravi Kumar e-mail: [email protected]


2 Automatic Voltage Regulator System Model The typical function of an AVR system is to maintain the output voltage magnitude of a synchronous generator at a constant level. Generally, an AVR system contains five components, namely an amplifier, an exciter, a generator, a sensor, and a comparator. A basic model of the AVR system is shown in Fig. 1 [6], and its mathematical model is shown in Fig. 2.

2.1 Linearized Model of AVR System [7] Amplifier Model There are various types of amplifiers, such as rotating, magnetic, or modern electronic amplifiers [8]. The transfer function of the amplifier model of the AVR system is characterized by a gain k_a and a time constant T_a. The transfer function and parameter limits of the amplifier model are shown in Table 1.

Fig. 1 Basic model of AVR system [6]


Fig. 2 Mathematical model of basic AVR system

Table 1 Parameters of AVR with transfer functions and parameter limits [2, 3]

Component        Transfer function       Parameters
PID controller   k_p + k_i/s + k_d s     0 ≤ k_p ≤ 1.5, 0 ≤ k_i ≤ 1, 0 ≤ k_d ≤ 1
Amplifier        k_a/(1 + T_a s)         10 ≤ k_a ≤ 40, 0.02 ≤ T_a ≤ 1
Exciter          k_e/(1 + T_e s)         1 ≤ k_e ≤ 10, 0.4 ≤ T_e ≤ 1
Generator        k_g/(1 + T_g s)         0.7 ≤ k_g ≤ 1, 1 ≤ T_g ≤ 2
Sensor           k_s/(1 + T_s s)         k_s = 1, 0.001 ≤ T_s ≤ 0.06

Exciter Model There are various excitation systems, but the practical modern exciter is modeled linearly: the time constant is taken into consideration and nonlinearities are eliminated [8]. A first-order transfer function is used to model the exciter, characterized by a gain k_e and a time constant T_e; the transfer function and parameter limits of the exciter model are given in Table 1. Generator Model The emf generated in a synchronous machine depends on the magnetization curve, and the terminal voltage varies with the generator load [8]. The transfer function of the generator model is represented by a gain k_g and a time constant T_g; its transfer function and parameter limits are given in Table 1.


Fig. 3 Mathematical model of AVR system with PID controller

Sensor Model A potential transformer senses the voltage, which is then rectified by a bridge rectifier [8]. The transfer function of the sensor model is characterized by a gain k_s and a time constant T_s; its transfer function and parameter limits are shown in Table 1. PID Controller The step response of the bare AVR system is not efficient, since the oscillations produced seriously degrade the overall performance of the system. Hence, a PID controller is connected to the basic AVR system to enhance the overall performance; it helps in reducing the steady-state error and also gives better metrics. The mathematical model of a typical AVR system with a PID controller is shown in Fig. 3, and the parameters of a practical AVR system with their transfer functions are shown in Table 1 [8].
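The closed loop of Fig. 3 (PID, amplifier, exciter, and generator in the forward path, sensor in feedback) can be simulated directly. A minimal sketch using scipy.signal; the plant values below are nominal assumptions purely for illustration (the paper's actual values are those of Fig. 5 [1]), and the gains are one HHO-tuned set reported later in Table 2:

```python
import numpy as np
from scipy import signal

# Nominal plant values, assumed for illustration only
ka, Ta = 10.0, 0.1
ke, Te = 1.0, 0.4
kg, Tg = 1.0, 1.0
ks, Ts = 1.0, 0.01
kp, ki, kd = 0.62188, 0.45259, 0.21664  # HHO-tuned gains (Table 2)

def polymul(*polys):
    """Multiply polynomials given as coefficient lists (highest power first)."""
    out = np.array([1.0])
    for p in polys:
        out = np.polymul(out, p)
    return out

# Forward path: PID (kd s^2 + kp s + ki)/s times amplifier, exciter, generator
num_f = polymul([kd, kp, ki], [ka], [ke], [kg])
den_f = polymul([1, 0], [Ta, 1], [Te, 1], [Tg, 1])

# Close the loop with the sensor ks/(Ts s + 1) in feedback:
# T(s) = G(s) / (1 + G(s) H(s))
num_cl = polymul(num_f, [Ts, 1])
den_cl = np.polyadd(polymul(den_f, [Ts, 1]), polymul(num_f, [ks]))

sys_cl = signal.TransferFunction(num_cl, den_cl)
t, y = signal.step(sys_cl, T=np.linspace(0, 10, 2000))
print(f"final value ~ {y[-1]:.4f}, peak ~ {y.max():.4f}")
```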

3 HHO Algorithm [9] The Harris Hawks optimization algorithm [9] is inspired by behavior observed in nature: the cooperative chasing strategy of Harris hawks, in which the hawks collectively try to grab prey from various directions, is the main inspiration for the algorithm. The algorithm rests on an exploration phase and an exploitation phase. A detailed flow of the HHO algorithm is shown in Fig. 4. It basically starts by updating the energy of the prey using Eq. 4.

3.1 Exploration Phase In general, Harris hawks can easily identify the position of the prey (a rabbit) by sight, but in practice the prey cannot always be seen; the hawks therefore spend time monitoring the site and may detect the prey only after a while. The equations involved in the exploration phase are given below.


Fig. 4 Flow chart of HHO algorithm

X(t+1) = X_rand(t) − r_1 |X_rand(t) − 2 r_2 X(t)|,  if q ≥ 0.5    (1)

X(t+1) = (X_rabbit(t) − X_m(t)) − r_3 (LB + r_4 (UB − LB)),  if q < 0.5    (2)

where X(t+1) is the location of the hawks in the upcoming iteration, X_rabbit is the position of the prey (rabbit), X(t) is the current position of the hawks, r_1, r_2, r_3, r_4, q are random numbers generated in the interval (0, 1) and updated at the end of each iteration, LB and UB denote the lower and upper bounds of the variables, respectively, X_rand(t) is the position of a randomly chosen hawk, and X_m(t) is the average location of the present population of hawks, given by

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t)    (3)


where X_i(t) is the location of hawk i in iteration t, and N is the total population size of the hawks.

3.2 Transition from Exploration to Exploitation

The energy of the prey is given by

E = E_0 (1 − t/T)    (4)

where E_0 is the initial energy of the prey and T is the maximum number of iterations. In the exploration phase, the energy of the prey (E) is considered to be greater than or equal to 1; once it drops below one, the exploitation phase begins:

E ≥ 1 ⇒ exploration phase,  E < 1 ⇒ exploitation phase

3.3 Exploitation Phase Here the hawks make a surprise pounce on the prey detected in the previous iteration, while the prey tries to escape. Considering the escaping behavior of the prey and the chasing strategy of the hawks, the best position can be identified through four models. A random number r in the interval (0, 1) is generated and interpreted as follows: r < 0.5 ⇒ successful escape of the prey; r ≥ 0.5 ⇒ unsuccessful escape of the prey. In general, the prey is encircled by the hawks softly or hard, depending on its remaining energy: the hawks move closer and closer to the targeted prey and then start the besiege process, which takes two forms, soft besiege and hard besiege: E ≥ 0.5 ⇒ soft besiege, E < 0.5 ⇒ hard besiege. Soft Besiege The conditions that define soft besiege are r ≥ 0.5 and E ≥ 0.5. As the prey has enough energy to escape, it tries to mislead the hawks by random jumps until it becomes exhausted; meanwhile, the hawks encircle it softly and grab it. This behavior is represented as

Soft Besiege The conditions which defines the soft besiege are r ≥ 0.5 and E ≥ 0.5. As prey has enough energy for escaping, it tries to mislead by doing random jumps and at last prey gets exhausted. Meanwhile, Hawks encircles the prey softly and grabs prey smartly. This behaviour is represented as


X(t+1) = ΔX(t) − E |J X_rabbit(t) − X(t)|    (5)

ΔX(t) = X_rabbit(t) − X(t)    (6)

where ΔX(t) is the difference between the position of the prey and the current position of the hawk, r_5 is a random number in the interval (0, 1), and J = 2(1 − r_5) represents the random jump strength of the prey throughout the escaping procedure.

Hard Besiege The conditions that define hard besiege are r ≥ 0.5 and E < 0.5. Here the prey does not have sufficient energy to escape and is captured by the hawks easily. This behavior is represented as

X(t+1) = X_rabbit(t) − E |ΔX(t)|    (7)

Soft Besiege with Progressive Dives The conditions that define soft besiege with progressive dives are r < 0.5 and E ≥ 0.5. Here the prey has enough energy to escape successfully, so a soft besiege is still constructed before the surprise pounce. To model the escaping pattern of the prey, the levy flight (LF) concept is used, which mimics the motion of prey during various stages. In this case, the hawks decide their next move according to

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|    (8)

The hawks compare the result of this movement with the previous dive to check whether the dive will be efficient. Assuming the dive follows LF-based patterns, it is represented as

Z = Y + S × LF(D)    (9)

where D is the dimension of the problem and S is a random vector of size 1 × D. The final strategy for updating the positions of the hawks is

X(t+1) = Y,  if F(Y) < F(X(t))    (10)

X(t+1) = Z,  if F(Z) < F(X(t))    (11)

Hard Besiege with Progressive Rapid Dives The conditions that define hard besiege with progressive rapid dives are r < 0.5 and E < 0.5. Under these conditions, the prey does not have sufficient energy to escape, and a hard besiege is constructed around it, with the hawks trying to minimize the average distance to the escaping prey. Equations 10 and 11 also represent this stage:


X(t+1) = Y, if F(Y) < F(X(t));  X(t+1) = Z, if F(Z) < F(X(t))

where Y and Z are calculated by Eqs. 8 and 9, respectively.
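A compact sketch of the HHO loop described above, in Python. It follows Eqs. (1)–(11) with two stated simplifications: the energy decay uses the factor 2 of the original HHO formulation by Heidari et al. [9] (so that |E| ≥ 1 and exploration can actually occur), and the levy-flight dive is replaced by a small Gaussian step:

```python
import numpy as np

def hho(obj, lb, ub, n_hawks=30, max_iter=100, seed=0):
    """Minimal HHO sketch following Sect. 3 (simplified, see lead-in)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_hawks, dim))
    fits = np.array([obj(x) for x in X])
    best, best_f = X[fits.argmin()].copy(), fits.min()

    for t in range(max_iter):
        Xm = X.mean(axis=0)                                   # Eq. (3)
        for i in range(n_hawks):
            # Eq. (4); factor 2 per Heidari et al. [9] so |E| can exceed 1
            E = 2 * rng.uniform(-1, 1) * (1 - t / max_iter)
            r = rng.random()
            if abs(E) >= 1:                                   # exploration
                if rng.random() >= 0.5:                       # Eq. (1)
                    Xr = X[rng.integers(n_hawks)]
                    X[i] = Xr - rng.random() * abs(Xr - 2 * rng.random() * X[i])
                else:                                         # Eq. (2)
                    X[i] = (best - Xm) - rng.random() * (lb + rng.random() * (ub - lb))
            else:                                             # exploitation
                J = 2 * (1 - rng.random())
                if r >= 0.5 and abs(E) >= 0.5:                # soft besiege, Eqs. (5)-(6)
                    X[i] = (best - X[i]) - E * abs(J * best - X[i])
                elif r >= 0.5:                                # hard besiege, Eq. (7)
                    X[i] = best - E * abs(best - X[i])
                else:                                         # progressive dives, Eqs. (8)-(11)
                    Y = np.clip(best - E * abs(J * best - X[i]), lb, ub)
                    Z = np.clip(Y + 0.01 * rng.standard_normal(dim), lb, ub)
                    X[i] = Y if obj(Y) < obj(Z) else Z
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f

# Example use: tune (kp, ki, kd) by minimizing a performance index J(gains):
# best_gains, _ = hho(itae_of_gains, lb=[0, 0, 0], ub=[1.5, 1.0, 1.0])
```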

4 HHO PID Controller

In this step, we enhance the step response of the AVR system by finding optimum k_p, k_d, k_i values using the HHO algorithm. The ranges of k_p, k_d, k_i [1] are shown in Table 1, and their values are chosen randomly for each iteration. In this paper, standard performance indices, namely the integral of time-weighted absolute error (ITAE), integral square error (ISE), integral absolute error (IAE), and integral of time-weighted square error (ITSE), are compared using the HHO algorithm. The performance indices to be minimized by the HHO algorithm are represented by Eqs. 12–15, and the values of k_p, k_d, k_i resulting after minimizing each index are recorded after a predefined number of iterations.

ITAE = ∫_0^∞ t |e(t)| dt    (12)

ISE = ∫_0^∞ e(t)² dt    (13)

IAE = ∫_0^∞ |e(t)| dt    (14)

ITSE = ∫_0^∞ t e(t)² dt    (15)

where

e(t) = V_ref(t) − V_s(t)    (16)

(17)

16 Investigations on Performance Indices Based Controller Design …

215

Fig. 5 Model of AVR system with PID controller [1]

Fig. 6 Step response of AVR system without PID controller

where M_p denotes the overshoot of the step response, E_ss the steady-state error, t_s the settling time, t_r the rise time, and β a weighting factor ranging from 0.8 to 1.5; this range of β is chosen to decrease the settling time and steady-state error [1]. The mathematical model of the AVR system with PID, with the gain and time-constant values considered, is shown in Fig. 5 [1]. The step response of the AVR system without a PID controller is plotted in Fig. 6; from the simulation, we identified t_r = 0.2607 s, t_s = 6.9865 s, E_ss = 0.0908, and M_p = 65.61%.
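Given a simulated unit-step response (t, y), the indices of Eqs. (12)–(16) and the objective function of Eq. (17) can be evaluated numerically. A sketch assuming a unit reference and a 2% settling band (both assumptions, since the paper does not state the band):

```python
import numpy as np

def step_metrics(t, y, band=0.02):
    """Mp, Ess, ts, tr and the indices of Eqs. (12)-(16) for a unit step."""
    e = 1.0 - y                                    # Eq. (16), Vref = 1
    itae = np.trapz(t * np.abs(e), t)              # Eq. (12)
    ise = np.trapz(e**2, t)                        # Eq. (13)
    iae = np.trapz(np.abs(e), t)                   # Eq. (14)
    itse = np.trapz(t * e**2, t)                   # Eq. (15)
    y_final = y[-1]
    Mp = max(y.max() - y_final, 0.0) / y_final     # overshoot (fraction)
    Ess = abs(1.0 - y_final)
    # rise time: 10% -> 90% of the final value
    tr = t[np.argmax(y >= 0.9 * y_final)] - t[np.argmax(y >= 0.1 * y_final)]
    # settling time: last moment the response leaves the +/- band
    idx = np.where(np.abs(y - y_final) > band * y_final)[0]
    ts = t[min(idx[-1] + 1, len(t) - 1)] if idx.size else t[0]
    return dict(ITAE=itae, ISE=ise, IAE=iae, ITSE=itse,
                Mp=Mp, Ess=Ess, tr=tr, ts=ts)

def objective(m, beta=1.0):
    """Eq. (17): W = (1 - e^-beta)(Mp + Ess) + e^-beta (ts - tr)."""
    return ((1 - np.exp(-beta)) * (m["Mp"] + m["Ess"])
            + np.exp(-beta) * (m["ts"] - m["tr"]))
```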

5 Results After obtaining the k_p, k_i, and k_d values, we substitute them into the model of Fig. 5 and obtain step responses for each performance criterion. A convergence curve of best fitness versus number of iterations is also plotted for each performance criterion (Figs. 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 and 18).


Fig. 7 Convergence curve of ITAE performance index

Fig. 8 Step response of ITAE performance index

The efficiency of the Harris Hawks optimization-based PID is demonstrated by comparing its metrics with those of the GA-based PID [1]; the values of M_p, E_ss, t_r, and t_s of both algorithms after minimizing the objective function are shown in Table 2 [1]. The convergence curves of the performance criteria ITAE, ITSE, ISE, IAE, and the objective function reach a steady value before the number of iterations reaches fifty, even though the HHO algorithm was run for a hundred iterations for reference. Of all the performance indices, the objective function with β = 1 and β = 1.5 produces a zero steady-state error, and it can be clearly stated that the best combined values of t_s, t_r, M_p, and E_ss are obtained through the objective function [1] using β = 1 and β = 1.5. Therefore, it is found that minimizing the objective function instead of a


Fig. 9 Convergence curve of IAE performance index

Fig. 10 Step response of IAE performance index

standard performance index such as ITAE, ITSE, ISE, or IAE is the preferable choice for the PID-based AVR system.


Fig. 11 Convergence curve of ISE performance index

Fig. 12 Step response of ISE performance index

6 Conclusion In this paper, an in-depth analysis of the efficiency of the HHO algorithm was made by comparing its results with the GA algorithm [1]; it is shown that the HHO algorithm gives a better transient step response than the GA algorithm. The paper also compares the values of t_s, t_r, M_p, and E_ss obtained with the standard performance indices ITAE, ISE, IAE, and ITSE and with the objective function [1] for the AVR system using Harris Hawks optimization. The simulations showed that the objective function with β = 1 and β = 1.5


Fig. 13 Convergence curve of ITSE performance index

Fig. 14 Step response of ITSE performance index

produced better combined values of t_s, t_r, M_p, and E_ss. Considering these results, the objective function [1] represented by Eq. 17 proves to be the best performance index.


Fig. 15 Convergence curve of objective function (β = 1)

Fig. 16 Step response of objective function (β = 1)


Fig. 17 Convergence curve of objective function (β = 1.5)

Fig. 18 Step response of objective function (β = 1.5)


Table 2 GA-PID controller versus HHO-PID controller

β    Generations  Controller  k_p      k_i      k_d      M_p (%)  E_ss  t_s     t_r
1    50           GA-PID      0.8861   0.7984   0.3158   8.66     0     0.5980  0.2019
1    50           HHO-PID     0.6691   0.57836  0.25734  1.5174   0     0.3934  0.2622
1    100          GA-PID      0.7722   0.7201   0.3196   4.54     0     0.8645  0.2138
1    100          HHO-PID     0.62188  0.45259  0.21664  0.361    0     0.4655  0.3033
1.5  50           GA-PID      0.7717   0.5930   0.3507   3.62     0     1.0517  0.2003
1.5  50           HHO-PID     0.60469  0.4229   0.20396  0.1283   0     0.4937  0.3189
1.5  100          GA-PID      0.8372   0.6973   0.3927   6.17     0     0.9396  0.1859
1.5  100          HHO-PID     0.6274   0.46702  0.22098  0.5037   0     0.4566  0.2983

Table 3 Comparison of optimum controller values and M_p, E_ss, t_s, t_r for different numbers of iterations and population sizes

Performance criterion         Iterations  Population  k_p     k_i      k_d     Best fitness  t_r     t_s     M_p     E_ss
ITAE                          50          50          1.2913  0.88829  0.4147  0.0327        0.1559  0.8346  17.629  0.002
ITAE                          100         50          1.2904  0.88693  0.4144  0.0327        0.1559  0.8348  17.605  0.0019
ITSE                          50          50          1.3418  1        0.6742  0.0056        0.1139  1.053   20.857  0.0048
ITSE                          100         50          1.3418  1        0.6741  0.0056        0.1139  1.053   20.866  0.0048
ISE                           50          50          1.2311  1        1       0.0688        0.0876  1.3655  27.404  0.0024
ISE                           100         50          1.2307  1        1       0.0687        0.0876  1.3656  27.4    0.0024
IAE                           50          50          1.5     1        0.6455  0.1597        0.1155  0.687   22.522  0.0048
IAE                           100         50          1.5     1        0.6455  0.1597        0.1155  0.687   22.522  0.0048
Objective function (β = 1)    50          50          0.6691  0.57836  0.2573  6.1612        0.2622  0.3934  1.5174  0
Objective function (β = 1)    100         50          0.6218  0.4525   0.2166  4.1351        0.3033  0.4655  0.361   0
Objective function (β = 1.5)  50          50          0.6046  0.4229   0.2039  4.2219        0.3189  0.4937  0.1283  0
Objective function (β = 1.5)  100         50          0.6274  0.46702  0.2209  4.1471        0.2983  0.4566  0.5037  0


References 1. Gaing ZL (2004) A particle swarm optimization approach for optimum design of PID controller in AVR system. IEEE Trans Energ Convers 19(2):384–391 2. dos Santos Coelho L (2009) Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach. Chaos Solitons Fractals 39(4):1504–1514 3. Gozde H, Taplamacioglu MC (2011) Comparative performance analysis of artificial bee colony algorithm for Automatic Voltage Regulator (AVR) system. J Franklin Inst 348(8):1927–1946 4. Sahu BK, Panda S, Mohanty PK, Mishra N (2012) Robust analysis and design of PID controlled AVR system using pattern search algorithm. In: 2012 IEEE international conference on power electronics, drives and energy systems (PEDES). IEEE, pp 1–6 5. Ekinci S, Hekimoğlu B, Eker E (2019) Optimum design of PID controller in AVR system using Harris Hawks optimization. In: 2019 3rd international symposium on multidisciplinary studies and innovative technologies (ISMSIT). IEEE, pp 1–6 6. Gozde H, Taplamacioglu MC, Kocaarslan I (2010) Application of artificial bee colony algorithm in an Automatic Voltage Regulator (AVR) system. Int J Tech Phys Prob Eng 1(3):88–92 7. Yoshida H, Kawata K, Fukuyama Y, Takayama S, Nakanishi Y (2000) A particle swarm optimization for reactive power and voltage control considering voltage security assessment. IEEE Trans Power Syst 15(4):1232–1239 8. Eswaramma K, Kalyan GS (2017) An Automatic Voltage Regulator (AVR) system control using a PI-DD controller. Int J Adv Eng Res Dev 4(6) 9. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris Hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872

Chapter 17

Ethereum 2.0 Blockchain in Healthcare and Healthcare Based Internet-of-Things Devices Vaibhav Sagar and Praveen Kaushik

1 Blockchain in Healthcare The healthcare sector uses centralized application software to manage the medical history of patients. The patient data gathered through healthcare Internet-of-Things devices is vulnerable to attacks during transmission over a network, which could render the data useless. Interoperability of the medical software used by different healthcare institutions is an ongoing issue, and the medical history of patients is shared among hospitals only on a per-need basis. Designing a decentralized healthcare system that provides interoperability, security, and real-time access to data for patients and hospitals is therefore a challenge. Taking into consideration the significant features and applications that Blockchain [1] provides, one can only come up with better implementations of the same in the fields of healthcare [2], financial applications, banking, governance, supply chains [3], Internet-of-Things (IoT) [4], etc. Blockchain has been popular in recent years because of its implementation in a decentralized currency, Bitcoin [1], in the year 2008 by Satoshi Nakamoto. Bitcoin [1] is an implementation of Blockchain with Proof-Of-Work (POW) as the consensus algorithm, which dictates how new Bitcoins can be mined and how the generated blocks are verified and accepted as part of the Blockchain. Internet-of-Things (IoT) device is a name given to any device that, with the help of sensors, collects data and forwards it onto a network or a data-processing V. Sagar (B) · P. Kaushik Department of CSE, MANIT, Bhopal 462003, India e-mail: [email protected] P. Kaushik e-mail: [email protected]


application. As per data provided by Juniper Research, the number of connected Internet-of-Things (IoT) devices in the world is bound to increase to 50 billion in 2023, a 140% increase over the 21 billion IoT devices of 2018 [2]. Healthcare is one of the major applications that could leverage the decentralization and security features of Blockchain. It is also to be noted that IoT devices are of great importance for gathering real-time patient data, and with that comes the problem of the cyber-attacks to which these devices are vulnerable.

1.1 Literature Survey: Blockchain in Healthcare and Internet-of-Things Overview of Blockchain A Blockchain is a sequence of blocks of information in which each block is linked to exactly one previous block and one following block, with each block's data cryptographically hashed and stored in a decentralized manner. A Blockchain needs a way of verifying and certifying freshly generated blocks before they are appended to the end of the chain; these mechanisms are called consensus algorithms. Each block of information is immutable once it is hashed and verified by a validator using a consensus algorithm. There are many consensus algorithms, such as (i) Proof-Of-Work (POW), (ii) Proof-Of-Stake (POS), (iii) Practical Byzantine Fault Tolerance, (iv) Proof-Of-Capacity, (v) Proof-Of-Elapsed-Time, and (vi) Proof-Of-Burn. We discuss two of them briefly, since Bitcoin [1] uses POW and the other most popular consensus algorithm is POS. (i) In the POW method, mining resembles solving a mathematical puzzle to obtain a desired answer in limited time; the miner who does so successfully is rewarded with an incentive, and the block gets verified and appended to the Blockchain. This puzzle is more complex than one might imagine and takes a lot of computation power. (ii) The POS method guarantees that no single person with enough computation power can keep mining and determining the new blocks; instead, the mining capacity of each individual is defined by his or her stake in the total cryptocurrency implemented over that Blockchain. For instance, a person holding 1% of the total Ether (cryptocurrency) would be able to mine only 1% of the Proof-Of-Stake blocks in the Blockchain. The concept of a Private Blockchain [5], not accessible to the public, is not new: it implements the features of the Blockchain in a much more restricted yet secure environment. This, however, makes it more centralized and moves it farther from the core idea of the Blockchain, its decentralization. How much centralization is acceptable for a user-defined Private Blockchain, which keeps the security and other features of the Blockchain in a centralized setting, is still a topic of debate, because as


we move further towards a Private Blockchain, we also move farther away from the Public Blockchain features that make the technology so valuable and sought after today. Implementations of Blockchain in healthcare aim to provide a system that is at least decentralized, with added encryption techniques to secure data. Because of the use of Internet-of-Things (IoT) devices, the data gathered by these devices is prone to cyber-attacks that could compromise the medical records of the patient and mislead diagnosis. Blockchain Implementations in Healthcare Systems One such implementation is discussed by Jinglin Qiu et al. [2], where it is duly mentioned that MIT Media Lab proposed a blockchain-based architecture named MedRec [5] for maintaining Electronic Health Records (EHR) in the year 2016. MIT Media Lab tried to leverage the functionality and features provided by Blockchain to come up with a system that could ease decades-old problems such as medical data fragmentation, complete medical history access for patients, better quality and quantity of data to facilitate medical research, and platform independence. Ethereum [6] is an application of Blockchain technology and was proposed by Vitalik Buterin in the year 2013. One significant update in Ethereum [6] is its block generation period of 10–20 s, generally quoted as around 12 s on average. The risks of using Ethereum in healthcare are discussed by Jinglin Qiu et al. [2]: because of the open-source nature of Ethereum, which is a public Blockchain, the data stored on it is openly visible to the public, even though it is encrypted, and malicious activity could render the data inconsistent and damaged. Changes made by the administrator of the Electronic Health Records (EHR), such as table changes, name additions, and approved manipulation of records, also need to be handled appropriately and consistently in its Blockchain-based implementation MedRec [5]; evidently, how this is handled by MedRec [5] is still not known. Sina Rafati Niya et al. [7] discuss a Proof-Of-Stake method called Bazo [7] for Blockchain and Internet-Of-Things (BIoT) devices. They enhanced the basic Bazo [7] design and reinforced it with sharding [9], Blockchain IoT wallets, etc., the aim being to evaluate the performance of the POS-based Bazo Blockchain equipped with sharding for IoT data streams. An intricate design consisting of an IoT device (an Arduino), an edge Blockchain client, and Bazo miners was fabricated, and successful collection of data by the Arduino device through the edge Blockchain client, with mining of blocks by Bazo miners located in the Google cloud, was shown. It can be concluded that POS-based Bazo can be used for developing fully functional Blockchains that do not require excessive computation power. Blockchain Implementations in IoT Devices Ethereum is gaining popularity because of a feature it provides called the Smart Contract. The term Smart Contract was first coined by Nick Szabo in 1994 and later brought innovation to the Blockchain platform. A Smart Contract is a piece of code that is executed once a certain condition is fulfilled; in cryptocurrency terms, it provides digital signing, verification, and execution of a contract. These Smart Contracts are written


in languages such as Solidity, Serpent, and LLL and operate over the Blockchain. The code essentially looks like an if-else condition that raises alerts and intimates the blockchain of the occurrence of an expected event. Smart Contracts define a way of writing code over a Blockchain-based application and hence can be an extremely useful tool. Seyoung Huh et al. discuss the usage of Ethereum and Smart Contracts for managing Internet-of-Things (IoT) devices. They conducted their readings on three Raspberry Pi devices, which were used to monitor and transmit the electricity usage, an air conditioner, and a light bulb. With a Smart Contract one can simply write code to, for example, turn on the power-saving mode once the total electricity usage surpasses a user-defined threshold. They were able to set up Smart Contracts on Ethereum, and after successful registration of devices, readings started to flow in. Even though the block generation period of Ethereum, between 10 and 20 s, is a considerable improvement over the 10-min block generation period of Bitcoin, it is still slow for some domains. In the paper by Dinan Fakhri et al. [8], titled Secure IoT Communication using Blockchain Technology, implementations with and without Blockchain were developed for IoT devices, specifically a smart refrigerator and a smart television. To test the integrity of the networks thus formed, sniffing attacks were performed in both cases. On the non-Blockchain-based design, these attacks captured the content of the error message, which displayed the actual contents of the devices themselves, whereas on the Blockchain-based design the captured contents were hexadecimal (due to encryption), hence not human-readable and not readily convertible back to the source. To test the security of the networks, the avalanche effect of the hash function and the encryption algorithm used in both designs was evaluated, and it was concluded that the security of the two networks was almost similar. Evident from the factual findings of the aforementioned papers, Blockchain-based implementations were not only decentralized but secure as well.
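The hash-linking described in the overview above can be made concrete in a few lines of Python; a toy sketch with no consensus or networking, purely to illustrate why an edit to an earlier block invalidates the chain (the patient fields are hypothetical):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """A toy block: the hash covers the payload and the previous hash,
    so changing any earlier block invalidates every later link."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block({"genesis": True}, "0" * 64)]
chain.append(make_block({"patient_id": "P-001", "bp": "120/80"}, chain[-1]["hash"]))
chain.append(make_block({"patient_id": "P-001", "sugar": 105}, chain[-1]["hash"]))

def verify(chain):
    """Recompute every hash and check the prev links (genesis trusted)."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in cur.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or recomputed != cur["hash"]:
            return False
    return True

print(verify(chain))                  # True
chain[1]["data"]["bp"] = "200/120"    # tamper with an earlier block
print(verify(chain))                  # False - the chain detects the edit
```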

2 Proposed System Design for POS-Blockchain Based Healthcare System with IoT Blockchain with healthcare and IoT provides many advantages, such as: (i) access to records for both parties, i.e., the patient and the hospital; (ii) data secured using 256-bit encryption; (iii) secure, encrypted transmission of the data collected from IoT devices attached to patients; and (iv) usage of Smart Contracts to raise alerts, among others. Architecture We propose an Ethereum-based Blockchain software that will provide its users easy access to data and interoperability between the different systems used


by the hospitals. Data is produced by two entities: (i) patients providing real-time data through the IoT devices they wear for monitoring of their daily stats, and (ii) manual interpreters sitting at desks in the hospital who log data on the surgeries conducted on a patient to maintain a medical history. Both of these data-gathering mediums are routed through the hospital network and are linked to patients with the help of their unique identifiers. We use a Proof-Of-Stake (POS) based implementation of Blockchain in which initial accounts of the hospitals are set up and given an equal stake in the total blockchain, which later serves as the key to validating the blocks being added. Ethereum 1.0 provides a user with the capability to create his or her own blockchain, write smart contracts, and validate the blocks being added to the blockchain thus made. Ethereum 1.0 and Ethereum 2.0 Until now, Ethereum 1.0 has been available as source code that can be cloned from a GitHub repository, after which a local Blockchain for private use can be set up on a host system; Proof-Of-Work implementations can be done using Ethereum 1.0. Ethereum 2.0 with Proof-Of-Stake functionality is set to release in July 2020. Ethereum 2.0 rewards validators with block rewards for enforcing the rules properly; anyone trying to cheat the system, or failing to do his or her job as a validator, is fined from the initial 32 ETH staked. Ethereum 2.0 is set to release in three phases, starting from Phase 0, in which a one-way transfer of accounts' ETH from Ethereum 1.0 is designed; Phase 2 covers support for Smart Contracts [9]. Figure 1 shows how the data is served to the blockchain. To reach the goal of defining such a blockchain, the following steps are undertaken. Initial setup: a LINUX operating system based machine or a virtual machine, with Git, a version control system, installed on the same machine. Furthermore, to differentiate from a regular blockchain implementation, the following innovations are made in the system: initial accounts are set up for 5 entities/hospitals in the blockchain to serve as stakeholders, each having 20% of the stake in the total blockchain.
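The stake rule proposed here, with each hospital validating a share of blocks proportional to its stake, reduces to a stake-weighted random draw. A sketch with the five equal-stake hospital accounts assumed above (the hospital names are hypothetical):

```python
import random

# Five initial hospital accounts, 20% stake each (per the proposed design)
stakes = {"Hospital-A": 20, "Hospital-B": 20, "Hospital-C": 20,
          "Hospital-D": 20, "Hospital-E": 20}

def pick_validator(stakes, rng=random):
    """Choose the validator of the next block with probability
    proportional to stake (random.choices is weight-aware)."""
    names, weights = zip(*stakes.items())
    return rng.choices(names, weights=weights, k=1)[0]

# Over many blocks, each hospital validates ~20% of them:
random.seed(7)
counts = {}
for _ in range(10_000):
    v = pick_validator(stakes)
    counts[v] = counts.get(v, 0) + 1
print(counts)
```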

Fig. 1 Design view

230

V. Sagar and P. Kaushik

Fig. 2 Public-key encryption in Blockchain

1. Block generation is done by two types of users: firstly the hospital, and secondly the patient.
2. The data of a single patient is identified with the help of a unique identifier, the patient ID, which is logged along with the data being entered into a block.
3. The data collected from the patient through an IoT device, such as monitors for daily blood pressure and sugar levels while they are not under the hospital's roof but still under its care and study, is governed with the help of Smart Contracts written in a language such as Solidity, which raise alerts on the occurrence of specific events.
4. Once a consensus has been reached for a new block of data, the block gets verified and added to the blockchain.
5. Public-key cryptography with 256-bit encryption is used to secure the data while storing it in the blockchain (Fig. 2).

The usage of a Proof-Of-Stake (POS) based Ethereum 2.0 blockchain, in which we have divided the total stake equally amongst the initial accounts, ensures that not just anyone can keep validating blocks. Only the accounts initially provided with the stake can validate blocks; hence, the data entered into the blocks by the hospitals and the patients is validated only by the hospitals and acts as history for the blocks of the future (a minimal sketch of this stake-based validation appears after the list below).

Design Challenges
There are already many implementations of Blockchain, e.g. the Bitcoin cryptocurrency, that use Proof-Of-Work as the consensus algorithm and in which miners/validators are given an incentive for identifying and validating a new block in the existing blockchain. This approach has some disadvantages, along with design challenges that occur while designing Blockchain-based software:
• If there is no incentive, miners/validators have no driving force to work and provide their computation power towards identifying and validating the new


blocks. It is quite evident from earlier works that a huge amount of computation power is required to solve the mathematical puzzle involved in verifying and validating the credibility of a new block being added.
• Miners/validators are regular human beings with good computation power, generally groups of people who invest together in building a system with an architecture potent enough to perform the complex calculations involved in the validation part of the blockchain. This also proves to be a challenge, since miners/validators with great computation power will be the ones validating the latest blocks, which concentrates the incentives/rewards with the few individuals who alone are able to get the proof-of-work satisfied.
• Among the consensus algorithms used in the verification/validation section of the blockchain, namely (i) Proof-Of-Work, (ii) Proof-Of-Stake, (iii) Practical Byzantine Fault Tolerance, (iv) Proof-Of-Capacity, (v) Proof-Of-Elapsed-Time, and (vi) Proof-Of-Burn, the one that suits our purpose best has to be identified. The major implementation of blockchain in cryptocurrencies has been Proof-Of-Work, which gives rise to the bias that only people/miners with huge computation power can get the Proof-Of-Work satisfied and be incentivized for it.
• The blocks created in a blockchain are immutable in nature: once the information is stored in a block and its cryptographic hash is calculated and verified by the miners/validators, there is no way of editing the contents of the block in the future, should the need arise. To work around this, we propose the solution of a unique identifier per patient, the patient ID. Every piece of data about a single patient is entered into blocks with the patient ID as the key identifier. So, instead of one block of information per patient, we have more than one block of information per patient, uniquely identified and linked together by the same patient ID.
• If we are to develop a blockchain for medical record-keeping, we would be developing a more centralized yet private version of blockchain. One has to find a way of validating the blocks of information without incentivizing the process, because the hospitals cannot give rewards for validating the information stored in their private blockchain. Thus, we propose the solution of a private blockchain with Proof-Of-Stake as the consensus algorithm. The hospitals that form the base accounts are given an equal stake in the Blockchain, permitting them to mine/validate that percentage of the total blocks. For example, 5 hospitals with a 20% stake each would each be able to mine 20% of the total blocks added to the blockchain. Also, since public-key cryptography with a public and private key pair is used to encrypt the data, patients can be provided with the private key to their blocks of information, which they can access and view by properly decrypting those blocks with their respective private keys.
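To illustrate the stake-proportional validation policy just described, here is a minimal Python sketch (our illustration, not the paper's implementation); the account names are hypothetical, and a production system would of course use Ethereum's own validator logic:

import random

# Five hospital accounts, each holding an equal 20% stake (hypothetical names).
stakes = {f"hospital_{i}": 0.20 for i in range(1, 6)}

def pick_validator(stakes: dict) -> str:
    """Choose the validator of the next block with probability proportional to stake."""
    accounts = list(stakes)
    weights = [stakes[a] for a in accounts]
    return random.choices(accounts, weights=weights, k=1)[0]

# Over many blocks, each hospital validates roughly 20% of them.
tally = {a: 0 for a in stakes}
for _ in range(10_000):
    tally[pick_validator(stakes)] += 1
print(tally)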


3 Conclusion and Future Work
The standard Proof-Of-Work (POW) implementations of Blockchain, such as Bitcoin, have a 10 min block generation period, whereas Ethereum averages about 12 s, which is beneficial in applications where data is generated rapidly and needs to be verified and validated at matching speed. Ethereum 2.0 will be a Proof-Of-Stake (POS) based implementation. The equal division of stakes between the organizing members of the Blockchain serves the purpose of block verification better than incentive-based validation by blockchain miners. These implementations of Blockchain beyond cryptocurrencies are relatively new and are better known as Blockchain 3.0 applications [10]. Ethereum 2.0 is yet to be released for public use. Decentralized data handling is a new approach for a large-scale system and needs to be tested for ease of understanding, ease of use, deployment, and maintenance by and for the end user. A Blockchain-based system design does not look different in its frontend/website interface; rather, it is the backend where the data is managed and stored for future access and record keeping. Healthcare implementations need to be understood and deployed at a small scale first to test their advantages. Designing a full-fledged electronic health record management system based on Blockchain should be sought, where hospitals can transfer the medical history of patients at a single mouse click and patients too have easy access to their up-to-date medical history.

References
1. Nakamoto S (2009) Bitcoin: a peer-to-peer electronic cash system. Posted on Cryptography Mailing List at metzdowd.com
2. Qiu J, Liang X, Shetty S, Bowden D (2018) Towards secure and smart healthcare in smart cities using blockchain. In: 2018 IEEE international smart cities conference (ISC2). IEEE, pp 1–4
3. Aich S, Chakraborty S, Sain M, Lee HI, Kim HC (2019) A review on benefits of IoT integrated blockchain based supply chain management implementations across different sectors with case study. In: 2019 21st international conference on advanced communication technology (ICACT). IEEE, pp 138–141
4. Huh S, Cho S, Kim S (2017) Managing IoT devices using blockchain platform. In: 2017 19th international conference on advanced communication technology (ICACT). IEEE, pp 464–467
5. Azaria A, Ekblaw A, Vieira T, Lippman A (2016) Medrec: using blockchain for medical data access and permission management. In: 2nd international conference on open and big data (OBD). IEEE, pp 25–30
6. Ethereum WG (2014) A secure decentralised generalised transaction ledger. In: Ethereum project yellow paper, vol 151, pp 1–32
7. Niya SR, Schiller E, Cepilov I, Maddaloni F, Aydinli K, Surbeck T, Stiller B (2019) Adaptation of Proof-of-Stake-based blockchains for IoT data streams. In: IEEE international conference on blockchain and cryptocurrency (ICBC). IEEE, pp 15–16


8. Fakhri D, Mutijarsa K (2018) Secure IoT communication using blockchain technology. In: International symposium on electronics and smart devices (ISESD). IEEE, pp 1–6
9. EthGasStation (2019) What is staking. https://ethgasstation.info/blog/what-is-staking/. Last accessed 25 February 2020
10. Maesa DDF, Mori P (2020) Blockchain 3.0 applications survey. J Parallel Distrib Comput

Chapter 18

IoT-Based Solution to Frequent Tripping of Main Blower of Blast Furnace Through Vibration Analysis

Kshitij Shinghal, Rajul Misra, and Amit Saxena

1 Introduction
The coke (essentially impure carbon) burns in the blast of hot air to form carbon dioxide in an exothermic reaction. This reaction is the main source of heat in the furnace. At the high temperature at the bottom of the furnace, carbon dioxide reacts with carbon to produce carbon monoxide, and it is the carbon monoxide which is the main reducing agent in the furnace [1, 2]. The common ores of iron are both iron oxides, namely Haematite (Fe2O3) and Magnetite (Fe3O4). These ores can be reduced to iron by heating them with carbon in the form of coke. A Blower in a Blast Furnace provides the cold blast at the desired flow and pressure for utilization at the hot blast stoves. The hot air is required to burn the coke in the furnace. Therefore, the reliability and availability of the Blower are of utmost importance for the iron-making process in the Blast Furnace (Figs. 1 and 2). The Blower is driven by a motor with a speed-increaser gearbox in between (refer to Fig. 3). Vibration probes are mounted on the Blower bearings (shown as #7 and #8 in Fig. 3), which trip the Blower at high vibration levels [3–6]. This protects the machine from damage due to high vibration in the system.

K. Shinghal · A. Saxena (B), Department of Electronics and Communication Engineering, Moradabad Institute of Technology, Moradabad, U.P., India; e-mail: [email protected]
K. Shinghal, e-mail: [email protected]
R. Misra, Department of Electrical Engineering, Moradabad Institute of Technology, Moradabad, U.P., India; e-mail: [email protected]



Fig. 1 Blower–B of Blast Furnace–1


Fig. 2 Hot air entry into Blast Furnace

The technical specifications of the Blower are given below:
Motor RPM = 2990
Motor rating = 2100 kW
Gearbox ratio = 1:2.83
Blower RPM = 8480
Blower flow rate = 31,500 m³/h
Blower discharge pressure = 2.2 kg/cm²
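As a quick reader-side arithmetic check of these specifications (our sketch, not part of the original installation), the gearbox ratio applied to the motor speed should approximately reproduce the blower speed, and the corresponding shaft frequencies are the fundamental lines a vibration signature analysis looks for:

# Consistency check of the nameplate data and the fundamental shaft
# frequencies relevant to vibration signature analysis.
motor_rpm = 2990
gear_ratio = 2.83                     # 1:2.83 speed increaser
blower_rpm = motor_rpm * gear_ratio   # about 8462 RPM, close to the 8480 RPM spec
print(f"computed blower speed : {blower_rpm:.0f} RPM (spec: 8480 RPM)")
print(f"motor shaft frequency : {motor_rpm / 60:.1f} Hz")    # about 49.8 Hz
print(f"blower shaft frequency: {blower_rpm / 60:.1f} Hz")   # about 141.0 Hz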


Fig. 3 Blower of Blast Furnace–1

2 Problem Description
During a routine check-up of the second Blower, it was found that the vibration at the motor DE bearing was increasing, accompanied by an abnormal sound. Vibration signature analysis of the motor bearings indicated increased clearance of the motor DE bearing and its incipient deterioration. The analysis also indicated weakness of the motor base, as the base vibration in displacement mode was in the range of 190–210 µ compared to just 30–40 µ at the corresponding points of the first Blower. The following corrective actions were taken:
• The motor was checked in decoupled condition. Vibration was found high, with an abnormal sound from the DE bearing, indicating increased bearing clearance and base weakness.
• The motor base bolts were re-tightened. The vibration level was still high.
• The motor DE bearing was replaced.
• The motor base was re-grouted.
After the corrective action, the vibration readings of the second Blower reduced drastically from 40.2 mm/s to just 6.1 mm/s. The motor bearing sound became normal, and the motor base vibration in displacement mode reduced from 210 µ to just 25 µ. But after running for two weeks, the second Blower started tripping at a high vibration level. Since the vibration probes were mounted on the Blower bearings only, it was assumed that the vibration level on these bearings was shooting up.


The vibration probes were bypassed for the time being and vibration measurement was done. Surprisingly, there was no increase in the vibration readings at any of the bearings. Then, the vibration probes were interchanged with the probes of the first Blower. Even then, the second Blower tripped. The reason was not clear, and the problem puzzled everybody.

3 Extrapolative Study of the Problem
The vibration level on the bearings of the second Blower was not high when measured with a portable vibration data collector, but the installed probes were still sensing high vibration, leading to the frequent tripping of the Blower. The following steps were taken to find the source of the high vibration in the system:
Step #1: Running the Blower at no-load, i.e. throwing the discharge air to the atmosphere in the direction parallel to the Blower base columns: It was found that there was no tripping of the Blower at the no-load condition. This indicated that there was no mechanical problem of unbalance or eccentricity of the Blower impeller.
Step #2: Trending the vibration readings of the on-line probes and comparing them with readings from the portable vibration data collector: For two days, the readings of the on-line probes were observed and compared with the vibration readings measured through the portable vibration data collector. The observation was done on an hourly basis, and no appreciable change in any of the vibration readings was observed. Refer to the vibration readings in Tables 1 and 2, which were measured and recorded during the no-load condition of the Blower. From the tabulated readings, it was clear that the vibration trend was almost constant in both cases, i.e. the vibration measured through the portable data collector and the vibration readings shown by the probes (a small check of this constancy is sketched after Table 2). Besides, the casing temperature trend of the Blower bearings was normal. This meant that the system was stable at the no-load condition.
Step #3: Interchanging the vibration probes of the first Blower and the second Blower: The vibration probes of the first Blower were installed on the second Blower. These probes also sensed high vibration at load conditions, and the second Blower tripped. It was clear that the previous probes of the second Blower were not malfunctioning; there was some internal problem with the system.
Step #4: Vibration measurement at different points on the motor, gearbox, and Blower base: It was found that there was no relatively high vibration at any of the bases. The readings were in the range of 25–40 µ only, which is quite normal. Therefore, the base condition was normal.


Table 1 Blower DE bearing (shown as #7 in Fig. 3). Displacement in µ (pk-pk); Horizontal/Vertical/Axial readings were measured through the portable vibration data collector, Probe 1/Probe 2 are the on-line readings

Day      Time      Horizontal  Vertical  Axial  Probe 1/Probe 2  Casing temp. °C
Monday   12:30 PM      9          8        11       28/48             75
         01:30 PM     10          7        10       30/46             76
         02:30 PM     10          8         8       28/44             77
         03:30 PM     10          8         9       29/44             75
         04:30 PM     10          9        11       28/44             74
         07:00 PM     10          8        10       28/46             75
         09:00 PM      9          6         7       29/46             74
Tuesday  09:00 AM     10          8        10       29/47             76
         12:30 PM     10          8        10       28/44             73
         02:30 PM      9          7        10       28/45             74
         05:00 PM     10          8        10       28/44             74

Table 2 Blower NDE bearing (shown as #8 in Fig. 3). Displacement in µ (pk-pk); Horizontal/Vertical/Axial readings were measured through the portable vibration data collector, Probe 1/Probe 2 are the on-line readings

Day      Time      Horizontal  Vertical  Axial  Probe 1/Probe 2  Casing temp. °C
Monday   12:30 PM      7          6         6       70/73             48
         01:30 PM      6          8         4       68/70             48
         02:30 PM      7          5         5       68/69             48
         03:30 PM      6          6         5       70/70             50
         04:30 PM      8          8         7       70/70             48
         07:00 PM      8          7         6       69/71             49
         09:00 PM      9          5         6       70/71             48
Tuesday  09:00 AM      9          6         5       72/72             50
         12:30 PM      9          7         6       73/72             46
         02:30 PM      8          7         6       72/72             47
         05:00 PM      9          8         6       72/72             45
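The constancy check referred to in Step #2 can be sketched in a few lines of Python (our illustration, using the portable-collector columns of Table 1):

# Trend-constancy check on the Table 1 Blower DE bearing readings
# (displacement in microns, portable vibration data collector).
import statistics

horizontal = [9, 10, 10, 10, 10, 10, 9, 10, 10, 9, 10]
vertical   = [8, 7, 8, 8, 9, 8, 6, 8, 8, 7, 8]
axial      = [11, 10, 8, 9, 11, 10, 7, 10, 10, 10, 10]

for name, series in [("horizontal", horizontal),
                     ("vertical", vertical),
                     ("axial", axial)]:
    print(f"{name:10s}: mean {statistics.mean(series):4.1f} µ, "
          f"std dev {statistics.pstdev(series):3.1f} µ")
# The small standard deviations relative to the means confirm the
# "almost constant" vibration trend noted in the text.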


Step #5: Vibration measurement on the suction and discharge ducts: The discharge duct had 90° bends at two points, as shown in Figs. 4 and 5. At these 90° bends, the maximum thrust of the pressurized air is felt, and the thrust tends to shake or vibrate the discharge duct in a lateral/horizontal direction, i.e. parallel to the earth's surface.

Fig. 4 Discharge duct from Blower

Fig. 5 Discharge duct on ground


If the duct is adequately supported at the 90° bends, the vibration is absorbed or dampened.
At no-load, i.e. when the discharge air was thrown to the atmosphere in the vertical direction, opposite to the pressurized-air thrust in the discharge line, the vibration readings were taken on the suction duct. They were found to be in the range of 20–40 µ, which was normal. The readings were compared with measurements done at the corresponding points on the suction duct of the first Blower and were more or less the same.
Similarly, vibration readings were taken on the discharge duct at no-load. The lateral readings were in the range of 30–70 µ, with fluctuations. Here, the readings were slightly higher than those taken at the corresponding points on the discharge duct of the first Blower. But the second Blower was running at no-load while the first Blower was running on load. This necessitated recording the readings of the discharge duct of the second Blower at load conditions. The vibration probes were bypassed for the time being and vibration measurement was done at the load condition. To the utter surprise of everyone, the discharge duct had a lateral vibration level in the range of 150–350 µ.
Step #6: Installing an IoT-based sensor system near the blowers for analysis: An IoT-based system was required for analysis of the problem. The real challenge was to implement an IoT-based system near the blowers [7–16]. However, with proper shielding and armoring, it became possible to successfully deploy the IoT-based system in a real industrial environment, as shown in Fig. 6. In the present case, the unforeseen impact of excavation work led to a heavy production loss for the company, as it took almost three days to find and correct the problem; with the help of the IoT-based system, timely assessment of the strength of the foundation base is possible.
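The monitoring loop of such a system reduces to sampling the sensor and comparing against a trip limit. The following is a minimal Python sketch (our illustration, not the deployed system); read_displacement_um() is a hypothetical stand-in for the real probe driver, and the 150 µ limit is an assumed threshold:

import time

TRIP_LIMIT_UM = 150  # assumed trip threshold in microns (pk-pk), illustrative

def read_displacement_um() -> float:
    """Hypothetical sensor read; replace with the actual probe interface."""
    raise NotImplementedError

def monitor(poll_seconds: float = 1.0) -> None:
    """Poll the probe and raise an alert when the trip limit is exceeded."""
    while True:
        value = read_displacement_um()
        if value > TRIP_LIMIT_UM:
            print(f"ALERT: vibration {value:.0f} µ exceeds {TRIP_LIMIT_UM} µ")
            # the real system would log the event and trip the blower here
        time.sleep(poll_seconds)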

4 Root Cause of the Problem
There was excavation work near the second Blower for a new machinery installation. It was observed that the soil near the main support column of the discharge duct had become loose: the excavation work must have affected the bearing strength of the soil, and the main support column of the discharge duct must have become weaker due to the non-compactness of the surrounding soil. During the no-load condition, air was thrown to the atmosphere, which exerted a downward thrust on the main support column. But during the load condition, the discharge air exerted a lateral thrust on the main support column. Since the soil around the main support column was loose, the column could not provide adequate rigidity against the pressurized-air thrust in the lateral direction. This resulted in the high vibration level on the discharge duct. Refer to Fig. 7, which illustrates in detail the reason for the high vibration in the duct during the load condition.


Fig. 6 IoT-based sensor system installed near Blower

Fig. 7 Direction of discharge air during no-load and load conditions



Fig. 8 Excavated area being re-filled, compacted, and cemented

High lateral vibration in the discharge duct induces a pulsation force inside the duct carrying the voluminous pressurized air. The pulsation phenomenon shakes or vibrates the duct, resulting in structural vibrations. As a result, the connected structure/casing starts vibrating beyond the permissible limit. Consequently, the probe near the connected structure/casing senses this background vibration and trips the machine, depending upon the set vibration limit.

5 Corrective Actions
The soil around the main support column was loose due to the excavation. The excavated area near the main support column was re-filled, compacted, and then cemented. A wall of cementing-cum-grouting work was built, which provided stability to the foundation base. The exposed surfaces of the Blower base column and the main support of the discharge duct were covered and peripherally supported by the cementing-cum-grouting work. This helped grip the soil, restoring its compactness. Refer to Fig. 8, showing the cement wall near the main support column.

6 Results
After strengthening the foundation through proper concreting and reinforcement work, the lateral vibration level of the discharge duct reduced from 350 µ to just 50 µ in the load condition. There was no tripping of the second Blower at the load condition. The IoT-based system was installed, and the vibration probes were functioning normally. The IoT-based


system performed vibration measurements on the motor, gearbox, and blower bearings, and all showed normal readings. The base vibration of the motor, gearbox, and blower was also normal. The IoT-based system has been running satisfactorily ever since, without any tripping problem owing to high vibration on the blower bearings.

7 Conclusion
The real task was to implement an IoT-based system near the blowers. However, with proper shielding and armoring, it became possible to successfully deploy the IoT-based system in a real industrial environment. It is generally observed that excavation work in the vicinity of large machinery has an adverse effect on the bearing strength of the soil. This weakens the foundation base as the soil becomes loose, especially when the supporting columns are exposed; the loose soil reduces the compactness and rigidity of the foundation base. Therefore, excavation work near a structural base is not desirable and must be forbidden near critical plant machinery. Here, the unforeseen impact of excavation work led to a heavy production loss for the company, as it took almost three days to find and correct the problem; with the help of the IoT-based system, timely assessment of the strength of the foundation base is possible.

Acknowledgements The authors are very thankful to the general manager and all the staff of Jindal Steel and Power Ltd., Raigarh, for extending their support to perform experiments on real-time industrial machines and drives. The staff of JSPL supported, guided, and helped in performing various experiments. The authors are also thankful to Prof. Rohit Garg, Director MIT, and the Management of MITGI for constant motivation and support.

References
1. Silva L, Giancotti K, Amaro F, Pires I (2014) Analysis of burn out and random trips at starting of a 2984 kW induction motor driving a main blower. In: 2014 IEEE industry application society annual meeting, Vancouver, BC, pp 1–5. https://doi.org/10.1109/IAS.2014.6978450
2. Silva SM, Cardoso Filho BJ, Cardoso MG, Rocha Braga M (2003) Blower drive system based on synchronous motor with solid salient-pole rotor: performance under starting and voltage sag conditions. IEEE Trans Ind Appl 39(5):1429–1435. https://doi.org/10.1109/tia.2003.816511
3. Silva LA, Giancoti K, Amaro F, Amariz Pires I (2017) Analyzing burn out and random trips at starting: induction motors driving main Blowers. IEEE Ind Appl Mag 23(4):66–75. https://doi.org/10.1109/mias.2016.2600690
4. Misra R, Shinghal K, Saxena A, Agarwal A (2020) Industrial motor bearing fault detection using vibration analysis. In: International conference on intelligent computing and smart communication 2019, algorithms for intelligent systems. Springer Nature Singapore Pte Ltd. https://doi.org/10.1007/978-981-15-0633-8_86
5. Brumbach ME, Clade JA (2012) Industrial maintenance. Cengage Learning
6. Taylor JI (1990) The vibration analysis handbook: a practical guide for solving rotating machinery problems. IPP Books


7. Sung G, Shen Y, Keno LT, Yu C (2019) Internet-of-Things based controller of a three-phase induction motor using a variable-frequency driver. In: 2019 IEEE Eurasia conference on IoT, communication and engineering (ECICE), Yunlin, Taiwan, pp 156–159
8. Kunthong J, Sapaklom T, Konghirun M, Prapanavarat C, Na Ayudhya PN, Mujjalinvimut E, Boonjeed S (2017) IoT-based traction motor drive condition monitoring in electric vehicles: part 1. In: 2017 IEEE 12th international conference on power electronics and drive systems (PEDS), Honolulu, HI, pp 1184–1188
9. Siddiqui KM, Sahay K, Giri VK (2014) Health monitoring and fault diagnosis in induction motor—a review. Int J Adv Res Electr Electron Instrum Eng 3(1)
10. Khademi A, Raji F, Sadeghi M (2019) IoT enabled vibration monitoring toward smart maintenance. In: 2019 3rd international conference on internet of things and applications (IoT), Isfahan, Iran, pp 1–6
11. Chanv B, Bakhru S, Mehta V (2017) Structural health monitoring system using IoT and wireless technologies. In: 2017 international conference on intelligent communication and computational techniques (ICCT), Jaipur, pp 151–157
12. Yaseen M, Swathi D, Kumar TA (2017) IoT based condition monitoring of generators and predictive maintenance. In: 2017 2nd international conference on communication and electronics systems (ICCES), Coimbatore, pp 725–729
13. Pawar RR, Wagh PA, Deosarkar SB (2017) Distribution transformer monitoring system using Internet of Things (IoT). In: 2017 international conference on computational intelligence in data science (ICCIDS), Chennai, pp 1–4
14. Misra R, Shinghal K, Saxena A (2020) Improvement in maintenance practices for enhancing the reliability of SMS mould oscillators. In: Springer international conference on electrical and electronics engineering (ICEEE-2020), 28–29 February 2020
15. Goundar SS, Pillai MR, Mamun KA, Islam FR, Deo R (2015) Real time condition monitoring system for industrial motors. In: 2015 2nd Asia-Pacific world congress on computer science and engineering (APWC on CSE), Nadi, pp 1–9
16. Kalyanraj D, Prakash SL, Sabareswar S (2016) Wind turbine monitoring and control systems using Internet of Things. In: 2016 21st century energy needs—materials, systems and applications (ICTFCEN), Kharagpur, pp 1–4

Chapter 19

A Survey on Hybrid Models Used for Hydrological Time-Series Forecasting

Shivashish Thakur and Manish Pandey

S. Thakur (B) · M. Pandey, Department of CSE, MANIT, Bhopal 462003, India; e-mail: [email protected]

1 Introduction
A time series is a sequence of observations, i.e. data points ordered in time. Time series forecasting finds application in many areas such as business, finance, electricity, weather forecasting, earthquake prediction, stocks, econometrics, and signal processing [1]. Time series analysis and prediction are part of temporal data mining [1]. Forecasting helps us look into the future and make decisions, thus playing a vital role in many fields. Techniques for forecasting time series data can be broadly divided into four classes: traditional forecasting techniques such as the regression method, multiple regression, and exponential smoothing, which are based on mathematical formulae and calculations (one widely used method being the Auto-Regressive Integrated Moving Average, ARIMA); stochastic forecasting techniques (SVM, LSSVM), in which a model is built and predictions are made using this model; soft-computing-based techniques, including NNs, ANNs, and other neural networks, which are useful in the prediction of trends and seasonal components and can handle non-linearity if present in the series; and fuzzy-based forecasting, which depends on fuzzy logic systems or fuzzy sets [1].
Forecasting and prediction of hydrological time series data is a complex process, as we have to deal with highly nonlinear and unstable characteristics affected by many factors, such as climate change, socio-economic development projects, infiltration, vegetal cover, evaporation, and transpiration [2, 3]. Human activities likewise influence environmental change by consuming fossil fuels, which discharge carbon dioxide into Earth's atmosphere [2]. Choosing the most suitable model for prediction purposes demands broad experience with forecasting and time-series


characteristics [4]. This makes real-time prediction of hydrological time series with minimal error a challenge for analysts and data scientists, and a race to come up with the best forecasting model. Methodologies for hydrological time series forecasting can be divided into process-driven models, which consider the internal physical mechanism of hydrological phenomena and require a lot of information for calibration and validation, and data-driven models, otherwise called black-box strategies, which identify the connection between input and output mathematically. The data-driven models require less quantitative data and offer better prediction performance than process-driven models [5]. Data-driven models can again be classified into two major categories: traditional statistical techniques and artificial intelligence (AI)/machine learning techniques. Both statistical and machine learning methods have been applied to forecast time series data, with mixed results. Machine Learning (ML) methods have been proposed as an alternative to the statistical ones, and many papers propose new ML approaches, for the most part neural networks, focusing on accuracy improvements. Neural network models can be understood as nonlinear functions connecting the inputs and outputs of neurons (also called nodes), which help choose a set of parameters that minimizes the error of the optimization problem given to the model to learn [6]. Machine learning methods demand more computational power and computing expertise than traditional methods. Makridakis et al. [6] compared the accuracy of popular ML techniques with that of eight conventional methods and demonstrated that the conventional methods were more accurate than the ML ones, implying that a complex and computationally expensive model does not necessarily ensure a superior prediction result. Therefore, there is no standard universal method that gives a better forecast in all cases than any other method, and the forecasting model should be selected according to the behavior of the time series.
A time series in hydrology is a complex process: it has both linear and nonlinear structures, and single standalone statistical or neural network models have failed to give promising results [1, 2, 5, 6]. Traditional statistical techniques like ARIMA require stationary time series data, but in the case of hydrological events the data is characterized as both non-stationary and nonlinear due to its time-varying nature, constraining these techniques from producing better results [2]. This led researchers to use ML models, which generally include Artificial Neural Networks (ANNs), random forests, LSTMs, DBNs, and SVMs, but these too have their limitations, such as over-fitting, sensitivity to noise present in the time series, and parameter selection [7].
Section 2 of this paper gives a comprehensive review of the working and types of hybrid models used for time series modelling; Hajirahimi and Khashei [8] have divided hybrid models into three categories (parallel, series, and parallel-series hybrid models), as shown in Fig. 1. It also takes the reader through various pre-processing and optimization techniques generally used to achieve better prediction results. Section 3 deals with popular hybrid models applied specifically in


Fig. 1 Hybrid forecasting model classification

the field of hydrological time series forecasting. Section 4 discusses future work, and Sect. 5 concludes the paper.

2 Hybrid Models for Forecasting Time Series Data: A Review
Hybrid models from the literature fuse statistical models such as TAR (threshold autoregressive), ARCH (autoregressive conditional heteroscedastic), GARCH (generalized autoregressive conditional heteroscedastic), ARIMA, and ES (exponential smoothing) with data-driven, nonparametric intelligent models such as fuzzy systems, support vector machines (SVMs), and artificial neural networks (ANNs); these are among the most notable hybrid models, used broadly in various fields to achieve better forecasting accuracy.
A novel hybridization of ANN, ARIMA, and fuzzy models was proposed by Khashei et al. in 2009, where the linear and nonlinear patterns of the series were realized by ARIMA and ANN respectively, and fuzzy logic was applied to overcome the data limitations of the ARIMA models, giving a more adaptable model for forecasting in situations where little data is available [9]. In 2011, Shafie-khah et al. used an ARIMA-RBFNN model with wavelet transform for forecasting electricity prices; the network structure was optimized using the Particle Swarm Optimization (PSO) method, which helped avoid over-fitting and decreased the computational complexity. Khashei et al. (2012) proposed another hybrid model based on the ideas of ARIMA and probabilistic neural networks (PNNs); they applied this model to three datasets and concluded that the hybrid modelling approach gave better forecasting results than the individual models used alone. In 2012, Nie, Liu, and Wang described a combining technique using SVM and ARIMA models for short-term load forecasting.


Kumar and Thenmozhi in 2014 proposed three hybrid models utilizing SVM, ARIMA, ANN, and random forest models for forecasting stock index returns. The ARIMA-SVM model gave better overall results than the other two hybrid models, i.e. ARIMA-ANN and ARIMA-random forest, and also than the individual isolated models. Real-world time series differ from simpler ones in that they exhibit complex patterns and are often non-stationary and non-linear. One of the serious issues in time series forecasting is picking the proper model to process a wide range of patterns simultaneously. Moreover, it has been documented in the literature that there is no universal individual model that can simultaneously capture and analyze both linear and nonlinear components in data. Over the years, better prediction results have been achieved by using hybrid techniques; two separate forecasting models can be combined in series, parallel, or series-parallel fashion.

2.1 Review of Parallel Hybrid Structure in Time Series Forecasting
The very first parallel hybrid structure was proposed in 1969 by Bates and Granger [8]. Two key factors in the design are the selection of a combination function, which can be linear or non-linear, and a suitable weighting approach, which can be static or dynamic. Figure 2 shows the framework of the parallel hybrid structure. The most common and widely used combination method for parallel hybrid models is the linear combination method, where the weighted forecasts of the single individual models are summed to obtain the final forecast. Determining the weights of these individual models is the main issue in parallel hybrid structures. Over the years, many weighting approaches have been proposed, which can be classified into static and dynamic approaches. Static weighting approaches use statistical methods, i.e. mathematical equations, to calculate the weight values. Hajirahimi and Khashei [8] have divided static methods into six different classes: (1) averaging method, (2) minimizing-error method, (3) Var-Cov method, (4) outperformance method, (5) DMSFE method, and (6) differential method. In static methods, weights are determined from historical data; these methods are widely used in the literature, but they have the drawback of not updating the weights as new information arrives over time.

Fig. 2 Framework for parallel hybrid forecasting model where time series is fed simultaneously to different models and weighted forecast of each model is combined to produce the final result


To overcome the limitations of static methods for weight selection, dynamic methods were proposed, in which the weights of the components vary with time. A hybrid parallel forecast approach was proposed by Wang et al. [10], which combined four types of neural networks and incorporated a dynamic weighting approach called the in-sample training–validation pair-based neural network (TVPNNW); it was concluded that the dynamic weighting strategy gave better results than some of the static methods such as outperformance and Var-Cov, though it is not always true that dynamic combination models will give better results than static ones [11]. The use of linear combination methods for parallel hybrid models can give poor results if there is a non-linear relationship in the data. Shi et al. in 1999 used ANNs to discover the non-linear components of time series. A non-linear combination parallel hybrid model can therefore give better results when the series has non-linear components.
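To make the static linear combination concrete, the following is a minimal Python sketch (our illustration, assuming an inverse-MSE, minimizing-error style weighting): each model's weight is derived from its in-sample error, and the final forecast is the weighted sum of the component forecasts.

import numpy as np

def inverse_mse_weights(errors: np.ndarray) -> np.ndarray:
    """errors: (n_models, n_samples) in-sample forecast errors."""
    mse = np.mean(errors ** 2, axis=1)
    w = 1.0 / mse                 # smaller historical error => larger weight
    return w / w.sum()            # normalize so the weights sum to one

def combine(forecasts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """forecasts: (n_models, horizon); returns the combined forecast."""
    return weights @ forecasts

# toy example with three hypothetical component models
errors = np.array([[0.5, -0.4, 0.6], [1.2, -1.0, 0.9], [0.3, 0.2, -0.3]])
forecasts = np.array([[10.1, 10.4], [9.6, 10.9], [10.0, 10.3]])
w = inverse_mse_weights(errors)
print("weights:", np.round(w, 3))
print("combined forecast:", combine(forecasts, w))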

2.2 Review of Series Hybrid Structure in Time Series Forecasting
Zhang [12] proposed the first series hybrid structure. Since then, most papers have focused on the series hybrid structure for time series modelling and forecasting, combining the individual advantages of separate models sequentially (see Fig. 3) to achieve better accuracy than the individual models. Time series of real-world systems have both linear and non-linear components, which are modelled by statistical and intelligent models respectively. The series hybrid structure can be divided into (1) linear-nonlinear sequential modelling and (2) nonlinear-linear sequential modelling [8]. Some works for each are mentioned below, and it is seen that linear-nonlinear sequential modelling is used far more often than nonlinear-linear series hybrid forecasting models.
Linear-nonlinear sequential modelling. In linear-nonlinear sequential modelling, the time series data is fed initially to the statistical part of the hybrid structure, which deals with the linear patterns; the output of this stage (generally the residuals of the fit) is then fed to an intelligent model such as an ANN to mine the non-linear patterns. Generally, the time-series data is decomposed into linear and nonlinear

Fig. 3 Framework for series hybrid structure where linear and nonlinear components of time series are realized by two or more than two models arranged sequentially as shown


components by a suitable decomposition method such as DWT (discrete wavelet transform) or EMD (empirical mode decomposition). The statistical model of the hybrid structure for modelling the linear components can be one of the different types of ARIMA models, such as ARIMAX or SARIMA (seasonal ARIMA), while advanced models like ANNs and SVMs handle the non-linear components of the time series. ARIMA-MLP models are the most widely used; a minimal sketch of this linear-nonlinear scheme follows below. Banihabib and Ahmadian (2018) integrated the MARMA model with a NARX neural network for monthly inflow forecasting, which gave better results than the ARIMA model [13]. Mo et al. (2018) proposed a four-step series model which decomposes the time series into a linear part, modelled by a SARIMA model, and the remaining nonlinear relationships, modelled by MLP, SVR, and GP. In that paper, a GMDH neural network was also suggested, and a hybrid ARIMA-GMDH model gave better results than SARIMA-SVR, SARIMA-GP, and SARIMA-BP [14]. Table 1 shows some of the ARIMA-ANN based models proposed by researchers, where the forecasting accuracy of the single standalone models is lower than that of the hybrid structure. Apart from the ANNs used as the latter intelligent model of the linear-nonlinear sequential scheme, other non-linear models can also be combined. The goal is the same: the linear attributes of the series are realized by the statistical model (ARIMA), and the non-linear residuals obtained after forecasting the linear part are realized by SVMs, GPs, or any other appropriate non-linear model. Table 2 shows some such combinations proposed by researchers over the years, where the combined hybrid model is superior to the individual models taken alone.
Nonlinear-linear sequential modelling. The sequence of models in a series hybrid forecasting model plays an important role in prediction accuracy. Some studies have discussed nonlinear-linear sequential models, summarized in Table 3. A series PSOSVR-PSOARIMA hybrid model combining nonlinear SVR with ARIMA [15] was proposed by Alwee et al. in 2013. Khashei and Hajirahimi [16] applied both a linear-nonlinear series hybrid model, i.e. ARIMA-ANN, and a nonlinear-linear series hybrid model, i.e. ANN-ARIMA, to three datasets and concluded that the former, despite being more popular, gave less accurate results. SVR and ARIMA were combined by Che and Wang (2010) in a nonlinear-linear approach, and the results showed that SVR-ARIMA gave better accuracy than standalone ANN, ANN-ARIMA, and ANN-SVR [17]. A GRANN-ARIMA model was proposed by Sallehuddin et al.; upon comparison with standalone LR, ANN, and ARIMA models along with ARIMA-ANN, the hybrid nonlinear-linear GRANN-ARIMA was found to dominate all others in terms of accuracy.
Some studies fall under both the linear-nonlinear and the nonlinear-linear hybrid series categories. Most of the literature has focused on the linear-nonlinear hybrid model, but according to [16], in some cases a nonlinear-linear series hybrid model can generate superior results in terms of accuracy and prediction.
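The linear-nonlinear scheme referenced above can be sketched minimally in Python (Zhang-style ARIMA + ANN, using statsmodels and scikit-learn); the ARIMA order and the number of residual lags are illustrative assumptions, not values from the cited papers:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y: np.ndarray, order=(2, 1, 1), n_lags: int = 4) -> float:
    # 1) linear component: fit ARIMA and take its one-step-ahead forecast
    arima_fit = ARIMA(y, order=order).fit()
    linear_next = float(arima_fit.forecast(steps=1)[0])

    # 2) nonlinear component: train an MLP to predict the next residual
    #    from the previous n_lags residuals
    resid = np.asarray(arima_fit.resid)
    X = np.column_stack([resid[i:len(resid) - n_lags + i]
                         for i in range(n_lags)])
    target = resid[n_lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                       random_state=0).fit(X, target)
    nonlinear_next = float(mlp.predict(resid[-n_lags:].reshape(1, -1))[0])

    # 3) final forecast = linear part + predicted nonlinear residual
    return linear_next + nonlinear_next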

Table 1 Hybrid forecasting models of type ARIMA-ANN

Proposed by              Year  ARIMA model type  ANN model type
Khairalla et al.         2018  ARIMA             MLP
Safari and Davallou      2018  ARIMA             MLP
Khashei and Hajirahimi   2017  ARIMA             MLP
Naveena                  2017  ARIMA             MLP
Morina et al.            2017  ARIMA             MLP
Gairaa et al.            2016  ARIMA             MLP
Kapi et al.              2015  ARIMA             MLP
Wang and Meng            2012  ARIMA             MLP
Areekul                  2010  ARIMA             MLP
Joy and Jones            2005  ARIMA             MLP
Lu et al.                2004  ARIMA             MLP
Zhang                    2003  ARIMA             MLP
Moeeni and Bonakdari     2017  SARIMA            MLP
Ozozen et al.            2016  SARIMA            MLP
Jeong et al.             2014  SARIMA            MLP
Khashei et al.           2012  SARIMA            MLP
Mohamed and Ahmad        2010  SARIMA            MLP
Mohamed and Ahmad        2008  SARIMA            MLP
Aburto and Weber         2007  SARIMA            MLP
Li et al.                2017  ARIMA             RBFNN
Prayoga et al.           2017  ARIMAX            MLP
Wang et al.              2017  ARIMA             NAR
Xu et al.                2016  ARIMA             ANN

2.3 Review of Parallel-Series Hybrid Structure in Time Series Forecasting
Both the series hybrid structure and the parallel hybrid structure give mixed results in many studies, which means that there is no deterministic consensus on which hybrid structure gives the best forecasting performance in all cases [8]. Researchers

Table 2 ARIMA-nonlinear hybrid models for forecasting time series

Proposed by             Year  ARIMA model type  Nonlinear model type
Moeeni and Bonakdari    2017  SARIMA            GEP
Wang et al.             2017  ARIMA             SVM
Mitra and Paul          2017  ARIMA             GARCH
Karthika et al.         2017  ARIMA             SVM
Gurnani et al.          2017  ARIMA             SVM
Kavousi-Fard            2017  ARIMA             SVR-MCSO
Ch. Xu et al.           2016  ARIMA             GP
Ming et al.             2014  ARIMA             SVM

Table 3 Nonlinear-linear series hybrid models for forecasting time series

Proposed by             Year  Nonlinear model type  Linear model type
Khashei and Hajirahimi  2018  MLP                   ARIMA
Khashei and Hajirahimi  2017  MLP                   ARIMA
Khashei and Hajirahimi  2016  ANN                   ARIMA
Wang et al.             2015  ELM                   SARIMA
Alwee et al.            2013  SVM                   ARIMA
Che and Wang            2010  SVM                   ARIMA
Sallehuddin et al.      2008  GRNN                  ARIMA

started combining both of the above-mentioned structures, giving rise to the parallel-series hybrid forecasting model, in which the time-series data is fed to the individual models simultaneously in parallel fashion, and the output of one model (whether its error residuals, its forecasted values, or both) is fed as an input to the second model. The aim is to achieve more accurate results by combining the advantages of both models. This type of hybrid structure was first developed by Khashei and Bijari in 2010, and experimental results indicate that this architecture yields better results than the other two. The general working of a parallel-series hybrid model is illustrated in Fig. 4, where three model types are proposed. Based on the input fed sequentially to the second model, which generally is some type of ANN or other nonlinear model, the parallel-series hybrid structure can be divided into three types. In 2014, Ruiz-Aguilar et al. employed SARIMA and ANN models to propose three types of parallel-series hybrid models. They compared these models to the SARIMA-ANN series model and also with standalone


Fig. 4 Framework of series-parallel hybrid structure combining the advantages of both previous structures

versions of each of them. It was concluded that among all the models, the type-3 parallel-series hybrid structure gave the best forecasting results. Jeong et al. in 2014 and Wongsathan et al. in 2016 also used the type-3 model for SARIMA-ANN and ARIMA-ANN respectively, and concluded that the type-3 architecture gave the best results [8].
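As an illustration of the type-3 idea, here is a minimal Python sketch (our reading of the structure, not the cited papers' exact setups) in which the second-stage network sees the lagged series together with the first model's fitted value and residual; the model choices and lag count are assumptions:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def type3_features(y: np.ndarray, order=(1, 0, 0), n_lags: int = 3):
    """Build features for the second stage: lagged values + ARIMA fit + residual."""
    fit = ARIMA(y, order=order).fit()
    fitted = np.asarray(fit.fittedvalues)
    resid = np.asarray(fit.resid)
    rows = range(n_lags, len(y) - 1)
    X = np.array([np.r_[y[t - n_lags:t], fitted[t], resid[t]] for t in rows])
    target = y[n_lags + 1:]          # one-step-ahead targets
    return X, target

# toy example: noisy sinusoid standing in for a real series
y = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(0).normal(size=200)
X, target = type3_features(y)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                   random_state=0).fit(X, target)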

3 Review of Latest Hybrid Models Used for Hydrological Time Series Forecasting
As seen in the sections above, when a time series has complex characteristics such as mixed linear and nonlinear patterns and time-varying, non-stationary properties, a single statistical or intelligent model fails to give good forecast results, as each has its shortcomings. This opened the gates for a new type of hybrid structure for modelling and forecasting that takes into account the advantages of both the statistical models, like AR, MA, and ARIMA, and the intelligent models, like ANNs and SVMs. Studies have shown that for real-world time series such as hydrological time series, where the nature of the series is complex and difficult, such hybrid models have proven to be more useful than the standard statistical or intelligent models alone.
Stream-flow at a location in a river's catchment is one of the most important hydrological variables; precise stream-flow predictions are thus of prime importance for various water resource management strategies and designs, such as flood control, dams, bridges, and other hydrological structures [18]. There are two major types of mathematical models used for stream-flow forecasting: (1) rainfall-runoff models and (2) stream-flow models. Stream-flow models use only hydrological data, whereas rainfall-runoff models use both hydrological and climatic data. A catchment's stream-flow under rainfall input is affected by a number of factors, including (1) storm characteristics, (2) catchment characteristics, (3) geomorphologic characteristics, and (4) climatic characteristics, which makes forecasting a complex process [19].


In earlier days, traditional time series forecasting techniques were applied for stream-flow forecasting, but due to its complex nature (non-stationarity, nonlinearity), ML models were later used to forecast it. Modelling and forecasting hydrological time series with noisy raw data as input, containing various trends and seasonal variations, decreases the efficiency of the model [18]. A general framework for a hybrid model for hydrological time series forecasting is an ANN model coupled with traditional models, where a de-noised and clean data input is of prime importance. Some of the recent hybrid models proposed by researchers are given below.
Wang and Lou [20] proposed a hybrid hydrological forecast model combining the advantages of ARIMA and LSTMs. The daily average water level of a hydrological station in the Chuhe river basin was used as experimental data. The results were compared with a standalone LSTM (Long Short-Term Memory) network, an ARIMA model, and a hybrid BP-ANN-ARIMA model, and showed that the MSE (mean square error) of the hybrid model is smaller than the others. Here, the DWT (discrete wavelet transform) was applied to de-noise and decompose the series into linear and nonlinear components, which were modelled by ARIMA and LSTM respectively; the nonlinear patterns were present in the residuals, i.e. the error obtained by subtracting the values forecasted by ARIMA from the original time series. On the same experimental dataset of the Chuhe River Basin, two more hybrid models were proposed. Xing and Lou [21] used an ARIMA + PSO-RBF model based on wavelet transform: because of the slow convergence of the conventional RBF neural network's gradient descent training and the error caused by the randomness of parameter initialization, a PSO (particle swarm optimization) algorithm is used. Xie and Lou used an ARIMA-SVR hybrid model based on wavelet transform. Here also, the accuracy of the proposed hybrid model was better than that of standalone ARIMA, SVR, or RBF (radial basis function) models.
Di et al. [22] proposed a four-stage hybrid model with de-noising, decomposition, component prediction, and ensemble stages. In the de-noising stage, the empirical mode decomposition (EMD) method was used instead of wavelet decomposition. The outcomes indicated that the proposed EMD (de-noising)-EEMD (decomposition)-RBFNN (prediction)-LNN (ensemble) model is significantly better in forecast accuracy than all other combination techniques, including ARIMA, EMD-RBFNN, EMD-WA-RBFNN-LNN, and so on. It is also seen that effective de-noising and decomposition always lead to better forecasting results. Nazir et al. [23] proposed two new methods comprising de-noising, decomposition, prediction, and summation stages, named WA-CEEMDAN-MM and EMD-CEEMDAN-MM. Four cases of river inflow data from the Indus Basin System (Indus, Jhelum, Kabul, and Chenab flows) were taken, and the overall outcomes showed that the proposed hybrid forecast technique beats the other forecasting strategies.
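The wavelet de-noising step these models share can be sketched in a few lines of Python with the PyWavelets package (our illustration; the wavelet, level, and threshold rule are common defaults, not the papers' exact settings): decompose with a DWT, soft-threshold the detail coefficients, and reconstruct.

import numpy as np
import pywt

def wavelet_denoise(x: np.ndarray, wavelet: str = "db4", level: int = 3):
    """Soft-threshold wavelet de-noising of a 1-D signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]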


4 Future Work
Improvements in the de-noising of the raw time series can lead to better forecast results and better analysis of the time series components. De-noising methods include DWT, EMD, and EEMD, which have worked well in the past, but an ever cleaner time series is always sought. Wavelet-based forecasting models, if used inappropriately, can introduce errors into the forecast model inputs. According to Quilty and Adamowski [9], the source of this error is the boundary condition associated with wavelet decomposition. DWT-MRA and MODWT-MRA, two widely used methods, suffer from these boundary conditions and cannot be used properly for real-world forecasting in hydrology. It was shown that MODWT (maximum overlap DWT) and the à trous algorithm (AT) gave better results than the above-mentioned methods. Moreover, using the wavelet transform with soft thresholding gives better results than the hard thresholding technique. Therefore, according to the models incorporated and the nature of the series, we can choose a better de-noising method to gain better prediction results.
Non-linear models apart from neural networks, like GEP and GARCH, can also be coupled with ARIMA models. Recurrent Neural Networks (RNNs), unlike multilayer perceptrons, use memory to process variable-length sequences of inputs and handle observation order explicitly, allowing them to exhibit temporal dynamic behaviour. However, RNNs suffer from short-term memory, i.e. they cannot carry information from distant earlier time steps. LSTMs and GRUs mitigate this problem, as they have internal memory gates that regulate the flow of information and can model even complex multivariate sequences; using them can therefore be a good option for long-term series forecasting.

5 Conclusion
This paper throws light on the non-linear, non-stationary, and time-varying characteristics of hydrological time series, and on why related studies over the years are moving towards hybrid models for forecasting. Different types of traditional forecasting methods, like ARIMA and ETS, are coupled with more sophisticated artificial intelligence methods, generally neural networks, such that the drawbacks of each method are balanced by its counterpart. We see that there is no single best hybrid model that remains superior to the others in all cases, and that the selection of the individual models, as well as of the de-noising methods, plays a critical role in forecast accuracy. Hybridization of two or more models can be done in series, parallel, or series-parallel fashion; this depends on the type of input given to each of the individual models. It is also seen that, across different time series datasets, the performance of the hybrid model surpasses that of the standalone individual models.


References
1. Mahalakshmi G, Sridevi S, Rajaram S (2016) A survey on forecasting of time series data. In: 2016 international conference on computing technologies and intelligent data engineering, pp 1–8. IEEE. https://doi.org/10.1109/icctide.2016.7725358
2. Nazir HM, Hussain I, Faisal M, Shoukry AM, Gani S, Ahmad I (2019) Development of multi-decomposition hybrid model for hydrological time series analysis. Complexity 1–14
3. Huang G, Wang L (2011) Hybrid neural network models for hydrologic time series forecasting. In: 2011 fourth international joint conference on computational sciences and optimization, pp 1347–1350. IEEE. https://doi.org/10.1109/cso.2011.147
4. Gjika E, Ferrja A, Kamberi A (2019) A study on the efficiency of hybrid models in forecasting precipitations and water inflow Albania case study. Adv Sci Technol Eng Syst J (ASTESJ) 4(1):302–310
5. Di C, Yang X, Wang X (2014) A four-stage hybrid model for hydrological time series forecasting. PLoS ONE 9(8):1–18. https://doi.org/10.1371/journal.pone.0104663
6. Makridakis S, Spiliotis E, Assimakopoulos V (2018) Statistical and machine learning forecasting methods: concerns and ways forward. PLoS One 13(3):1–26. https://doi.org/10.1371/journal.pone.0194889
7. Liu Z, Jiang P, Zhang L, Niu X (2019) A combined forecasting model for time series: application to short-term wind speed forecasting. Appl Energy 259:114137. https://doi.org/10.1016/j.apenergy.2019.114137
8. Hajirahimi Z, Khashei M (2019) Hybrid structures in time series modeling and forecasting: a review. Eng Appl Artif Intell 86:83–106. https://doi.org/10.1016/j.engappai.2019.08.018
9. Quilty J, Adamowski J (2018) Addressing the incorrect usage of wavelet-based hydrological and water resources forecasting models for real-world applications with best practices and a new forecasting framework. J Hydrol 563:336–353. https://doi.org/10.1016/j.jhydrol.2018.05.003
10. Wang L, Wang Z, Qu H, Liu S (2018) Optimal forecast combination based on neural networks for time series forecasting. Appl Soft Comput 66:1–17. https://doi.org/10.1016/j.asoc.2018.02.004
11. Timmermann A (2006) Forecast combinations. Handb Econ Forecast 1:135–196. https://doi.org/10.1016/s1574-0706(05)01004-9
12. Zhang GP (2003) Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 50:159–175
13. Banihabib ME, Ahmadian A (2018) Hybrid MARMA-NARX model for flow forecasting based on the large-scale climate signals, sea-surface temperatures, and rainfall. Hydrol Res 49(6):1788–1803
14. Mo L, Xie L, Jiang X, Teng G, Xu L, Xiao J (2018) GMDH-based hybrid model for container throughput forecasting: selective combination forecasting in nonlinear subseries. Appl Soft Comput 62:478–490. https://doi.org/10.1016/j.asoc.2017.10.033
15. Alwee R, HjShamsuddin SM, Sallehuddin R (2013) Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators. Sci World J 1–11. https://doi.org/10.1155/2013/951475
16. Khashei M, Hajirahimi Z (2018) A comparative study of series arima/mlp hybrid models for stock price forecasting. Commun Stat Simul Comput 48(9):1–16. https://doi.org/10.1080/03610918.2018.1458138
17. Che J, Wang J (2010) Short-term electricity prices forecasting based on support vector regression and auto-regressive integrated moving average modeling. Energy Convers Manag 51(10):1911–1917. https://doi.org/10.1016/j.enconman.2010.02.023
18. Jain A, Kumar AM (2007) Hybrid neural network models for hydrologic time series forecasting. Appl Soft Comput 7(2):585–592. https://doi.org/10.1016/j.asoc.2006.03.002

19 A Survey on Hybrid Models Used for Hydrological Time-Series …

259

19. Zhang B, Govindaraju RS (2000) Prediction of watershed runoff using Bayesian concepts and modular neural networks. Water Resour Res 36(3):753–762. https://doi.org/10.1029/1999wr 900264 20. Wang Z, Lou Y (2019) Hydrological time series forecast model based on wavelet de-noising and ARIMA-LSTM. In: 3rd information technology, networking, electronic and automation control conference (ITNEC), pp 1697–1701. IEEE 21. Xing S, Lou Y (2019) Hydrological time series forecast by ARIMA + PSO-RBF combined model based on wavelet transform. In: 3rd information technology, networking, electronic and automation control conference (ITNEC), pp 1711–1715. IEEE 22. Di C, Yang X, Wang X (2014) A four-stage hybrid model for hydrological time series forecasting. PloS One 9(8). https://doi.org/10.1371/journal.pone.0104663 23. Nazir HM, Hussain I, Faisal M, Shoukry AM, Gani S, Ahmad I (2019) Development of multi decomposition hybrid model for hydrological time series analysis. Complexity. https://doi.org/ 10.1155/2019/2782715

Chapter 20

Do Single-Session, High-Frequency Binaural Beats Affect Executive Functioning in Healthy Adults? An ERP Study

Ritika Mahajan, Ronnie V. Daniel, Akash K. Rao, Vishal Pandey, Rishi Pal Chauhan, and Sushil Chandra

R. Mahajan (B) · R. P. Chauhan
National Institute of Technology Kurukshetra, Kurukshetra, India
e-mail: [email protected]
R. V. Daniel · V. Pandey · S. Chandra
Institute of Nuclear Medicine and Allied Sciences, Delhi, India
A. K. Rao
Indian Institute of Technology Mandi, Mandi, India

1 Introduction

The effect of executive control is one of the key features that make the human cognitive system so unique. It encompasses capabilities such as goal and input control, managing pertinent information, and conflict control. One of the prominent methods to quantify executive control and cognitive conflict is the Stroop task [1]. In the color-word Stroop task, individuals are shown words written in dissimilar colors and are instructed to respond to the colors as quickly as possible while ignoring the words themselves, a task requiring inhibition of the regularly practiced reading process. In the conventional Stroop test, the incongruent condition, wherein the word and the ink color are not the same (e.g. 'red' printed in 'yellow' ink), requires the processes of conflict resolution and cognitive control [2]. The engagement of these processes leads to a delayed reaction time (RT) compared to the congruent condition, wherein the word and the color match [2]. This delay in reaction time in the incongruent condition relative to the congruent condition is popularly known as the Stroop effect [2]. Studies of conflict resolution and the Stroop task through different modalities like functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and electroencephalography (EEG) have found significant activation of the anterior cingulate cortex (ACC) during these tasks [2–5].


Early studies suggest that the ACC is one of the primary hubs for conflict monitoring, detection and resolution, which are processes of executive control [6, 7]. EEG studies have revealed the prominent Stroop-related event-related component to be the N400, which is sensitive to congruency manipulations. The N400 is a more negative component in the incongruent condition than in congruent conditions, occurring approximately 300–550 ms after stimulus onset, and is found to be predominant over the frontal and left central-parietal regions [8, 9]. EEG source reconstruction techniques have traced the origin of this component back to the ACC [10]. The general view in the research community is that the component is related to the conflict detection and resolution processes that are more prominent in the incongruent condition [9]. However, EEG studies thus far have not segregated the processes of conflict resolution or detection involved in the generation of the N400 component. Although there have been studies that suggest the involvement of the N400 in the identification of interference stemming from the ACC [11, 12], the overall conclusion is that the N400 reflects the processes of general conflict resolution and detection, most likely arising from the ACC.

A second event-related potential (ERP) component, called the late positive component or the slow component, is reported along with the N400 as being related to the process of conflict resolution and detection [13]. The late positive component is more positive, relative to the N400, in the incongruent condition than in the congruent condition, appearing from 600 to 900 ms after stimulus onset. The cognitive processes underlying this particular component are less clear than those underlying the N400. Overall, the late positive component is thought to be somehow involved in conflict resolution [14], but it has also been suggested to be involved in semantic processing [8]. Also, traditional ERP measures like the P300 have been extensively used as indicators of higher-order executive functioning and cognitive control [8].

One of the popular modes of cognitive intervention is meditation, and a more modern form of auditory-focused meditation is binaural beats. The phenomenon was originally identified in 1839 [15], but was not used as a form of intervention until recent years. Binaural beats are a kind of auditory sensation that is generally experienced as oscillating beats. They are produced when two sinusoidal waves that differ slightly in frequency are presented to each ear independently [15]. For example, if an unmixed tone of 500 Hz is registered in the right ear and an unmixed tone of 510 Hz is registered in the left ear, the perceived frequency becomes 505 Hz, modulating in amplitude with a frequency of 10 Hz [15]. The modulated signal is referred to as a beat and is produced when the difference in frequency between the pure tones ranges between 2 and 30 Hz [16]. There are also suggestions that pure tones between 200 and 900 Hz are more effective in eliciting binaural beats than those exceeding 1 kHz; furthermore, binaural beat perception is suggested to be probabilistically maximized over 400 Hz [16]. Binaural beat perception is based on the generation of signals from the neurons in the superior colliculus, the auditory cortex and the medial olivary nucleus [17].
One of the studies [18] with binaural beats reported that participants who listened to a 20-min theta binaural beat session masked with pink noise showed higher relaxation scores compared to those who listened to just the pink noise. The results from the double-blinded


study also suggest that theta-range binaural beats may promote feelings of relaxation post-exercise. Another study [19] on a military population suggests that individuals who were exposed to delta and theta binaural beats masked by music for 30 min experienced fewer perceived stress symptoms compared to individuals who listened without any binaural beats; the participants in the experimental condition reported comparatively lower stress levels in both work and home settings. In a recent study [20], participants exposed to high-frequency binaural beats (gamma range) showed a significantly reduced attentional blink [21], suggesting more cognitive flexibility and hence improved performance in a divergent-thinking task. These studies suggest that binaural beats can be used as an effective tool for general cognitive enhancement. However, the behavioral and cognitive effects of using single-session, high-frequency binaural beats for enhancing executive functioning have not been documented and hence are much needed in the literature.

In this experiment, the focus was to evaluate the effects of high-frequency, single-session binaural beats on executive functioning in healthy adults. The authors hypothesized that the participants who underwent the high-frequency binaural beats intervention would perform relatively better in the Stroop task compared to the control group (i.e., the participants who did not undergo any intervention).

2 Methodology

A total of 40 male participants (20.5 ± 1.4 years) took part in the experiment. The participants were randomly assigned to either the control or the experimental group, and both groups completed the standardized version of the Stroop task. All participants were right-handed with normal or corrected-to-normal vision. None of the participants reported any history of neurological disease or being under the influence of any medication. Participants provided both verbal and written informed consent to participate in the experiment. All participants were from a science, technology, engineering or mathematics background. The experimental group took the standardized version of the Stroop test, designed in OpenSesame [22], twice: both before and after the single-session binaural beat intervention. The experimental group was exposed to binaural beats at the beta frequency of 25 Hz (250 and 275 Hz for the left and right ear, respectively). The binaural beats were administered using the Audacity speech recording and editing software.
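For illustration, a stimulus of this kind can be synthesized directly; the following minimal Python sketch writes the 250 Hz (left) and 275 Hz (right) pure tones described above to a stereo WAV file. The 10-s duration, sample rate and output filename are illustrative assumptions (the study used Audacity and a 20-min session):

    import numpy as np
    import wave

    fs = 44100                 # sample rate in Hz (illustrative choice)
    duration = 10.0            # seconds (the study used a 20-min session)
    t = np.arange(int(fs * duration)) / fs

    left = 0.5 * np.sin(2 * np.pi * 250 * t)   # left-ear pure tone
    right = 0.5 * np.sin(2 * np.pi * 275 * t)  # right-ear pure tone

    # Interleave the two channels and quantize to 16-bit PCM samples.
    stereo = np.empty(2 * t.size, dtype=np.int16)
    stereo[0::2] = (left * 32767).astype(np.int16)
    stereo[1::2] = (right * 32767).astype(np.int16)

    with wave.open("binaural_25hz.wav", "wb") as f:
        f.setnchannels(2)      # stereo: one pure tone per ear
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(fs)
        f.writeframes(stereo.tobytes())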

2.1 Procedure

The participants were seated in a comfortable chair at a distance of approximately 1 m from the stimulus screen. The stimuli were displayed on a 24-inch LCD screen, with words printed in 70-point Times New Roman.


The participants were required to respond to the color of the word irrespective of the word being displayed: red with the UP arrow key, blue with the LEFT arrow key, green with the DOWN arrow key and yellow with the RIGHT arrow key. These instructions were also displayed along with the stimuli in the upper right corner of the screen. The words were displayed on a black background. A total of 200 trials, excluding 10 practice trials, were presented per participant. Of the 200 trials, 80 were congruent and 120 were incongruent, in randomized order. Each Stroop trial consisted of a fixation point for 250 ms at the center of the screen, followed by the stimulus for 2000 ms or until a response was made. Once the participant had responded or the timeout was reached, a black screen was shown for 1000 ms, followed by the next trial. During the initial briefing, the participants were instructed to respond to the stimuli as quickly and accurately as possible.
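As a concrete illustration of this trial structure, the following Python sketch builds the randomized 200-trial list with the stated key mapping. The function and variable names, and the word/ink pairing logic, are our illustrative assumptions, not code from the study:

    import random

    COLOURS = ["red", "blue", "green", "yellow"]
    KEYS = {"red": "UP", "blue": "LEFT", "green": "DOWN", "yellow": "RIGHT"}

    def make_trials(n_congruent=80, n_incongruent=120):
        trials = []
        for _ in range(n_congruent):            # word and ink colour match
            c = random.choice(COLOURS)
            trials.append({"word": c, "ink": c, "correct_key": KEYS[c]})
        for _ in range(n_incongruent):          # word and ink colour differ
            word = random.choice(COLOURS)
            ink = random.choice([c for c in COLOURS if c != word])
            trials.append({"word": word, "ink": ink, "correct_key": KEYS[ink]})
        random.shuffle(trials)                  # randomized presentation order
        return trials

    trials = make_trials()
    assert len(trials) == 200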

2.2 EEG Data Acquisition

The EEG data were recorded using Ag/AgCl electrodes from 64 channel locations conforming to the 10–20 electrode placement system. The data were sampled continuously at 2048 Hz with an input impedance below 5 kΩ, using an eego™ mylab EEG system (ANT Neuro, Enschede, Netherlands). Figure 1 shows the EEG data acquisition of a participant while executing the Stroop task.

Fig. 1 EEG data acquisition of a participant while executing the Stroop task


2.3 EEG Data Analysis

The offline EEG analysis was performed using custom scripts in MATLAB 2015b (MathWorks). The signals were zero-phase filtered using an FIR bandpass filter with cut-off frequencies at 0.1 and 45 Hz to remove any linear trends. Segments with high amplitude and variance were marked. The data were then re-referenced to the average electrode. Single-trial EEG segments (−200 to 1000 ms) were epoched from the continuous data, and baseline correction was performed using the 200 ms before stimulus onset. Any marked artefact segments overlapping with the epoched data were then removed. The single-trial EEG epochs were then averaged to form the ERP waveforms. The ERP operation can mathematically be represented as

\[ \bar{x}(t) = \frac{1}{N} \sum_{k=1}^{N} x(t, k) \tag{1} \]

where k is the trial number, t is the time elapsed after stimulus onset in the kth trial, N is the number of trials and x(t, k) is the recorded trial. Each recorded trial can be represented as the sum of the signal of interest s(t) and the noise η(t, k):

\[ x(t, k) = s(t) + \eta(t, k) \tag{2} \]

Hence, Eq. (1) can be re-written as

\[ \bar{x}(t) = s(t) + \frac{1}{N} \sum_{k=1}^{N} \eta(t, k) \tag{3} \]
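To make the averaging in Eqs. (1)–(3) concrete, here is a minimal numpy sketch of the epoch-and-average pipeline. The synthetic data, event samples and array shapes are illustrative assumptions, not the study's recordings:

    import numpy as np

    fs = 2048                        # sampling rate used in the study (Hz)
    pre, post = 0.2, 1.0             # epoch window: -200 ms to 1000 ms

    eeg = np.random.randn(64, fs * 60)        # stand-in continuous data (64 channels)
    events = np.array([5000, 9000, 13000])    # stand-in stimulus-onset samples

    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[:, s - n_pre : s + n_post] for s in events])

    # Baseline correction: subtract each trial's mean pre-stimulus level.
    baseline = epochs[:, :, :n_pre].mean(axis=2, keepdims=True)
    epochs -= baseline

    erp = epochs.mean(axis=0)        # average over trials -> (64, samples), Eq. (1)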

3 Results

A one-way analysis of variance (ANOVA) was carried out to compare the main effects of high-frequency, single-session binaural beats on the different behavioral and ERP measures mentioned above.
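In Python terms, each of these comparisons amounts to a call like the following scipy sketch; the group values are random stand-ins, not the study's data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    experimental = rng.normal(0.79, 0.11, 20)   # e.g. congruent-condition RTs (s)
    control = rng.normal(0.85, 0.13, 20)

    # One-way ANOVA comparing the two groups on a single measure.
    f_stat, p_value = stats.f_oneway(experimental, control)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")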

3.1 Behavioral Measures

As shown in Fig. 2a, the accuracy in the Stroop task (congruent condition) was higher in the experimental group compared to the control group (Experimental: μ = 98.33%, σ = 1.35% > Control: μ = 97.91%, σ = 2.14%; F(1, 31) = 0.46, p = 0.5, ηp² = 0.97), but the difference did not reach statistical significance. Also, as shown in Fig. 2b, the response time in the Stroop task (congruent condition) was lower in the experimental group compared to the control group (Experimental: μ = 0.79 s, σ = 0.11 s < Control: μ = 0.85 s, σ = 0.13 s; F(1, 31) = 1.75, p = 0.1, ηp² = 0.92), but again the difference did not reach statistical significance. As shown in Fig. 2c, the accuracy in the Stroop task (incongruent condition) was higher in the experimental group compared to the control group (Experimental: μ = 95.74%, σ = 4.22% > Control: μ = 94.44%, σ = 3.23%; F(1, 31) = 0.94, p = 0.3, ηp² = 0.88), without reaching statistical significance. However, the response time in the Stroop task (incongruent condition) was significantly lower in the experimental group compared to the control group (Experimental: μ = 0.89 s, σ = 0.13 s < Control: μ = 0.96 s, σ = 0.15 s; F(1, 31) = 5.44, p < 0.05, ηp² = 0.95), as shown in Fig. 2d.

Fig. 2 a The average accuracy in the congruent condition across both groups, b the average response time in the congruent condition across both groups, c the average accuracy in the incongruent condition across both groups, d the average response time in the incongruent condition across both groups. The error bars show standard errors around point estimates


3.2 ERP Measures

The effect of high-frequency binaural beats on the P300 and N400 amplitudes extracted from the ERP data was also analyzed. Table 1 shows the differences between the P300 amplitudes on different electrodes which reached statistical significance; similarly, Table 2 shows the significantly different N400 amplitudes. Figures 3, 4, 5 and 6 show the averaged ERPs from −200 ms (before stimulus) to 1000 ms (after stimulus), indicating the P300 and N400 amplitudes for all the significant electrodes in both the congruent and incongruent conditions.

Table 1 Means, standard deviations of the P300 amplitudes and the corresponding F-ratio, p-value and effect size for the significantly different electrodes in the congruent and incongruent conditions

Condition   | Electrode | Experimental (μ, σ)  | Control (μ, σ)       | F-ratio | p-value  | ηp²
Congruent   | AF7       | 1.78 µV, 0.94 µV     | 1.78 µV, 0.88 µV     | 5.06    | p < 0.05 | 0.91
Congruent   | CP1       | 0.93 µV, 0.56 µV     | 0.39 µV, 0.90 µV     | 3.99    | p < 0.05 | 0.94
Congruent   | CP2       | 1.31 µV, 0.5 µV      | 0.86 µV, 0.82 µV     | 3.41    | p < 0.05 | 0.83
Congruent   | CP3       | 0.94 µV, 0.59 µV     | 0.44 µV, 0.94 µV     | 3.12    | p < 0.05 | 0.79
Incongruent | AF7       | 1.89 µV, 0.58 µV     | 1.34 µV, 1.12 µV     | 8.92    | p < 0.05 | 0.61


Table 2 Means, standard deviations of the N400 amplitudes and the corresponding F-ratio, p-value and effect size for the significantly different electrodes in the congruent and incongruent conditions

Condition   | Electrode | Experimental (μ, σ)   | Control (μ, σ)        | F-ratio | p-value  | ηp²
Congruent   | F4        | −0.59 µV, 0.70 µV     | −1.22 µV, 1.04 µV     | 4.19    | p < 0.05 | 0.82
Congruent   | F6        | −0.39 µV, 0.62 µV     | −0.93 µV, 0.73 µV     | 5.25    | p < 0.05 | 0.91
Incongruent | F5        | −0.41 µV, 0.73 µV     | −0.84 µV, 0.63 µV     | 3.1     | p < 0.05 | 0.98
Incongruent | F6        | −0.40 µV, 0.69 µV     | −0.83 µV, 0.76 µV     | 2.78    | p < 0.05 | 0.93

Fig. 3 The averaged ERPs from −200 ms [before stimulus] to 1000 ms [after stimulus] for different electrodes indicating P300 amplitudes (~250 to ~350 ms) across both groups in the congruent condition: a for electrode AF7, b for electrode CP1, c for electrode CP2, d for electrode CP3. All the averaged differences in P300 amplitudes across these electrodes yielded statistical significance


Fig. 4 The averaged ERPs from −200 ms [before stimulus] to 1000 ms [after stimulus] for different electrodes indicating N400 amplitudes (~350 to ~475 ms) across both groups in the congruent condition: a for electrode F4, b for electrode F6. All the averaged differences in N400 amplitudes across these electrodes yielded statistical significance

Fig. 5 The averaged ERPs from −200 ms [before stimulus] to 1000 ms [after stimulus] for different electrodes indicating P300 amplitudes (~250 to ~350 ms) across both groups in the incongruent condition: a for electrode AF7

3.3 Correlation Analysis A two-tailed Pearson correlation analysis was conducted to examine the strength of the relationship between different dependent variables taken as a part of the experiment. Table 3 shows the statistically significant correlations obtained in the congruent condition and Table 4 shows the statistically significant correlations obtained in the incongruent condition.
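Each entry in these tables corresponds to a test of the following form; the scipy sketch below uses stand-in values, not the study's data:

    import numpy as np
    from scipy import stats

    p300_af7 = np.array([1.1, 1.6, 2.0, 1.4, 1.8])       # µV, per participant (stand-in)
    accuracy = np.array([96.0, 97.5, 98.9, 97.0, 98.2])  # %, per participant (stand-in)

    # Pearson correlation; scipy's p-value is two-tailed by default.
    r, p = stats.pearsonr(p300_af7, accuracy)
    print(f"r = {r:.2f}, p = {p:.3f}")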


Fig. 6 The averaged ERPs from −200 ms [before stimulus] to 1000 ms [after stimulus] for different electrodes indicating N400 amplitudes (~350 to ~475 ms) across both groups in the incongruent condition: a for electrode F5, b for electrode F6. All the averaged differences in N400 amplitudes across these electrodes yielded statistical significance

Table 3 Statistically significant correlations between different dependent variables in the experiment (congruent condition)

Dependent variable 1          | Dependent variable 2 | Extent of correlation (r) | Significance
P300 amplitude—electrode AF7  | Accuracy             | 0.67                      | p < 0.05
P300 amplitude—electrode AF7  | Response time        | −0.95                     | p < 0.01
P300 amplitude—electrode CP2  | Response time        | −0.55                     | p < 0.05
N400 amplitude—electrode F4   | Response time        | 0.44                      | p < 0.05

Table 4 Statistically significant correlation between different dependent variables in the experiment (incongruent condition)

Dependent variable 1          | Dependent variable 2 | Extent of correlation (r) | Significance
P300 amplitude—electrode CP5  | Accuracy             | 0.84                      | p < 0.01

4 Discussion

In the present work, the effect of high-frequency, single-session binaural beats on executive functioning in healthy adults was evaluated. The participants were randomly divided into two groups: experimental and control. The experimental group undertook a 20-min intervention session of beta-frequency binaural beats before executing the Stroop task; the control group directly executed the Stroop task without any intervention. As hypothesized, the experimental group recorded relatively higher


accuracy and relatively faster reaction times in the Stroop task compared to the control group, as shown in Fig. 2. This difference, however, did not reach statistical significance. A statistically significant difference between the groups would probably be reached if the beta-frequency binaural beats intervention were administered longitudinally (i.e., over several intervention sessions) rather than in a single session. As shown in Figs. 3, 4, 5 and 6, and Tables 1 and 2, the ERP measures P300 and N400 also yielded significant results that support our hypothesis. As shown in Figs. 3 and 5, the averaged P300 amplitudes were significantly higher for the experimental group than for the control group. These results are consistent with [10], which articulated that a binaural beats intervention leads to a gradual increase in the excitatory potentials and a gradual decrease in the inhibitory potentials of neurons, ultimately leading to better synaptic information transmission across the brain. These effects were particularly evident in the frontal and parietal electrodes, which, according to [5], are the premier decision-making and cognitive-control centers of the brain. Also, as shown in Figs. 4 and 6, the averaged N400 amplitudes were significantly lower in the experimental group than in the control group. These results are consistent with [11], which empirically and qualitatively showed that the N400 is an involuntary neurological signature of cognitive interference, with a drop in N400 indicating a drop in interference and hence better clarity in the decision-making centers of the brain. Our results were well complemented by the correlation analyses between the behavioral and cognitive descriptors (Tables 3 and 4), with the averaged P300 amplitudes and accuracy showing a significant positive correlation and the averaged P300 amplitudes and response time showing a significant negative correlation.

5 Conclusion

The present ERP study examined the effect of high-frequency binaural beats on executive functioning in healthy adults. Although the behavioral results from the Stroop task did not reach statistical significance, they suggest that binaural beats have the potential to improve executive functioning in individuals exposed to them. In the future, we intend to administer high-frequency binaural beats as an intervention through sham-controlled, longitudinal studies in which the intervention is administered over several sessions; such an undertaking would possibly lead to significantly better performance at transfer. In addition, high-frequency binaural beats could also be used as an effective intervention technique alongside other modern cortical modulation techniques like transcranial direct current stimulation (tDCS) and neurofeedback in personalized performance-enhancement frameworks.


Acknowledgements This research was supported by a grant from the Defence Research and Development Organization (DRDO) titled “Vision Research in Cognitive Neuroscience” (ST-14/DIPR734) to Dr. Sushil Chandra.

References

1. Stroop JR (1935) Studies of interference in serial verbal reactions. J Exp Psychol 18:643–662
2. Peterson BS, Skudlarski P, Gatenby JC, Zhang H, Anderson AW, Gore JC (1999) An fMRI study of Stroop word-color interference: evidence for cingulate subregions subserving multiple distributed attentional systems. Biol Psychiat 45:1237–1258
3. Taylor SF, Kornblum S, Lauber EJ, Minoshima S, Koeppe RA (1997) Isolation of specific interference processing in the Stroop task: PET activation studies. NeuroImage 6:81–92
4. Aine CJ, Harter MR (1984) Hemispheric differences in event-related potentials to Stroop stimuli. Ann N Y Acad Sci 425:154–156
5. Markela-Lerenc J, Ille N, Kaiser S, Fiedler P, Mundt C, Weisbrod M (2004) Prefrontal-cingulate activation during executive control: which comes first? Cogn Brain Res 18:278–287
6. Melcher T, Gruber O (2009) Decomposing interference during Stroop performance into different conflict factors: an event-related fMRI study. Cortex 45:189–200
7. Peterson BS, Kane MJ, Alexander GM, Lacadie C, Skudlarski P, Leung H-C, May J, Gore JC (2002) An event-related functional MRI study comparing interference effects in the Simon and Stroop tasks. Cogn Brain Res 13:427–440
8. Appelbaum LG, Meyerhoff KL, Woldorff MG (2009) Priming and backward influences in the human brain: processing interactions during the Stroop interference effect. Cereb Cortex 19:2508–2521
9. Larson MJ, Kaufman DAS, Perlstein WM (2009) Neural time course of conflict adaptation effects on the Stroop task. Neuropsychologia 47:663–670
10. Liotti M, Woldorff MG, Perez R, Mayberg HS (2000) An ERP study of the temporal course of the Stroop color-word interference effect. Neuropsychologia 38:701–711
11. Hanslmayr S, Pastötter B, Bäuml K-H, Gruber S, Wimber M, Klimesch W (2007) The electrophysiological dynamics of interference during the Stroop task. J Cogn Neurosci 20:215–225
12. West R (2003) Neural correlates of cognitive control and conflict detection in the Stroop and digit-location tasks. Neuropsychologia 41:1122–1135
13. Chen S, Melara RD (2009) Sequential effects in the Simon task: conflict adaptation or feature integration? Brain Res 1297:89–100
14. West R, Jakubek K, Wymbs N, Perry M, Moore K (2005) Neural correlates of conflict processing. Exp Brain Res 167:38–48
15. Oster G (1973) Auditory beats in the brain. Sci Am 229:94–102
16. Licklider JCR, Webster JC, Hedlun JM (1950) On the frequency limits of binaural beats. J Acoust Soc Am 22:468–473
17. Wahbeh H, Calabrese C, Zwickey H, Zajdel D (2007) Binaural beat technology in humans: a pilot study to assess neuropsychologic, physiologic, and electroencephalographic effects. J Altern Complement Med 13:199–206
18. McConnell PA, Froeliger B, Garland EL, Ives JC, Sforzo GA (2014) Auditory driving of the autonomic nervous system: listening to theta-frequency binaural beats post-exercise increases parasympathetic activation and sympathetic withdrawal. Frontiers Psychol 5:1248
19. Gantt MA, Dadds S, Burns DS, Glaser D, Moore AD (2017) The effect of binaural beat technology on the cardiovascular stress response in military service members with postdeployment stress. J Nurs Scholarsh 49:411–420


20. Reedijk SA, Bolders A, Colzato LS, Hommel B (2015) Eliminating the attentional blink through binaural beats: a case for tailored cognitive enhancement. Frontiers Psychiatry 6:82
21. Olivers CNL, Nieuwenhuis S (2006) The beneficial effects of additional task load, positive affect, and instruction on the attentional blink. J Exp Psychol Hum Percept Perform 32:364–379
22. Mathôt S, Schreij D, Theeuwes J (2012) OpenSesame: an open-source, graphical experiment builder for the social sciences. Behav Res Methods 44:314–324

Chapter 21

Optimized Data Hiding for the Image Steganography Using HVS Characteristics

Sahil Gupta and Naresh Kumar Garg

1 Introduction

The communication of sensitive data on the Internet has increased due to advancements in technology [1]. An attacker can passively monitor the data by analyzing the communication line, which is referred to as an eavesdropping attack. Cryptography and steganography algorithms are used to overcome this issue [2]. Cryptography algorithms scramble the data using an encryption algorithm and communicate it, but the scrambled data attracts the attacker's attention [3]. On the other side, steganography algorithms hide the sensitive data in a cover medium and attract no attention from the attacker, as shown in Fig. 1 [4]. Thus, in our work, we have explored steganography algorithms.

The word steganography derives from the Greek and means 'covered writing' [5]. In steganography, various cover media are used for data hiding: text, audio, image, and video [6]. The image is the most preferred cover medium due to its high embedding capacity and security. In the literature, the Least Significant Bit (LSB) method is the most preferred data hiding technique [7]. In the LSB technique, the cover pixel LSB bits are replaced with secret data bits without considering the human visual system (HVS) characteristics, and distortion is introduced when k bits are hidden in the LSBs of a cover pixel [8]. Further, to reduce this distortion, optimized data hiding techniques have been proposed.
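As a point of reference, plain k-bit LSB replacement, the baseline whose distortion the optimized techniques below try to avoid, can be written in a few lines. This Python sketch is our illustration, not code from the paper:

    # Replace the k least significant bits of an 8-bit pixel with `bits`.
    def lsb_embed(pixel: int, bits: int, k: int) -> int:
        return (pixel & ~((1 << k) - 1)) | (bits & ((1 << k) - 1))

    # Recover the k embedded bits from a stego pixel.
    def lsb_extract(pixel: int, k: int) -> int:
        return pixel & ((1 << k) - 1)

    assert lsb_embed(0b10110110, 0b01, k=2) == 0b10110101
    assert lsb_extract(0b10110101, k=2) == 0b01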

S. Gupta (B) · N. K. Garg
Maharaja Ranjit Singh Punjab Technical University, Bathinda, Punjab, India
e-mail: [email protected]
N. K. Garg
e-mail: [email protected]


Fig. 1 Block diagram of steganography: cover media and secret data are input to a data hiding algorithm, which outputs the stego media

The most popular optimized data hiding techniques are the exhaustive search and the LSB-matched data hiding techniques [9, 10]. The exhaustive-search-based technique generates an optimized secret data matrix but takes a long execution time to achieve this goal. The LSB-matched data hiding technique matches the secret data bits with the cover pixel bits and gives the optimal index according to the matched position [10]. Since the cover pixel is 8 bits long, a maximum of 8 iterations is needed to match the secret data bits with the cover pixel bits; if no match is found, the data is hidden in the LSB bits of the cover pixel. The limitation of the LSB-match technique is that the embedding capacity is very low: the optimal index value varies from 0 to 7, so a minimum of 3 bits is required to hide each optimal index, and consequently a large part of the image is used to hide the optimal indices.

The proposed technique is based on the LSB-matched data hiding technique and provides better visual quality and embedding capacity while considering the HVS characteristics. In the cover image, data hiding in the LSB bits produces less distortion than in the most significant bits (MSB). Thus, we consider the 4 LSB bits for the optimal match between the cover image and the secret data bits. This data hiding process requires only four iterations, the optimal index value varies from 0 to 3, and a minimum of 2 bits is required to hide the optimal index. The main contributions of this paper are as follows.

• Due to the HVS characteristics, the green plane is used as a reference plane for data hiding in the red and blue planes.
• The smooth and edge regions of the red and blue planes are found based on the reference plane; the Canny edge detection technique is applied to the reference plane to determine the edges and the smooth region of the image.
• Optimized data hiding is performed in the smooth region, due to the high correlation between consecutive pixels, and the optimal index is hidden in the edges, due to the minimum correlation between consecutive pixels.
• We achieve better visual quality compared to the existing techniques.

The rest of the paper is organized as follows. Section 2 explains the related work done in the field of optimized data hiding. Section 3 describes the proposed technique. Section 4 shows the experimental results and a comparative analysis with the existing techniques. The conclusion is drawn in Sect. 5.


2 Related Work

In this section, the existing optimized data hiding technique and the HVS characteristics are discussed. Section 2.1 explains the existing optimized data hiding technique, in which cover pixels are matched with the secret data bits. Section 2.2 explains the HVS characteristics.

2.1 Optimized Data Hiding Technique

This technique was proposed by Pratik D. Shah and R. S. Bichkar in 2018 [10]. In their technique, the secret data is read and split into 2-bit chunks. Each secret chunk is then matched against the cover pixel bits; if an optimal match is found, the corresponding optimal index is determined (a sketch of this matching step appears after the list below). If an optimal match is not found, the data is hidden in the LSB bits of the pixel and the corresponding index (0) is recorded. Finally, the optimal indices are hidden in the same image using a genetic algorithm. The whole process of the optimal match is illustrated in Fig. 2: the secret data and its binary value are shown in Fig. 2a, the cover image pixels and their binary values in Fig. 2b, and the optimal indices generated after the optimal match in Fig. 2c.

Fig. 2 a Secret data and its binary representation. b Cover image pixels and its binary representation. c Optimal index value

The main limitations of this technique are as follows.

• Each pixel is processed up to 8 times for the optimal match, which increases the execution time of data hiding.
• The optimal index value varies from 0 to 7, so a minimum of 3 bits is required to hide the index value, which degrades the visual quality as well as the embedding capacity.
• The genetic algorithm's initialization parameters need to be communicated to the receiver, so that the receiver can extract the optimal index values.
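A minimal Python sketch of the matching step just described follows; it is our illustration, and the window size and exact index convention are assumptions based on the description of [10]:

    # Scan the cover pixel's 8 bits for a position whose 2-bit window
    # already equals the secret chunk, so no pixel change is needed;
    # the window position is the optimal index.
    def find_match_index(pixel: int, chunk: int, window=2, nbits=8):
        for idx in range(nbits - window + 1):
            if (pixel >> idx) & ((1 << window) - 1) == chunk:
                return idx                 # optimal index found
        return None                        # no match: fall back to plain LSB

    print(find_match_index(0b10110110, 0b01))   # -> 2 (bits 3..2 are '01')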

2.2 HVS Characteristics

In steganography, colour images are most preferred for data hiding due to their high embedding capacity and security, and the proper selection of the RGB plane for data hiding reduces visual attacks. According to colour theory, human eyes have four light receptors [11]: the rods are sensitive to black, white, and grey shades, while the cones are sensitive to various colours. As shown in Fig. 3, the eyes are less sensitive to blue–violet than to red and green. Hence, variation in blue and red pixel intensities attracts less attention from human eyes. Therefore, the proposed technique takes the HVS characteristics into consideration for data hiding.

3 Proposed Technique

We have designed an optimized data hiding technique while taking care of the HVS characteristics. The block diagram of the proposed technique is shown in Fig. 4. Initially, the cover colour image is read and its RGB planes are extracted. According to the HVS characteristics, green is the colour most sensitive to human eyes as compared to the other planes.

Fig. 3 Colour wavelength


Fig. 4 Block diagram of proposed technique

Thus, we take the green plane as the reference plane for data hiding in the blue and red planes. The image contains smooth and edge regions. Smooth-region pixels have a high correlation among them, and variability in those pixels attracts the attacker's attention; edge-region pixels have a low correlation among them, and variability in those pixels attracts less attention. The smooth and edge regions of the image are determined using the Canny edge detection technique, which generates a binary image in which '0' represents smooth pixels and '1' represents edge pixels; a threshold value of 0.5 is used. After that, the secret data is read. The secret data and the blue plane are input to the optimized data hiding technique. The secret data bits


are matched with the cover pixel bits in the smooth region and the optimal index is generated, so minimal distortion is introduced in the smooth region. The reference edge matrix generated by applying the Canny edge detection technique to the green plane guides the data hiding in the red and blue planes. After that, the optimal index is hidden in the edge region using the 2-bit LSB technique. Finally, the stego RGB planes are concatenated and the stego colour image is generated as output. The proposed technique gives better visual quality and embedding capacity, and takes fewer iterations for data hiding. The pseudocode for the proposed technique is shown in Table 1.

Table 1 Pseudocode for the proposed technique

Input: Cover Image (C), Secret Data (D)
Output: Stego Image (S)
1. Start
2. Extract the cover image RGB planes
3. Apply the Canny edge detection technique on the reference green plane:
   E = edge(Green Plane, 'Canny', 0.5)
4. Optimal match in the smooth region:
   for i = 1 to Row
     for j = 1 to Col
       if E(i, j) == '0'
         for k = 1 to 4
           if the blue-plane pixel bits match the secret data bits
             return the optimal index between 0 and 3
           else
             hide the secret data in the blue-plane pixel LSBs and return optimal index '0'
           end
         end
       end
     end
   end
5. Optimal index hiding:
   for i = 1 to Row
     for j = 1 to Col
       if E(i, j) == '1'
         hide the optimal index in the blue- and red-plane edge pixels using the 2-bit LSB technique
       end
     end
   end
6. Concatenate the stego RGB planes to reconstruct the colour stego image
7. End
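The following Python sketch mirrors the flow of Table 1 using OpenCV. It is an illustrative reading of the scheme, not the authors' implementation: the Canny thresholds are assumptions (OpenCV's hysteresis thresholds differ from MATLAB's normalized 0.5), the 4-LSB window convention is assumed, and the final step of hiding the index list in the edge pixels is omitted:

    import cv2
    import numpy as np

    def embed(cover_bgr: np.ndarray, chunks):
        """Hide an iterable of 2-bit `chunks` in the blue plane's smooth region."""
        b, g, r = cv2.split(cover_bgr)
        edges = cv2.Canny(g, 100, 200) > 0       # reference edge map from green plane
        indices = []
        it = iter(chunks)
        for (y, x) in zip(*np.where(~edges)):    # smooth-region pixels of blue plane
            try:
                c = next(it)
            except StopIteration:
                break
            pix = int(b[y, x])
            for k in range(4):                   # search only within the 4 LSBs
                if (pix >> k) & 0b11 == c:       # match found: optimal index 0..3
                    indices.append(k)
                    break
            else:                                # no match: overwrite the 2 LSBs
                b[y, x] = (pix & ~0b11) | c
                indices.append(0)
        # `indices` would then be hidden in edge pixels of r and b via 2-bit LSB.
        return cv2.merge([b, g, r]), indices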


4 Experimental Results

This section shows the experimental results of the proposed technique, investigated on standard dataset images taken from the USC-SIPI image database [12]. The resolution of the images is 256 × 256 and the format is .jpg. Initially, we perform a visual quality analysis between the cover and stego images. After that, we analyse the strength of the proposed technique using visual quality analysis parameters, measuring the peak signal to noise ratio (PSNR) and the embedding capacity. Finally, we compare the proposed technique with the existing data hiding techniques. We have used MATLAB 2013a for simulation purposes.

4.1 Visual Perceptibility Analysis

Our proposed technique has been tested on a number of standard dataset images. From the experimental results, we found that there is an insignificant change in the stego image after data hiding; the cover and stego images look similar, as shown in Table 2.

4.2 Visual Quality Analysis Parameters

We have measured the two most essential parameters, PSNR and embedding capacity, for the proposed technique. A detailed description of the parameters is given below.

• Peak signal to noise ratio (PSNR): This parameter measures the visual quality between the cover and stego image. Ideally an infinite PSNR is desired, but data hiding introduces distortion; the higher the PSNR value, the better the visual quality of the stego image [13]. It is calculated using Eq. (1):

\[ \mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{Peak}^2}{\mathrm{MSE}}\right) \tag{1} \]

where MSE denotes the mean square error between the cover and stego image, calculated using Eq. (2), and Peak denotes the maximum intensity value that can be represented in the image, equal to 255:

\[ \mathrm{MSE} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} (x_{ij} - y_{ij})^2 \tag{2} \]

where x_ij and y_ij are the pixel values of the cover and stego image, respectively, and H × W denotes the resolution of the image.
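Equations (1) and (2) translate directly into the following numpy sketch; the helper is our illustration, with Peak fixed at 255 for 8-bit images:

    import numpy as np

    def psnr(cover: np.ndarray, stego: np.ndarray) -> float:
        """PSNR in dB between two same-shape uint8 images, per Eqs. (1)-(2)."""
        mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)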


Table 2 Visual perceptibility analysis between cover and stego images (cover images shown alongside the corresponding stego images)


Table 3 PSNR for the different images

Images (.jpg) | PSNR (in dB)
Lena          | 58.16
Baboon        | 51.74
Barbara       | 54.64
Pepper        | 57.40
Cameraman     | 62.60

Table 4 Embedding capacity for the different images

Images (.jpg) | Embedding capacity (in bits)
Lena          | 24,352
Baboon        | 47,964
Barbara       | 29,192
Pepper        | 20,704
Cameraman     | 26,380

In Table 3, we have calculated the PSNR for the different images. The results show that the cameraman image achieves the highest PSNR and the baboon image the lowest. In the proposed technique, the PSNR depends on the number of matches between the cover pixel and secret data bits and on the number of edges available.

• Embedding capacity: This parameter gives the total number of bits hidden in the cover image [14], measured in bits. In Table 4, we have calculated the embedding capacity for the different images. The results show that the baboon image achieves the highest embedding capacity and the pepper image the lowest. In the proposed technique, the embedding capacity depends on the number of edges available in the image.

4.3 Comparative Analysis with the Existing Techniques

Finally, we compare the proposed technique with the most popular optimized data hiding technique in Table 5. The table shows that the proposed technique gives better PSNR and approximately the same average embedding capacity. In addition, the proposed technique requires four iterations per pixel to search for the optimal match, whereas the existing technique takes eight iterations per pixel.


Table 5 Comparative analysis with the existing technique

Images     | Existing [10] PSNR (dB) | Existing embedding capacity (bits) | Proposed PSNR (dB) | Proposed embedding capacity (bits)
Lena       | 52.33                   | 32,768                             | 58.16              | 24,352
Baboon     | 54.43                   | 32,768                             | 51.74              | 47,964
Barbara    | 53.80                   | 32,768                             | 54.64              | 29,192
Cameraman  | 52.36                   | 32,768                             | 62.60              | 26,380
Average    | 53.23                   | 32,768                             | 56.79              | 31,972

5 Conclusion and Future Work

In this paper, we have designed an optimized data hiding technique. Initially, the colour image is read and the RGB planes are extracted. The HVS characteristics show that green is more sensitive to human eyes than the red and blue planes; thus, the green plane is taken as a reference plane to hide the data in the red and blue planes. The green plane is processed with the Canny edge detection technique to determine the edges in the image. After that, the secret data bits are matched with the cover image pixel bits in the smooth region of the blue plane and the optimal match index is determined; the optimal match index is hidden in the edges of the red and blue planes. On the other side, if an optimal match is not found, the data is hidden in the LSBs of the cover pixel. The experimental results show that the proposed technique takes 50% fewer iterations and provides 5.7% better visual quality in terms of PSNR compared to the existing technique. In the future, to improve security and robustness, we will combine the proposed technique with cryptography and error correction algorithms.

References

1. Baig F, Khan MF, Beg S, Shah T, Saleem K (2016) Onion steganography: a novel layering approach. Nonlinear Dyn 84(3):1431–1446
2. Antonio H, Prasad PWC, Alsadoon A (2019) Implementation of cryptography in steganography for enhanced security. Multimedia Tools Appl 78(23):32721–32734
3. Parah SA, Sheikh JA, Assad UI, Bhat GM (2017) Hiding in encrypted images: a three tier security data hiding technique. Multidimension Syst Signal Process 28(2):549–572
4. Mukherjee S, Sanyal G (2019) Edge based image steganography with variable threshold. Multimedia Tools Appl 78(12):16363–16388
5. Muhammad K, Ahmad J, Rehman NU, Jan Z, Sajjad M (2017) CISSKA-LSB: color image steganography using stego key-directed adaptive LSB substitution method. Multimedia Tools Appl 76(6):8597–8626
6. Wang S, Yang B, Niu X (2010) A secure steganography method based on genetic algorithm. J Inf Hiding Multimedia Signal Process 1(1):28–35
7. Perumal K, Muthusamy S, Gengavel G (2019) Robust multitier spatial domain secured color image steganography in server environment. Cluster Comput 22(5):11285–11293


8. Mstafa RJ, Elleithy KM (2016) A video steganography algorithm based on Kanade-Lucas-Tomasi tracking algorithm and error correcting codes. Multimedia Tools Appl 75(17):10311–10333
9. Kanan HR, Nazeri B (2014) A novel image steganography scheme with high embedding capacity and tunable visual image quality based on a genetic algorithm. Expert Syst Appl 41(14):6123–6130
10. Shah PD, Bichkar RS (2018) A secure spatial domain image steganography using genetic algorithm and linear congruential generator. In: International conference on intelligent computing and applications. Springer, Singapore, pp 119–129
11. Singh A, Singh H (2015) An improved LSB based image steganography technique for RGB images. In: IEEE international conference on electrical, computer and communication technologies
12. http://sipi.usc.edu/database/
13. Mukherjee N, Paul G, Saha SK (2018) An efficient multi-bit steganography algorithm in spatial domain with two-layer security. Multimedia Tools Appl 77(14):18451–18481
14. Swain G (2018) High capacity image steganography using modified LSB substitution and PVD against pixel difference histogram analysis. Secur Commun Networks

Chapter 22

Impact of Imperfect CSI on the Performance of Inhomogeneous Underwater VLC System

Rachna Sharma and Yogesh N. Trivedi

1 Introduction The earth’s surface covers more than two-thirds with water of ocean and sea. Numerous maritime activities such as archaeology, offshore oil field exploration, port security and tactical surveillance have been monitored continuously, and underwater communication (UWC) becomes the necessity for various commercial applications related to industries and government. Acoustic wireless communication (AWC) has been preferred inside the water over long-range communication (few km) but has drawback of low data rate (up to Kbps) [1]. Visible light communication is emerging as an attractive alternative to AWC due to its ability to support the high data rate [2, 3]. Further, the properties of seawater are transparent to blue and green light (450– 530 nm) and exhibit low attenuation [4]. UWVLC provides data rate up to Gbps in real-time environment [5]. Various literatures presented different types of light communications, i.e. horizontal, vertical and slant. Horizontal communication has been illustrated in [6, 7] at a certain depth of water and addressed the effect of fixed underwater turbulence. Some literature proposed the work on propagation losses with increasing data rates [8, 9]. ANLOS communication has been taken into account with link budget model for received SNR in [4]. The BER with vertical SISO link considering underwater scenario has been evaluated in [10]. In [11], the outage probability and diversity

R. Sharma (B) · Y. N. Trivedi
Institute of Technology, Nirma University, Ahmedabad, India
e-mail: [email protected]
Y. N. Trivedi
e-mail: [email protected]


gain considering MIMO for the vertical link have been analysed. Recently, energy harvesting has been applied to UWVLC with a SISO vertical link in [12].

Since the sea surface is directly exposed to sunlight, the temperature of the surface water is higher than that of the deeper, darker water. Additionally, the concentration of salt (eddy) particles is higher at the bottom than in the surface water [13, 14]. The variation of temperature and the amount of salt particles with the depth of water result in an inhomogeneous underwater environment [4, 10]. The pressure also increases by 10 dbar for every 10 m of depth [4]. Thus, the concentration of water in the ocean is not uniform and changes with depth. Ocean properties also change with latitude and environmental conditions (seasons) [14, 15]. The depth-dependent vertical profile [10] shows that the change of temperature with depth is larger than the change of salinity. The effect of pressure is ignored because of the nearly incompressible nature of water at high depth. This inhomogeneous density of water causes a change in the refractive index of seawater, which results in optical turbulence. Optical turbulence initiates instantaneous fluctuations in the received signal whose strength varies with depth. The weak ocean turbulence effect with depth has been revealed in [10], where the authors presented a modified scintillation index incorporating the eddy diffusivity ratio [16] and analysed the BER for a vertical SISO link up to 120-m depth from the surface of the water.

In underwater wireless communication systems, on–off keying (OOK) and pulse amplitude modulation are the commonly used modulation techniques [4, 12, 17, 18]. One of the drawbacks of OOK modulation is its poor spectral efficiency. Quadrature amplitude modulation (QAM) is a more often used modulation technique in wireless communication systems due to its high spectral efficiency [18, 19], and rectangular QAM (RQAM) is a multifaceted variant of QAM which has received remarkable attention due to its generic nature [20].

2 System Model

In this work, we consider a vertical UWVLC system in which the communication link is modelled as a cascaded structure of multiple layers, as shown in Fig. 1. The statistical characteristics of each layer vary with depth. The depth of each layer is limited to a maximum of 30 m, because the impact of the log-amplitude variance is negligible for a 30-m thickness irrespective of the depth [10, 11]. The transmitter (Tx) is located just below the sea surface, and the receiver (Rx) is at a vertical distance of 60 m (two layers) or 120 m (four layers) from the transmitter. The total transmission distance can be written as \( L_N = \sum_{n=1}^{N} l_n \), where l_n is the thickness of each layer and N represents the number of layers; N is taken as two or four in this paper. The fading coefficients of the layers are modelled as independent and non-identically distributed lognormal random variables, and the overall fading coefficient of the link is \( h_T = \prod_{n=1}^{N} h_n \). Under the assumption of weak oceanic turbulence, the probability density function (PDF) of h_T follows the lognormal distribution [10, 21].


Fig. 1 Vertical underwater cascaded channel model

The PDF of h_T is given by

\[ f_{h_T}(h_T) = \frac{1}{\sqrt{2\pi}\,\sigma_{h_T} h_T} \exp\!\left( \frac{-(\ln h_T - \mu_{h_T})^2}{2\sigma_{h_T}^2} \right), \tag{1} \]

where \( \mu_{h_T} = \sum_{n=1}^{N} \mu_{h_n} \) and \( \sigma_{h_T}^2 = \sum_{n=1}^{N} \sigma_{h_n}^2 \), in which \( \sigma_{h_n}^2 \) denotes the scintillation index for the nth layer. The parameters used to compute the scintillation index are given in [10, 16] and are listed in Table 1.

\[ \sigma_{h_n}^2 = 8\pi^2 k_0^2 l_n \int_0^1 \int_0^\infty \alpha\, \Phi_n(\alpha) \left[ 1 - \cos\!\left( \frac{\alpha^2 l_n}{k_0}\, \beta \left( 1 - \left( 1 - \frac{d_n}{l_n} \right) \beta \right) \right) \right] \mathrm{d}\alpha\, \mathrm{d}\beta, \tag{2} \]

where \( \Phi_n(\alpha) \) is the spatial power spectrum model given by

\[ \Phi_n(\alpha) = (4\pi\alpha^2)^{-1} C_0 \frac{\alpha_n^2}{\omega_n^2}\, \chi_T\, \varepsilon^{-1/3} \alpha^{-5/3} \left[ 1 + C_1 (\alpha\eta_n)^{2/3} \right] \times \left[ \omega_n^2 \exp\!\left(-C_0 C_1^{-2} P_{T_n}^{-1} \delta_n\right) + d_{r_n} \exp\!\left(-C_0 C_1^{-2} P_{S_n}^{-1} \delta_n\right) - \omega_n (d_{r_n} + 1) \exp\!\left(-0.5\, C_0 C_1^{-2} P_{TS_n}^{-1} \delta_n\right) \right] \tag{3} \]


Table 1 Definition of all variables in (1) and (2)

Parameter | Definition
C₀ = 0.72 | Constant
C₁ = 2.35 | Constant
χ_T | Dissipation rate of mean-square temperature
ε | Dissipation rate of turbulent kinetic energy
η_n | Kolmogorov microscale length
α | Magnitude of spatial frequency
D_Sn | Molecular diffusivity of salt
D_Tn | Molecular diffusivity of temperature
P_TSn | One half of the harmonic mean of P_Sn and P_Tn
P_Sn | Prandtl number for salinity
P_Tn | Prandtl number for temperature
ω_n = [α_n (dT₀/dz_n)] / [β_n (dS₀/dz_n)] | Relative strength of temperature and salinity fluctuations
β_n | Saline concentration coefficient
ΔS_n | Salinity difference between top and bottom boundaries
α_n | Thermal expansion coefficient
d_rn | Eddy diffusivity ratio
k₀ = 2π/λ | Wave number
λ | Wavelength


The δ_n and d_rn terms are defined as

\[ \delta_n = 1.5\, C_1^2 (\alpha\eta_n)^{4/3} + C_1^3 (\alpha\eta_n)^2 \tag{4} \]

and

\[ d_{r_n} = \begin{cases} |\omega_n| \big/ \left( |\omega_n| - \sqrt{|\omega_n|(|\omega_n| - 1)} \right), & |\omega_n| \ge 1 \\ 1.85|\omega_n| - 0.85, & 0.5 \le |\omega_n| \le 1 \\ 1.5|\omega_n|, & |\omega_n| < 0.5 \end{cases} \tag{5} \]

respectively.
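Equations (4) and (5) are straightforward to transcribe; the following Python helpers are our illustration, with C₁ taken from Table 1:

    import math

    C1 = 2.35

    def delta_n(alpha_eta: float) -> float:
        """Eq. (4): alpha_eta is the product alpha * eta_n."""
        return 1.5 * C1**2 * alpha_eta**(4 / 3) + C1**3 * alpha_eta**2

    def d_rn(omega: float) -> float:
        """Eq. (5): eddy diffusivity ratio as a function of omega_n."""
        w = abs(omega)
        if w >= 1:
            return w / (w - math.sqrt(w * (w - 1)))
        if w >= 0.5:
            return 1.85 * w - 0.85
        return 1.5 * w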

Under weak turbulence, the practical constraint of imperfect channel state information is considered at the receiver. The actual channel h_T and the estimated channel \( \hat{h}_T \) are related as \( h_T = \hat{h}_T + \delta h_T \). The channel estimation error \( \delta h_T \) is modelled as a zero-mean Gaussian random variable with variance \( \sigma_{\delta h_T}^2 = \sigma_{h_T}^2 / (1 + \rho l^2 \sigma_{h_T}^2 \psi_0) \), where ρ indicates the channel estimate quality, l is the path loss and \( \psi_0 = \eta_r^2 P_t^2 / \sigma^2 \) is the averaged received electrical SNR; P_t is the transmitted optical power, and η_r and σ² are the receiver responsivity and the receiver noise variance, respectively. The estimated channel variance is

\[ \sigma_{\hat{h}_T}^2 = \sigma_{h_T}^2 - \sigma_{\delta h_T}^2 = \frac{\rho l^2 \psi_0 \sigma_{h_T}^4}{1 + \rho l^2 \psi_0 \sigma_{h_T}^2}. \tag{6} \]

The electrical current received at the output of the photo-detector at the Rx is given as

\[ i = R\, l\, P\, h_T\, s + v, \tag{7} \]

where R represents the responsivity, and l and P are the attenuation and the transmitted optical power, respectively. s is the transmitted information symbol with average power considered as 1, \( \mathbb{E}[|s|^2] = 1 \), and v is the additive white Gaussian noise (AWGN) with mean 0 and variance \( \sigma_v^2 \).

3 Outage Probability

In this section, we compute the outage probability for the considered underwater vertical communication system. From (7), the instantaneous electrical SNR at the Rx is given by

\[ \Psi = \psi_0 h_T^2, \tag{8} \]

where \( \psi_0 = R^2 l^2 P^2 / \sigma_v^2 \) is the average received SNR at the Rx. Outage occurs when Ψ falls below a predefined threshold ψ_th. The outage probability is computed as

\[ P_{\mathrm{out}} = P(\Psi < \psi_{\mathrm{th}}) = F_{\Psi}(\psi_{\mathrm{th}}). \tag{9} \]

Lemma 1 The distribution of a random variable Y = cX², in which c is a constant and X is a lognormally distributed random variable with parameters (μ_X, σ_X), is also lognormal: \( Y \sim \mathcal{LN}(2\mu_X + \ln c,\, 2\sigma_X) \).

From Lemma 1, Ψ is also lognormally distributed, \( \Psi \sim \mathcal{LN}(2\mu_{h_T} + \ln\psi_0,\, 2\sigma_{h_T}) \). Using Eq. (9), the outage probability is computed as

\[ P_{\mathrm{out}} = 1 - Q\!\left( \frac{\ln\psi_{\mathrm{th}} - 2\mu_{h_T} - \ln\psi_0}{2\sigma_{h_T}} \right). \tag{10} \]
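Equation (10) can be sanity-checked against the lognormal channel model by Monte Carlo; the following Python sketch compares the empirical outage rate with the analytic form. The μ, σ and SNR values are illustrative, not the paper's:

    import numpy as np
    from scipy.stats import norm

    mu_h, sigma_h = 0.0, 0.3          # parameters of ln(h_T) (illustrative)
    psi0_db, psi_th_db = 10.0, 5.0    # average SNR and threshold (dB)
    psi0, psi_th = 10**(psi0_db / 10), 10**(psi_th_db / 10)

    # Monte Carlo: draw lognormal h_T and count outage events.
    h = np.exp(mu_h + sigma_h * np.random.randn(1_000_000))
    p_out_mc = np.mean(psi0 * h**2 < psi_th)

    # Analytic Eq. (10); norm.sf is the Gaussian Q-function.
    arg = (np.log(psi_th) - 2 * mu_h - np.log(psi0)) / (2 * sigma_h)
    p_out_an = 1 - norm.sf(arg)

    print(f"Monte Carlo: {p_out_mc:.4f}  analytic: {p_out_an:.4f}")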


4 ASEP Analysis

In this section, we derive the analytical expressions of the ASEP for the N_I × N_Q-RQAM and M-ary PAM schemes. Using the PDF-based approach, the ASEP of the considered system can be computed as

\[ P_s = \int_0^{\infty} P_s(e \mid \psi)\, f_{\Psi}(\psi)\, \mathrm{d}\psi, \tag{11} \]

where P_s(e | ψ) is the conditional symbol error probability (SEP) for the AWGN channel and f_Ψ(ψ) is the PDF of the instantaneous SNR of the received signal at the Rx. Differentiating Eq. (10) with respect to ψ_th, and using ψ in place of ψ_th for notational convenience, results in the following PDF of the instantaneous SNR:

\[ f_{\Psi}(\psi) = \frac{1}{2\sqrt{2\pi}\,\sigma_{h_T}\psi} \exp\!\left( \frac{-\big(\ln(\psi/\psi_0) - 2\mu_{h_T}\big)^2}{4\sigma_{h_T}^2} \right). \tag{12} \]

4.1 N_I × N_Q-RQAM

The conditional symbol error probability for the N_I × N_Q-RQAM modulation scheme over the AWGN channel is given by [20]

(13)

where ϑ₂ = 1 − 1/N_Q and ς = d_Q/d_I, in which d_I and d_Q denote the in-phase and quadrature decision distances, respectively. On substituting (13) and (12) into (11) and rearranging the resulting terms, we get

(14)

where

\[ \aleph_1(\psi, a) = \frac{1}{2\sqrt{2\pi}\,\sigma_{h_T}} \int_0^{\infty} \frac{Q(a\sqrt{\psi})}{\psi} \exp\!\left( \frac{-\big(\ln(\psi/\psi_0) - 2\mu_{h_T}\big)^2}{4\sigma_{h_T}^2} \right) \mathrm{d}\psi \]

and

  √ −(ln(ψ/ψ◦ ) − 2μh T )2 Q(a ψ) exp ψ 4σh2T


\[ \aleph_2(\psi, a, b) = \frac{1}{2\sqrt{2\pi}\,\sigma_{h_T}} \int_0^{\infty} \frac{Q(a\sqrt{\psi})\,Q(b\sqrt{\psi})}{\psi} \exp\!\left( \frac{-\big(\ln(\psi/\psi_0) - 2\mu_{h_T}\big)^2}{4\sigma_{h_T}^2} \right) \mathrm{d}\psi. \]

Substituting \( \big(\ln(\psi/\psi_0) - 2\mu_{h_T}\big)/(2\sigma_{h_T}) = x \) reduces these integrals to the form \( \int_{-\infty}^{\infty} g(x)\exp(-x^2)\,\mathrm{d}x \), which matches the Gauss–Hermite form and can be solved using the numerical integration technique \( \sum_{i=1}^{n} w_i\, g(x_i) \), in which w_i and x_i are the weights and zeros of the Hermite polynomial, respectively [22]. Using this expression for the integrals and substituting back into (14) results in the closed-form expression of the ASEP.

4.2 M-ary PAM

The conditional expression of the symbol error rate (SER) of M-ary pulse amplitude modulation for the AWGN channel is given as [23]

(15)

where A = 2(M − 1)/(M log₂ M) and C = 3/((M − 1)(2M − 1)), in which M is the constellation size. Substituting (15) and (12) into (11) and following steps similar to Sect. 4.1 results in the following ASEP expression for the M-ary PAM scheme:

\[ P_s^{\mathrm{PAM}} = \frac{A}{\sqrt{2\pi}} \sum_{i=1}^{n} w_i\, Q\!\left( \sqrt{C}\, \exp\!\left( \sqrt{2}\,\sigma_{h_T} x_i + \mu_{h_T} \right) \right). \tag{16} \]
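Numerically, Eq. (16) as printed is a single Gauss–Hermite sum; the following Python sketch evaluates it for 4-PAM. The values of μ_{h_T}, σ_{h_T} and the Hermite order are illustrative assumptions, and norm.sf serves as the Q-function:

    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.stats import norm

    M = 4
    A = 2 * (M - 1) / (M * np.log2(M))       # as defined above
    C = 3 / ((M - 1) * (2 * M - 1))
    mu_h, sigma_h = 0.0, 0.3                 # illustrative channel parameters
    n = 30                                   # Hermite order (illustrative)

    x, w = hermgauss(n)                      # zeros and weights of Hermite polynomial
    terms = w * norm.sf(np.sqrt(C) * np.exp(np.sqrt(2) * sigma_h * x + mu_h))
    ps_pam = A / np.sqrt(2 * np.pi) * terms.sum()
    print(f"ASEP (4-PAM) ~ {ps_pam:.4e}")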


5 Numerical and Simulation Results

In this section, the outage probability and ASEP analytical expressions derived in Sects. 3 and 4 are verified using Monte Carlo simulations. Unless stated otherwise, we consider the following values for the simulation study, which correspond to the underwater channel model for the high-latitude Pacific Ocean: wavelength = 530 nm, dissipation rate of mean-squared temperature = 10⁻⁵ K² s⁻³, dissipation rate of turbulent kinetic energy per unit mass of fluid = 10⁻¹ m² s⁻³, relative strength of temperature and salinity fluctuations = −3, salinity range from 33 PPT to 36.5 PPT, and temperature range from 1 to 28 °C. The simulation study is performed for two different scenarios: (i) a source–detector vertical separation of 60 m, modelled using two layers with scintillation indexes [9.2 × 10⁻², 8.32 × 10⁻²], and (ii) a source–detector vertical separation of 120 m, modelled as four layers with scintillation indexes [9.2 × 10⁻², 8.32 × 10⁻², 7.10 × 10⁻², 5.57 × 10⁻²].

In Fig. 2, the outage probability for perfect CSI (ρ = ∞) and imperfect CSI (ρ = 1, ρ = 0.2) is presented for both two layers and four layers. The theoretical results overlap with the simulation results for all the investigated cases. It can be observed that the outage probability is lowest for perfect CSI, and that the system performance degrades as the CSI becomes more imperfect. Considering 10⁻³ as the target outage probability, the SNR required is approximately 30.5 dB and 39.5 dB for perfect CSI (ρ = ∞) in two layers and four layers, respectively, whereas, for the same target, the SNR increases to 38 and 47.1 dB in the case of imperfect CSI (ρ = 0.2)

Fig. 2 Outage probability of two layers and four layers versus average SNR ψ₀ (dB)

Fig. 3 ASEP for 4 × 2 QAM for two layers and four layers versus average SNR ψ₀ (dB)

Thus, the impact of imperfect CSI becomes more severe with the depth of water. Figure 3 shows the ASEP performance curves of 4 × 2-RQAM for two layers and four layers, considering both the perfect CSI case (ρ = ∞) and imperfect CSI (ρ = 1, ρ = 0.2). It is observed that imperfect knowledge of the CSI results in significant degradation of the system performance. For example, at SNR = 40 dB, for two layers, ASEPs of 3.46 × 10⁻⁵, 1.5 × 10⁻⁴ and 1.2 × 10⁻³ are achieved for perfect CSI, ρ = 1 and ρ = 0.2, respectively. In the case of four layers, however, the achieved ASEP increases to 1.4 × 10⁻³, 3.4 × 10⁻³ and 1.2 × 10⁻², respectively. Interestingly, with increasing depth, the impact of imperfect CSI becomes more pronounced.

Figures 4 and 5 show the ASEP versus SNR performance curves for the 4-PAM and 4-QAM schemes for two layers and four layers, respectively. We considered the cases of perfect CSI (ρ = ∞) and imperfect CSI (ρ = 0.2) in our simulation study. The simulation curves overlap with the analytical curves, confirming the accuracy of our derived analytical expressions. The analytical expression derived for the 4-PAM scheme in this paper is more accurate than the one presented in [10] (Fig. 4), in which the authors used a Q-function approximation. The imperfection in the channel estimate adversely affects the performance of the system irrespective of the modulation scheme and the number of layers. For instance, considering 10⁻⁴ as the target ASEP for 4-PAM, knowledge of perfect CSI results in a significant SNR gain of 9.2 dB and 7.9 dB for two layers and

Fig. 4 ASEP for 4-PAM and 4-QAM for two layers versus average SNR ψ₀ (dB)

Fig. 5 ASEP for 4-PAM and 4-QAM for four layers versus average SNR ψ₀ (dB)

four layers, respectively. Further, for the perfect CSI case, using 4-QAM results in an SNR gain of 9.2 dB compared with the 4-PAM scheme. Interestingly, the performance of PAM with perfect CSI and QAM with imperfect CSI is almost the same for the same constellation size. This is because of the better placement of constellation points in QAM as compared to PAM, which maximizes the minimum distance between the constellation points for a given average energy.

6 Conclusion

We presented a vertical two-layered and four-layered UWVLC system and studied its performance in terms of outage probability and ASEP. We considered imperfect CSI at the receiver and derived novel closed-form ASEP expressions for the PAM and RQAM schemes. It is shown that imperfect CSI results in significant performance degradation, which becomes more severe with increasing depth.

References

1. Kaushal H, Kaddoum G (2016) Underwater optical wireless communication. IEEE Access 4:1518–1547
2. Zeng Z, Fu S, Zhang H, Dong Y, Cheng J (2016) A survey of underwater optical wireless communications. IEEE Commun Surv Tuts 19(1):204–238
3. Gussen CM, Diniz PS, Campos M, Martins WA, Costa FM, Gois JN (2016) A survey of underwater wireless communication technologies. J Commun Inf Syst 31(1):242–255
4. Anous N, Abdallah M, Uysal M, Qaraqe K (2018) Performance evaluation of LOS and NLOS vertical inhomogeneous links in underwater visible light communications. IEEE Access 6:22408–22420
5. Shen C, Guo Y, Sun X, Liu G, Ho K, Ng TK, Alouini M, Ooi BS (2017) Going beyond 10-meter, Gbit/s underwater optical wireless communication links based on visible lasers. In: 2017 Opto-electronics and communications conference (OECC) and photonics global conference (PGC), pp 1–3
6. Peppas KP, Boucouvalas AC, Ghassemlooy Z (2017) Performance of underwater optical wireless communication with multi-pulse pulse-position modulation receivers and spatial diversity. IET Optoelectron 11(5):180–185
7. Tabeshnezhad A, Pourmina MA (2017) Outage analysis of relay-assisted underwater wireless optical communication systems. Opt Commun 405:297–305
8. Wang C, Yu HY, Zhu YJ (2016) A long distance underwater visible light communication system with single photon avalanche diode. IEEE Photonics J 8(5):1–11
9. Akhoundi F, Salehi JA, Tashakori A (2015) Cellular underwater wireless optical CDMA network: performance analysis and implementation concepts. IEEE Trans Commun 63(3):882–891
10. Elamassie M, Uysal M (2018) Performance characterization of vertical underwater VLC links in the presence of turbulence. In: 2018 11th international symposium on communication systems, networks and digital signal processing (CSNDSP), pp 1–6


11. Yilmaz A, Elamassie M, Uysal M (2019) Diversity gain analysis of underwater vertical MIMO VLC links in the presence of turbulence. In: IEEE international Black Sea conference on communications and networking (BlackSeaCom), pp 1–6
12. Ghasvarianjahromi S, Karbalayghareh M, Diamantoulakis PD, Karagiannidis GK, Uysal M (2019) Simultaneous lightwave information and power transfer in underwater visible light communications. In: 2019 IEEE 30th annual international symposium on personal, indoor and mobile radio communications (PIMRC), pp 1–6
13. Millard R, Seaver G (1990) An index of refraction algorithm for seawater over temperature, pressure, salinity, density and wavelength. Deep Sea Res Part A Oceanogr Res Pap 37(12):1909–1926
14. Chester R, Jickells TD (2012) Marine geochemistry, 3rd edn. Wiley-Blackwell
15. Johnson L, Green R, Leeson M (2013) Underwater optical wireless communications: depth dependent variations in attenuation. Appl Opt 52(33):7867–7873
16. Elamassie M, Uysal M, Baykal Y, Abdallah M, Qaraqe K (2017) Effect of eddy diffusivity ratio on underwater optical scintillation index. J Opt Soc Am A 34(11):1969–1973
17. Elamassie M, Miramirkhani F, Uysal M (2019) Performance characterization of underwater visible light communication. IEEE Trans Commun 67(1):543–552
18. Majumdar AK, Siegenthaler J, Land P (2012) Analysis of optical communications through the random air-water interface: feasibility for under-water communications. In: Laser communication and propagation through the atmosphere and oceans, vol 8517. SPIE, p 85170T
19. Cheng M, Guo L, Li J, Zhang Y (2016) Channel capacity of the OAM-based free-space optical communication links with Bessel–Gauss beams in turbulent ocean. IEEE Photonics J 8(1):1–11
20. Dixit D, Sahu P (2014) Performance analysis of rectangular QAM with SC receiver over Nakagami-m fading channels. IEEE Commun Lett 18(7):1262–1265
21. Jamali MV, Chizari A, Salehi JA (2017) Performance analysis of multi-hop underwater wireless optical communication systems. IEEE Photonics Technol Lett 29(5):462–465
22. Abramowitz M, Stegun IA (1972) Handbook of mathematical functions, 10th edn. Dover Publications, National Bureau of Standards, Washington, DC
23. Du KL, Swamy MN (2010) Wireless communication systems: from RF subsystems to 4G enabling technologies. Cambridge University Press

Chapter 23

Pre-configured (p)-Cycle Protection for Non-hamiltonian Networks

Vidhi Gupta, Rachna Asthana, and Yatindra Nath Singh

V. Gupta (B)
Harcourt Butler Technical University, Kanpur, India
e-mail: [email protected]

R. Asthana
Dr. Ambedkar Institute of Technology for Handicapped, Kanpur, India

Y. N. Singh
Indian Institute of Technology, Kanpur, India

1 Introduction

Protection mechanisms play a very important role today owing to the rapidly increasing demand for high-data-rate services in various industrial and telecom areas [1]. In wavelength division multiplexed (WDM) networks, the means of transmission is light, and the endpoint devices communicate with each other over paths formed by wavelengths, known as lightpaths. If any lightpath fails, the whole system is affected, with a high loss of data. It therefore becomes mandatory to protect these paths from failures, such as fiber cuts, in order to maintain a smooth flow of data services and avoid economic losses [2, 3]. An intermediate node of the network can switch an incoming lightpath towards other nodes in two ways: it can pass the same wavelength on, or it may switch to another wavelength. If the wavelength is converted, the process is referred to as wavelength conversion, and the device used for such conversion is known as a wavelength converter. Wavelength converters have high cost and complexity, and they degrade the signal performance during the conversion process; however, they are necessary in a network in order to reduce wavelength blocking [4].

As the working paths are established in the network, their protection becomes necessary, and protection paths are formed for them. During normal operation,



the traffic runs on the working paths; as soon as a failure is detected, the traffic is switched over to the protection path [5]. Among the various protection mechanisms, such as ring protection and mesh protection, the notion of p (pre-configured)-cycles has been found to be very efficient in terms of spare capacity and very fast in terms of restoration speed [6–10]. Many studies related to p-cycles assume that wavelength conversion is available at every node without any bound [11, 12]. However, this makes the network very costly. We have previously analyzed WDM networks providing p-cycle protection without wavelength converters for hamiltonian networks [13]. We now investigate p-cycle-based protection in WDM networks without wavelength converters for non-hamiltonian networks, so that the cost of converters can be avoided, although some other resources will be required.

The paper is arranged as follows. Section 2 provides the basics of p-cycle protection. Section 3 gives the conventional approach. In Sect. 4, we introduce our work. Section 5 gives the results, followed by the conclusion in Sect. 6.

2 Basics of p-Cycles

p-cycle stands for pre-configured protection cycle [2, 3, 7, 8]. These cycles are formed in advance in order to protect paths from failures. They are very advantageous because they offer protection as fast as ring protection, since only switching from the failed span onto the p-cycle is required, while also possessing good efficiency, just like mesh protection. Protection is provided both for links that lie on the cycle, known as on-cycle link protection, and for links that are not on the cycle but whose end nodes are part of the cycle, known as straddling-link protection [14–16].

When we set up a WDM network with p-cycles, the working paths are set up in accordance with the traffic requirement. These working paths are formed in the working capacity, which is reserved for them; the remaining capacity is called the spare capacity, and the p-cycles are formed in it. From all the p-cycles formed, a set of p-cycles is chosen efficiently such that all the links present in the WDM network are protected. In other words, the minimum spare capacity is determined by selecting those p-cycles that can protect the working capacity on all the links. The ratio of the spare capacity to the working capacity is known as the capacity efficiency [3, 6, 8].

There are various types of p-cycles, depending on the associated factors. Two of its types are as follows.

(i) Link p-cycle: It protects the working capacity of a link. As indicated in Fig. 1, a network topology with 5 nodes and 6 spans protects its links with the p-cycle ABCDE.
(ii) Node-encircling p-cycle: It protects a node that fails, by forming a cycle enclosing the protected node. As indicated in Fig. 2, a network topology with 5 nodes and 8 spans protects node E with the p-cycle ABCD.


Fig. 1 5 nodes 6 spans network topology with single link p-cycle

Fig. 2 5 nodes 8 spans network topology with node-encircling p-cycle ABCD

The on-cycle protection provided by a p-cycle is indicated in Fig. 3. If the link ED, which lies on the cycle ABCDE, fails, the p-cycle protects the failed link ED through the path EABCD. The straddling-link protection provided by p-cycles is indicated in Fig. 4. If the link AC, which straddles the cycle ABCDE, fails, the p-cycle protects the failed link AC through two paths: one through ABC and the other through AEDC, thereby increasing efficiency. Hence, p-cycles offer single-path protection to the on-cycle links and two alternate protection paths to the straddling links. p-cycles are thus quite simple and are found to be a very attractive solution for protecting the network.

Fig. 3 On-cycle link failure ED with p-cycle path EABCD


Fig. 4 Straddling-link failure AC with two alternate p-cycle paths ABC and AEDC

3 Conventional Approach

Studies of the conventional approach consider network topologies in which working paths and protection cycle paths are routed using 100% wavelength conversion at every node [2, 11, 14]. For an optical mesh network with the following sets, parameters and variables, the following integer linear program (ILP) can be used to achieve the minimum spare capacity [16].

Sets
S   set of spans, indexed by j
P   set of p-cycles, indexed by p
L   set of wavelengths, indexed by l

Parameters
c_j   cost of span j (assumed to be 1)
w_j   working capacity on span j
x_pj   1 if p-cycle p protects span j as an on-cycle span, 2 if as a straddling span, and 0 otherwise
δ_pj   1 if p-cycle p passes through span j, and 0 otherwise

Variables
n_p   required number of unit-capacity copies of p-cycle p
s_j   spare capacity required on span j

Objective

$$\min \sum_{\forall j \in S} c_j s_j \tag{1}$$


Subject to

$$w_j \le \sum_{\forall p \in P} x_{pj}\, n_p \quad \forall j \in S \tag{2}$$

$$s_j = \sum_{\forall p \in P} \delta_{pj}\, n_p \quad \forall j \in S \tag{3}$$

$$n_p \ge 0 \quad \forall p \in P \tag{4}$$

Equation (1) represents the objective of minimizing the total cost of the spare capacity required to form p-cycles. Equation (2) ensures that all the working capacity of every span gets 100% protection against a single failure. Equation (3) gives the spare capacity required on every span to form the p-cycles. Equation (4) ensures that the number of unit-capacity copies of each p-cycle is a non-negative integer.
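To make the formulation concrete, the following is a minimal sketch of Eqs. (1)–(4) in the PuLP modelling library; the 5-node topology and single candidate cycle are illustrative stand-ins, not the test networks used later in the paper.

```python
import pulp

# Tiny illustrative instance of the spare-capacity ILP of Eqs. (1)-(4).
spans = ["AB", "BC", "CD", "DE", "EA", "AC"]   # set S
w = {j: 1 for j in spans}                       # working capacity w_j
cost = {j: 1 for j in spans}                    # span cost c_j = 1

# Candidate p-cycles with x_pj (1 on-cycle, 2 straddling, 0 none) and
# delta_pj (1 if the cycle traverses span j).
cycles = {
    "ABCDE": {
        "x": {"AB": 1, "BC": 1, "CD": 1, "DE": 1, "EA": 1, "AC": 2},
        "delta": {"AB": 1, "BC": 1, "CD": 1, "DE": 1, "EA": 1, "AC": 0},
    },
}

prob = pulp.LpProblem("p_cycle_spare_capacity", pulp.LpMinimize)
# Eq. (4) is enforced through the variable bounds and integrality.
n = {p: pulp.LpVariable(f"n_{p}", lowBound=0, cat="Integer") for p in cycles}
s = {j: pulp.LpVariable(f"s_{j}", lowBound=0) for j in spans}

prob += pulp.lpSum(cost[j] * s[j] for j in spans)                          # Eq. (1)
for j in spans:
    prob += pulp.lpSum(cycles[p]["x"][j] * n[p] for p in cycles) >= w[j]   # Eq. (2)
    prob += s[j] == pulp.lpSum(cycles[p]["delta"][j] * n[p] for p in cycles)  # Eq. (3)

prob.solve()
print({p: n[p].value() for p in cycles}, {j: s[j].value() for j in spans})
```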

3.1 Problems with the Conventional Approach

Much work has been done on protecting WDM networks with p-cycles. Previous work assumed that 100% wavelength conversion is available at every node. In the worst-case scenario, 100% wavelength conversion requires many converters. Consider a network topology having n nodes, each with degree d, and using w wavelengths. The number of converters required at each node will be d · w; consequently, the total number of converters required for the network topology will be n · d · w. For example, in a network topology with 10 nodes, each of degree 4, using 20 wavelengths, we have n = 10, d = 4 and w = 20, requiring 10 × 4 × 20 = 800 converters. This is an upper limit: in actual practice, some paths and p-cycles will be set up without using wavelength converters to the extent possible. Thus, employing so many converters is wasteful, as not all of them may be required, and it adds unnecessary hardware cost. Besides the expense, wavelength converters add complexity and also degrade the signal performance. These are the problems with the conventional approach. We therefore explore an optical mesh network with p-cycle-based protection but without wavelength converters, as this leads to a reduced network cost.

4 Our Work

Consider a network topology with a unit traffic matrix using a single fiber to establish working paths for every link in each direction. To protect these links using a single fiber only, it would be necessary to establish p-cycles in the spare capacity. These p-cycles cannot be formed on the same wavelengths that are occupied by the working paths; hence, wavelength converters would be necessary.


In our approach, we model the network with separate fibers for protection, eliminating the need for wavelength converters. One fiber is used for establishing the working paths, and other fibers are used for the protection paths. p-cycles are formed on these protection fibers, making them fiber-based p-cycles. Whenever a failure occurs, the whole fiber on the failed link is switched over to the p-cycle formed on the protection fibers. This does not require switching at the level of individual working wavelengths: the working fiber as a whole is switched over to the p-cycles formed on the protection-fiber capacity, independent of the wavelengths used in the working fiber. If feasible, a hamiltonian cycle is the default solution [13]. A hamiltonian cycle is a cycle in the network which passes through all the nodes exactly once [16], but a network does not always have one. Non-hamiltonian networks are those in which we cannot find a single cycle passing through all the nodes exactly once. To protect such networks, multiple p-cycles are required to impart 100% restoration against a single failure. These multiple p-cycles can be formed on multiple fibers; hence, multiple-fiber protection is required for these networks.

We consider a non-hamiltonian network with working paths established without wavelength converters. The links are protected by multiple fibers used for forming the p-cycles. On-cycle protection works as usual, with a single path. For straddling protection, however, only one of the two alternate paths is used. This happens because we protect the working connections on one fiber only, and the working paths are established without converters. So, if a wavelength path is assigned in one direction, the corresponding one of the two alternate straddling protection paths is used; for the wavelength path assigned in the other direction, the other straddling protection path is used. For example, as shown in Fig. 5, if a wavelength is assigned to the straddling link from A to C, the p-cycle path will be ABC. Similarly, if a wavelength is assigned to the straddling link from C to A, the p-cycle path will be CDEA, as shown in Fig. 6.

Fig. 5 p-cycle protection path ABC for straddling-link failure AC with wavelength assigned A to C


Fig. 6 p-cycle protection path CDEA for straddling-link failure AC with wavelength assigned C to A

We have formulated an ILP to minimize the fiber length required without using wavelength converters.

Parameters
l_j   length metric of span j
δ_pj   1 if p-cycle p passes through span j, and 0 otherwise
x_pj   1 if p-cycle p protects span j, whether as an on-cycle or a straddling link

Variables
n_p   number of unit-fiber copies of p-cycle p required

Objective

$$\min \sum_{\forall p \in P} n_p \sum_{\forall j \in S} \delta_{pj}\, l_j \tag{5}$$

Subject to

$$1 \le \sum_{\forall p \in P} x_{pj}\, n_p \quad \forall j \in S \tag{6}$$

$$n_p \ge 0 \quad \forall p \in P \tag{7}$$

Equation (5) states the objective of minimizing the total spare capacity, in terms of the minimum fiber length required to form the fiber p-cycles. Equation (6) ensures that every span is 100% protected by at least one fiber-based p-cycle. Equation (7) ensures that the number of unit-fiber copies of each p-cycle is a non-negative integer. Also, since our fiber-based protection p-cycles are independent of the working wavelengths, there is no need to specify the working capacity on the spans.
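A hedged sketch of the modified formulation of Eqs. (5)–(7), again in PuLP; the span lengths and candidate cycle set are invented for illustration.

```python
import pulp

# Illustrative instance of the fibre-length ILP of Eqs. (5)-(7); span
# lengths l_j (km) and the candidate cycle are hypothetical.
l = {"AB": 40, "BC": 55, "CD": 30, "DE": 60, "EA": 45, "AC": 70}
cycles = {
    "ABCDE": {"x": {"AB": 1, "BC": 1, "CD": 1, "DE": 1, "EA": 1, "AC": 1},
              "delta": {"AB": 1, "BC": 1, "CD": 1, "DE": 1, "EA": 1, "AC": 0}},
}

prob = pulp.LpProblem("fiber_p_cycle_length", pulp.LpMinimize)
n = {p: pulp.LpVariable(f"n_{p}", lowBound=0, cat="Integer") for p in cycles}  # Eq. (7)

# Eq. (5): minimise total route-km of protection fibre.
prob += pulp.lpSum(n[p] * sum(c["delta"][j] * l[j] for j in l)
                   for p, c in cycles.items())
# Eq. (6): each span covered by at least one fibre-based p-cycle
# (x_pj = 1 for on-cycle and straddling spans alike in this formulation).
for j in l:
    prob += pulp.lpSum(c["x"][j] * n[p] for p, c in cycles.items()) >= 1

prob.solve()
print(pulp.value(prob.objective), {p: n[p].value() for p in cycles})
```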


5 Results

We have considered two non-hamiltonian test network topologies: net 1 with 12 nodes and 18 spans, and net 2 with 11 nodes and 15 spans, as shown in Figs. 7 and 8, respectively. The link length in kilometers (km) is indicated above each link. We considered a unit traffic matrix, with routing done using the shortest-path algorithm; p-cycles are formed using a breadth-first search algorithm (a small enumeration sketch is given after Table 1), and the ILPs are solved with ILOG CPLEX 9.

We compared the two test network topologies for the conventional case and our case. In the conventional case, routing is done using 100% wavelength conversion capability at every node. We estimated the number of converters for the conventional case in the worst-case scenario, assuming 20 wavelengths; the total number of converters in a network topology is then the sum of the converters at all nodes. In the proposed method, routing is done without wavelength conversion, thus saving converter cost. We estimated the route-km of fiber length required for the proposed approach as an alternative to wavelength converters. Extra fibers are now required to provide protection, and this fiber cost has to be invested, but it is much lower than the cost of the wavelength converters.

As shown in Figs. 7 and 8, p-cycles are formed passing through the links indicated by dashed lines, on which the extra fibers will be needed. Each cycle is formed on a separate fiber; thus, for multiple cycles, multiple fibers are required. For net 1 (Fig. 7), four extra fibers will be needed, on which the p-cycles will be

Fig. 7 Net 1 with 12 nodes and 18 spans with link length in km, showing multiple p-cycles


Fig. 8 Net 2 with 11 nodes and 14 spans with link length in km, showing multiple p-cycles

formed to provide the protection. These fibers are set up on the cycles 1-2-9-8-1, 2-3-4-10-2, 4-5-6-11-4 and 6-7-8-12-6. Similarly, for net 2 (Fig. 8), two extra fibers will be needed, on which p-cycles are formed to provide the protection; these fibers are set up on the cycles 1-2-3-4-5-6-7-8-9-11-1 and 3-4-10-11-12-3. For the conventional case, no extra fiber cost needs to be invested, as the same fiber is used for working as well as protection.

The comparison is made in terms of the number of converters required and the route-km of fiber length required, considering both test networks for the conventional case and our case. The conventional case requires a greater number of wavelength converters but does not need extra protection fiber, as the same fiber is used for working as well as protection. In our case, route-km of protection fiber length is additional, but the cost of wavelength converters is removed. As converters are costly devices, they greatly increase the network cost in the conventional case; we have avoided the use of converters with an investment in extra fibers for protection. Table 1 shows the comparison for net 1 with 12 nodes and 18 spans. The number of wavelength converters required for the conventional case is 720, while our approach requires no converters. On the other hand, the route-km of fiber length for our case is 280, which is nil for the conventional case.

Table 1 Comparison of net 1 with 12 nodes and 18 spans for the conventional case and our approach

Parameter                                           Conventional case   Our approach
No. of wavelength converters required               720                 NIL
Route-km of fiber length required for protection    NIL                 280
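For the candidate-cycle step mentioned above, the paper uses a breadth-first search; the hedged sketch below uses networkx's cycle_basis on a small hypothetical topology as a simple stand-in for that enumeration, so the topology, lengths and thresholds are illustrative only.

```python
import networkx as nx

# Illustrative candidate-cycle generation; cycle_basis stands in for the
# breadth-first-search cycle formation used in the paper.
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 40), (2, 3, 55), (3, 4, 30), (4, 5, 60), (5, 1, 45), (1, 3, 70),
])  # hypothetical links, with lengths in km

candidates = nx.cycle_basis(G)          # one cycle per independent loop
for cyc in candidates:
    # Sum the link lengths around the closed cycle.
    route_km = sum(G[u][v]["weight"] for u, v in zip(cyc, cyc[1:] + cyc[:1]))
    print(cyc, "route-km:", route_km)

# Unit-traffic working paths routed on shortest paths, as in the paper:
paths = dict(nx.all_pairs_dijkstra_path(G, weight="weight"))
```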


Table 2 Comparison of net 2 with 11 nodes and 14 spans for the conventional case and our approach

Parameter                                           Conventional case   Our approach
Number of wavelength converters required            600                 NIL
Route-km of fiber length required for protection    NIL                 249

Table 2 shows the comparison for net 2 with 11 nodes and 15 spans. The number of wavelength converters required for the conventional case is 600, while our approach requires no converters. On the other hand, the route-km of fiber length for our case is 249, which is nil for the conventional case.

6 Conclusion

We have developed an approach for protecting WDM networks with the very effective method of p-cycles, without using wavelength converters. As no wavelength converters are used, the network expense is reduced in terms of hardware cost, and the signal performance does not deteriorate, since no conversion is performed. We have shown that non-hamiltonian networks can be efficiently protected by multiple p-cycles formed on multiple fibers. With our approach, the hardware cost of converters is avoided at the expense of additional fiber length.

References

1. Asthana R, Singh YN (2004) Protection and restoration in optical networks. IEEE J Res 50(5):319–329
2. Asthana R, Garg T, Singh YN (2004) Critical span protection with pre-configured cycles. In: Proceedings of the international conference on photonics, Cochin, India
3. Schupke DA (2006) Analysis of p-cycle capacity in WDM networks. Photon Network Commun 12(1):41–51
4. Szigeti J, Cinkler T (2013) Evaluation and estimation of the availability of p-cycle protected connections. Telecommun Syst 52(2):767–782
5. Grover WD, Stamatelakis D (1998) Cycle-oriented distributed preconfiguration: ring-like speed with mesh-like capacity for self-planning network restoration. In: IEEE international conference on communications (ICC'98), vol 1. IEEE
6. Asthana R, Singh YN (2008) Distributed protocol for removal of loop backs and optimum allocation of p-cycles to minimize the restored path lengths. IEEE J Lightwave Technol 26(5):616–628
7. Jaiswal DC, Asthana R (2018) Power-efficient p-cycle protection with power conscious routing in elastic optical networks. In: International conference on current trends towards converging technologies (ICCTCT). IEEE, Coimbatore, pp 1–6
8. Eiger MI, Luss H, Shallcross DF (2012) Network restoration under dual failures using path-protecting preconfigured cycles. Telecommun Syst 49(3):271–286
9. Zhang Y, Zhang Y, Shen G (2017) Extending FIPP p-cycles to protect straddling paths for dual failure network protection. In: Asia communications and photonics conference (ACP), Guangzhou, China, pp 1–3


10. Schupke DA, Gruber CG, Autenrieth A (2002) Optimal configuration of p-cycles in WDM networks. In: Proceedings of IEEE international conference on communications (ICC), vol 5. IEEE, New York, pp 2761–2765
11. Asthana R, Singh YN (2007) Second phase reconfiguration of restored path for removal of loop back in p-cycle protection. IEEE Commun Lett 11(2):201–203
12. Gupta V, Asthana R, Singh YN (2020) p-cycle protection without wavelength converters. In: International conference on inventive computational technologies (ICCMC), 26–28 February, Coimbatore, India
13. Asthana R, Singh YN (2006) Removal of loop back in p-cycle protection: second phase reconfiguration. In: Proceedings of 10th IEEE international conference on communication systems (IEEE ICCS 2006). IEEE, Singapore, pp 1–5
14. Sun Q, Yang Y, Zhou Y (2019) Improved p-cycle capacity optimization algorithm. In: Asia communications and photonics conference (ACP), Chengdu, China, pp M4A-109
15. Grover WD, Stamatelakis D (2000) Bridging the ring-mesh dichotomy with p-cycles. In: Proceedings of DRCN workshop, pp 92–104
16. Girão A, Kittipassorn T, Narayanan B (2019) Long cycles in Hamiltonian graphs. Israel J Math 229:269–285

Chapter 24

A Novel Approach to Multi-authority Attribute-Based Encryption Using Quadratic Residues with Tree Access Policy

Anshita Gupta and Abhimanyu Kumar

Supported by organization x.

A. Gupta (B) · A. Kumar
National Institute of Technology, Uttarakhand, India
e-mail: [email protected]; [email protected]

1 Introduction

Attribute-based encryption (ABE) [14] has become an interesting research area owing to its high efficiency and compatibility with existing computing paradigms. It is a type of public-key cryptography [16] that works by attaching keys to the attributes of the user. In an ABE scheme, the ciphertext and the private keys depend on attributes, and a client can decrypt a ciphertext only when his set of attributes matches the attributes of the ciphertext. The significant security issue for attribute-based encryption is collusion resistance. There are two main types of attribute-based encryption schemes, namely key-policy attribute-based encryption (KP-ABE) [6] and ciphertext-policy attribute-based encryption (CP-ABE) [1].

• KP-ABE: In this scheme, the encryptor characterizes the ciphertext with a set of attributes, while each private key is connected to a specific access structure that determines which ciphertexts the client can decrypt. The policy is specified in the key, while attributes define the ciphertext.
• CP-ABE: The private key is linked to a set of attributes expressed as strings, and an access structure over attributes is specified during encryption.

Even though the ABE system is highly promising, it suffers from two limitations: inefficiency and the non-existence of revocation methods. Revocation is a problem because many different users can have the same attribute; here, the attributes are supposed


to be revoked rather than the users or the keys. Another limitation of the Sahai and Waters model was the presence of a single trusted third party: a user must prove his identity to this party in order to attain a secret key, and the private keys linked to the attributes are channelled through this trusted server. This means that a single third party monitors all the attributes and the users. A new challenge thus came into existence: whether it is possible to have attribute-based encryption with not one server but multiple authorities that work together to provide private keys for every attribute set.

Chase [2] presented a model for a multi-authority ABE scheme (M-ABE). In a simple single-authority ABE, one main server provides a private key by screening all the attributes, but in the multi-authority type, the secret is given in different ways for every user, and there is no communication between the authorities. This type of system has two main properties: every user has a global identifier (GID), and every authority present can verify any user's GID. There is also a central authority: the user gets his private key by just sending his GID to the central authority. Information about the attributes is not stored with the central authority; its only job is to provide the key for the GID. Further, a pseudorandom function (PRF) provides random private keys for every user, ensuring deterministic yet random keys. The authority evaluates the PRF on the GID of a user whenever that user asks for a private key, and uses it in the basic key generation. This secret can then be reconstructed by a user who has sufficient attributes to use the keys. As the outputs of different PRFs differ, the secrets reconstructed by different users also differ (see the sketch below).

Extensions of multi-authority attribute-based encryption include the use of a large-scale attribute universe. The access structures can differ between schemes, and various data structures have been used to make multi-authority schemes more secure or friendlier to use with existing systems. The model presented here uses quadratic residues in order to remove the drawbacks of bilinear pairings.
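Chase's construction hinges on the PRF being deterministic per GID yet unpredictable across users. A minimal sketch of that idea, assuming HMAC-SHA256 as the PRF (the function and variable names here are hypothetical, not from [2]):

```python
import hmac
import hashlib

# Minimal sketch of a PRF-based per-user key derivation, assuming
# HMAC-SHA256 as the PRF.  'authority_seed' stands in for the secret PRF
# key an authority holds.
def prf_for_user(authority_seed: bytes, gid: str) -> bytes:
    """Deterministic yet unpredictable per-user secret F_seed(GID)."""
    return hmac.new(authority_seed, gid.encode(), hashlib.sha256).digest()

seed = b"authority-1-secret"
# The same GID always maps to the same secret, while different GIDs obtain
# unrelated values -- which is what stops users from pooling their keys.
print(prf_for_user(seed, "alice-GID").hex())
print(prf_for_user(seed, "bob-GID").hex())
```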

2 Related Work

Many cryptographers have dealt with attribute-based encryption and removed the limitations of existing algorithms. The first model for multi-authority ABE was presented by Chase [2]. In this model, any polynomial number of authorities can be present, each independently monitoring attributes and providing private keys. The user selects a number d_k for each authority and an attribute set; the receiver can only decrypt if there are at least d_k attributes in the given set for authority k. This can also be applied to a large-scale attribute universe. There is also a description of a variant of this model in which the user can specify the number of attributes for each ciphertext from each authority. This model prevents the collusion that is the problem of single-authority attribute-based encryption. There are two techniques


used for this. First, each client uses a global identifier (GID), which ensures that no user can take another user's identity and that each authority can identify a GID. Second, there is a central authority. This central authority contains the master key and is thus able to decrypt any message, while the attributes of the user remain hidden. The user sends his GID to the central authority, and the central authority sends the setup key back to the user. This algorithm thus had the limitation of reliance on a single authority as well as a single-point-of-failure problem. Another limitation was that the central authority could decrypt any message; moreover, the regular use of the GID could let an authority gather information about the user.

After this, Lin et al. [12] presented the first multi-authority ABE scheme that did not include any central authority. The pseudorandom function is replaced by a polynomial, and they proposed extending their threshold scheme to a new multi-authority system without the central authority. Building on this model, Chase and Chow [3] again gave a model of M-ABE. This model removed the dependency on the central authority (CA), thus preventing the pooling of information; the central authority would just issue the user's private key, not attributes. The main idea used here was Brent Waters' [15] suggestion of using a sum of PRFs. Muller et al. [13] extended the idea of a central trusted authority to distributed attribute-based encryption, in which attributes and their keys are maintained by independent parties. They expressed all possible access policies in DNF form and presented a proof of security against chosen-plaintext attacks. Li et al. [11] then came up with the novel notion of hierarchical attribute-based encryption, combining the secret sharing schemes of ABE with hierarchical identity-based encryption; this showed a major improvement over traditional attribute-based encryption.

These models, however, lacked accountability: when users are anonymous, they can exploit the secrecy by sharing their keys with unauthorized users. So Li et al. [10] devised a way of tracing the identity of the client that leaks a key and thereafter handling trust issues with that client. It adds a policy on the receiver's identity, which can be an identity or wildcards; the policy remains anonymous even though it is in the ciphertext. This reduced the trust placed in both the authorities and the users. Their model reduced the accountability burden on the users and was secure under the DBDH, DLIN and q-DDHI assumptions.

In CP-ABE, the private key of a user is decided by a commonly defined relationship between the attributes of the user and the attributes associated with the ciphertext. For a user with more than one attribute, however, the length of the key depends on the number of attributes, and existing techniques that use reasonably computable decryption policies produce ciphertexts whose size grows at least linearly with the number of attributes. Doshi et al. [4] presented a new model comprising two schemes, with constant-length and variable-length ciphertexts, respectively. The basic approach was to use AND gates over multi-valued attributes. These models had the limitation


that they did not allow authorities to be created independently, so global coordination was essential in every system. Considering all these factors, Lewko et al. [9] presented a model in which any party could become an authority once only an initial set of common references is provided. The advantages were that the relation between the attributes could be arbitrary, the attributes could be taken from any set, and there was no need for a central authority; attributes are tied to the parameters to avoid collusion. Han et al. [7] came up with another variant of M-ABE focussed on removing the CA dependency, with the computation done at the setup phase itself to remove any kind of overhead. Apart from these, many more algorithms have been presented, with one change or another, to build far more secure models of ABE. Here, a proposal for a similar kind of M-ABE is demonstrated with better complexity, considering the inclusion of quadratic residues instead of bilinear pairings.

For a theoretical analysis of the model, certain criteria are used here, which were formally presented by Lee et al. [8] for analyzing an ABE scheme:

1. Data confidentiality: Data is encrypted when it is sent. It should remain unknown to unauthorized parties, even to the cloud; the cloud must contain only the encrypted text.
2. Access control: The access rights should differ for different users. This allows users within the same group to have different access rights.
3. Scalability: An increase in the number of users must not affect the system; it should work efficiently in all scenarios.
4. User accountability: A dishonest user may share his key, which may allow various unauthorized users to go through data that is supposed to remain secret.
5. User revocation: When a user quits the system, all rights must be taken away from that user as soon as possible; the user must not be able to access the data anymore.
6. Collusion resistance: Attributes could be combined in any manner to obtain an access right, so different combinations might decipher the protected text. When different attributes are linked using randomization, such collusion becomes impossible.


Fig. 1 An M-ABE scheme in cloud

3 Motivation

Attribute-based encryption has become a part of major computing systems; cloud computing, in particular, is a well-known application of attribute-based encryption. As distributed cloud computing comes to the stage, as shown in Fig. 1,¹ there is a strong requirement for systems that facilitate decentralized control, especially as the majority of information is kept in the cloud. It is important to reduce the overhead and the dependency on a single server for providing the keys.

4 Proposed Multi-authority Attribute-Based Encryption Scheme

The brief discussion of earlier advances in M-ABE schemes showed that many developments in multi-authority encryption rely on bilinear pairings; a novel approach to multi-authority encryption is presented here based on quadratic residues and a tree-based access policy.

1 https://content.iospress.com/articles/multiagent-and-grid-systems/mgs190304.


4.1 Quadratic Residues

Various ABE schemes are implemented using quadratic residues because the underlying modular exponentiation makes them harder to crack. A quadratic residue [5] can be defined as follows: if an integer a is congruent to some perfect square modulo n, i.e., for some integer x,

x² ≡ a (mod n),

then a is referred to as a quadratic residue modulo n; otherwise, it is referred to as a quadratic non-residue.
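A small illustration of this definition using Euler's criterion (for an odd prime p and gcd(a, p) = 1, a is a quadratic residue modulo p iff a^((p−1)/2) ≡ 1 (mod p)); the modulus 23 is an arbitrary toy choice:

```python
# Euler's criterion: for an odd prime p and gcd(a, p) = 1, a is a QR mod p
# iff a^((p-1)/2) = 1 (mod p).
def is_quadratic_residue(a: int, p: int) -> bool:
    """Check whether a is a quadratic residue modulo the odd prime p."""
    return pow(a, (p - 1) // 2, p) == 1

p = 23
# Enumerate the residues directly as the distinct nonzero squares mod p.
residues = sorted({(x * x) % p for x in range(1, p)})
assert all(is_quadratic_residue(a, p) for a in residues)
print(residues)   # the 11 quadratic residues modulo 23
```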

4.2 Tree-Based Access Policy

Schemes with a tree access policy are considered to have better expressiveness and to be far more flexible than linear access schemes. As shown in Fig. 2, there are n trees, one for each attribute, with roots r₁⁰, …, r_n⁰; the tree with root r_i⁰ has depth d_i. Consider any path (r_i⁰, …, r_i^l) from the root r_i⁰ to an attribute r_i^l at depth l, where the user is provided with the attributes marked by blue nodes. The blue path represents the user's attribute path, and the black paths represent regular paths. The private key is issued for the blue path, which starts with a blue-node attribute set corresponding to the ciphertext.

Fig. 2 Tree access structure


5 The Basic Steps of the Proposed Scheme

1. Global Setup: Consider the implicit security parameter λ = p · q, for prime integers p and q, as the input of the system. The output of this setup is the security parameter, represented as SK.
2. Authority Setup: This setup is run by all the authorities. Each authority A_i produces its public key PK_i, for i = 1, 2, …, n when there are n attribute authorities, and its private key SK_i.
3. Key Generation: This is also run by each authority A_i. Each authority is linked with a user u for providing his private key. The authority A_i takes as input its private key SK_i, a set of attributes A and the global identity GID of the user, and returns the private key of the user as

$$SK_u = \lambda'^{\,(\lambda + 5 - (p + q))/8} \pmod{\lambda} \tag{1}$$

where (λ′, λ) are co-prime. SK_u is a square root of λ′ modulo λ, which the authority can compute as it has knowledge of p and q: if SK_u² ≡ λ′ (mod λ), then λ′ is the QR modulo λ, and if SK_u² ≡ −λ′ (mod λ), then −λ′ is the quadratic residue modulo λ.

4. Encryption: The encryption is done by the user using the public key PK_i of each authority A_i and the access tree T_A. This gives the ciphertext C, whose decryption is only valid when the private key linked with the set A matches the tree of C. Let the product of the prime attributes from the access tree be denoted λ_T. The authority randomly finds m₁ and m₂ such that (m₁, λ) and (m₂, λ) are co-prime, respectively, and the Jacobi symbols satisfy (m₁/λ) = (m₂/λ) = m. Then

$$C_1 = m_1 + \frac{\lambda_T}{m_1} \pmod{\lambda} \tag{2}$$

$$C_2 = m_2 - \frac{\lambda_T}{m_2} \pmod{\lambda} \tag{3}$$

Thus, the ciphertext is obtained as CT = (C₁, C₂).

5. Decryption: This algorithm requires the PK_i of the authority A_i and the private key SK_u of the user for decrypting the ciphertext C:

$$m = \begin{cases} \left(\dfrac{C_1 + 2SK_u}{\lambda}\right), & \text{if } \lambda' \text{ is the QR modulo } \lambda,\\[2mm] \left(\dfrac{C_2 + 2SK_u}{\lambda}\right), & \text{if } -\lambda' \text{ is the QR modulo } \lambda, \end{cases} \tag{4}$$

where (·/λ) denotes the Jacobi symbol.
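Equations (1)–(4) mirror the arithmetic of Cocks-style quadratic-residue encryption. The toy sketch below exercises that underlying mechanics under our reading of the formulas; the primes, the residue a standing in for λ_T, and the ±1 bit encoding are illustrative assumptions, not the authors' actual parameters.

```python
from math import gcd
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0 (standard algorithm)."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# Toy parameters (not secure): primes p, q with p = q = 3 (mod 4).
p, q = 1000000007, 2147483647
lam = p * q                  # the scheme's modulus, lambda = p*q
a = pow(5, 2, lam)           # a public QR modulo lambda (stands in for lambda_T)

# Key generation as in Eq. (1)/(5): SK = a^((lambda + 5 - p - q)/8) mod lambda
# is a square root of a, since a is a QR and p, q = 3 (mod 4).
SK = pow(a, (lam + 5 - p - q) // 8, lam)
assert pow(SK, 2, lam) == a % lam

# Encryption of one bit m in {+1, -1}, mirroring Eqs. (2)-(3): pick t coprime
# to lambda with Jacobi symbol (t/lambda) = m, send C = t + a/t (mod lambda).
def encrypt(m: int) -> int:
    while True:
        t = random.randrange(2, lam)
        if gcd(t, lam) == 1 and jacobi(t, lam) == m:
            return (t + a * pow(t, -1, lam)) % lam

# Decryption, mirroring Eq. (4): m = (C + 2*SK / lambda) as a Jacobi symbol,
# because C + 2*SK = (t + SK)^2 / t (mod lambda).
for m in (+1, -1):
    assert jacobi((encrypt(m) + 2 * SK) % lam, lam) == m
print("round-trip OK")
```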


6 An Illustration of the Proposed Scheme in a Cloud-Based Environment

A well-known application of attribute-based encryption is the cloud environment. The cloud, though extensively used today, still faces major security concerns; in spite of several contributions in this field, it remains a very interesting topic of discussion. The proposed scheme was designed to facilitate reduced space- and time-bounded sharing of data in a general cloud model (Fig. 3).

1. Cloud server: The cloud server is the main virtual server that delivers content to the users. The data stored here is completely encrypted in order to ensure data confidentiality.
2. Certified authority (CA): The certified authority randomly produces a common parameter λ. The GIDs of the different users are stored with the central authority.
3. Key distribution centers (KDCs): The key distribution centers (KDCs) are responsible for providing the keys, like the attribute authorities. The authority setup stage and the key generation stage are run here, as demonstrated in the scheme. Each KDC_i produces its public key PK_i, for i = 1, 2, …, n when there are n KDCs, and its private key SK_i.

Fig. 3 The proposed scheme (Global Setup → Authority Setup → Key Generation → Encryption → Decryption). Source Authors


Fig. 4 A cloud-based model of the proposed scheme. Source Authors

Each KDC is linked with a user u for providing his private key. The center KDC_i takes as input its private key SK_i, a set of attributes A and the global identity GID of the user, and returns the private key of the user as

$$SK_u = \lambda'^{\,(\lambda + 5 - (p + q))/8} \pmod{\lambda} \tag{5}$$

4. Data owner: The data owner is responsible for encrypting the data so that only users who have the appropriate access control can access it using their keys. The data owner stores the data on the server, from where it can be accessed by the authorized users.
5. Data users: These include all the users that are part of the system. Users may belong to different groups, and even a single group can consist of users with different access-control trees. The sharing of keys between users thus becomes impossible, assuming the users are loyal and do not share keys with unauthorized users (Fig. 4).

7 Analysis and Limitations

The security of the scheme rests on the hardness of quadratic-residue problems. However, just for sending x bits using a modulus of 1024 bits, 16K bits of keys are required. A major limitation of this scheme is thus the increased overhead, and hence the requirement of more bandwidth in order to send the multiple keys.


Table 1 The prospects of different ABE schemes against criteria 1–6 of Sect. 2

Scheme     1       2       3    4       5       6
ABE        No      Yes     No   No      No      Yes
KP-ABE     Yes     Yes     No   No      Yes     Yes
CP-ABE     Yes     Yes     No   Yes     Yes     Yes
M-ABE      Better  Better  Yes  Better  Better  Yes
Proposed   Better  Best    Yes  Better  Yes     Yes

For sending just an x-bit message, a key size of 2x · log λ bits is required. Comparing time complexity, however, we find the scheme far better than existing schemes; the space complexity can be tolerated in order to obtain a much more time-efficient algorithm. The major way of cracking this algorithm is to factorize λ, so the expectation is that λ cannot be factorized. The approach also depends on the attributes of the user; regarding the quadratic-residue issue, if the user's attributes are not perfect squares, the symbol (m/λ) cannot be determined either, and the probability of a squared attribute set is very low. A brief comparison with the other models discussed, on the basis of the criteria presented by Lee et al. [8], is given in Table 1. Our major concern was to reduce the time and space complexity that plagued earlier models based on bilinear pairings; the proposal succeeds in reducing the time taken to build the pairings, and quadratic residues can thus be a major solution to the dilemma of bilinear pairings.

8 Conclusion

Considering all the developments in the field of attribute-based encryption schemes, a novel proposal for multi-authority attribute-based encryption is laid out here. The usage of quadratic residues reduces the overhead of bilinear pairings, and the tree-based access policy facilitates a broader level of access control while hiding the attributes of the user. Even with the limitations of the proposed scheme, the basic idea of a more efficient algorithm is presented, which can be used for building more secure distributed or decentralized systems.

References

1. Bethencourt J, Sahai A, Waters B (2007) Ciphertext-policy attribute-based encryption. In: 2007 IEEE symposium on security and privacy (SP'07). IEEE, pp 321–334


2. Chase M (2007) Multi-authority attribute based encryption. In: Theory of cryptography conference. Springer, pp 515–534
3. Chase M, Chow SS (2009) Improving privacy and security in multi-authority attribute-based encryption. In: Proceedings of the 16th ACM conference on computer and communications security, pp 121–130
4. Doshi N, Jinwala D (2011) Constant ciphertext length in multi-authority ciphertext policy attribute based encryption. In: 2011 2nd international conference on computer and communication technology (ICCCT-2011). IEEE, pp 451–456
5. Gauss CF (1966) Disquisitiones arithmeticae, vol 157. Yale University Press
6. Goyal V, Pandey O, Sahai A, Waters B (2006) Attribute-based encryption for fine-grained access control of encrypted data. In: Proceedings of the 13th ACM conference on computer and communications security, pp 89–98
7. Han J, Susilo W, Mu Y, Yan J (2012) Privacy-preserving decentralized key-policy attribute-based encryption. IEEE Trans Parallel Distrib Syst 23(11):2150–2162
8. Lee CC, Chung PS, Hwang MS (2013) A survey on attribute-based encryption schemes of access control in cloud environments. IJ Netw Secur 15(4):231–240
9. Lewko A, Waters B (2011) Decentralizing attribute-based encryption. In: Annual international conference on the theory and applications of cryptographic techniques. Springer, pp 568–588
10. Li J, Huang Q, Chen X, Chow SS, Wong DS, Xie D (2011) Multi-authority ciphertext-policy attribute-based encryption with accountability. In: Proceedings of the 6th ACM symposium on information, computer and communications security, pp 386–390
11. Li J, Wang Q, Wang C, Ren K (2011) Enhancing attribute-based encryption with attribute hierarchy. Mob Netw Appl 16(5):553–561
12. Lin H, Cao Z, Liang X, Shao J (2008) Secure threshold multi authority attribute based encryption without a central authority. In: International conference on cryptology in India. Springer, pp 426–436
13. Müller S, Katzenbeisser S, Eckert C (2008) Distributed attribute-based encryption. In: International conference on information security and cryptology. Springer, pp 20–36
14. Sahai A, Waters B (2004) Fuzzy identity-based encryption. Cryptology ePrint Archive, Report 2004/086
15. Shacham H, Waters B (2008) Compact proofs of retrievability. In: International conference on the theory and application of cryptology and information security. Springer, pp 90–107
16. Stallings W (2006) Cryptography and network security, 4/E. Pearson Education India

Chapter 25

An Improvement in Dense Field Copy-Move Image Forgery Detection

Harsimran Kaur, Sunil Agrawal, and Anaahat Dhindsa

H. Kaur (B) · S. Agrawal · A. Dhindsa
UIET, Panjab University, Chandigarh, India
e-mail: [email protected]

1 Introduction

1.1 Types of Image Forgeries

Tampering with digital images has become quite easy in recent years with sophisticated and easy-to-use software like Photoshop, Gimp [1], etc. While tampering, transformations like smoothing, blurring, etc., are usually applied in the post-processing stage so that no visible clues of the manipulated regions are left [2]. As digital images are used as evidence in many places, such as courtrooms and news, research on digital image forensics has grown. The purpose here is to develop detection techniques for forged images, as the intention behind image forgeries is mostly malicious, making them important to detect [3].

Image forgeries can be broadly divided into three main categories: image retouching, image splicing, and image cloning [4]. Image retouching is usually done to enhance or reduce the effect of certain features to make the image look more attractive, as used in magazines and on hoardings. Retouching is usually less harmful, whereas image splicing and image cloning can pose a serious threat [5]. In image splicing, the resultant image is formed by clipping regions from two or more images and pasting them into the resultant image [6]. Since two different images form the resultant image, variations in properties like color temperature, noise and illumination changes can be used to detect the spliced region; image cloning, also referred to as copy-move forgery, is more difficult to detect. The reason behind this


is that in copy-move forgery, some part of the target image is copied and pasted at some other location in the same image [6]. Thus, most of the properties of the two regions are similar, making this type difficult to detect [6]. In Figs. 1, 2 and 3, the three different types of image forgeries are illustrated. Figure 1 shows an example of image retouching; it can be seen that this has not changed the overall meaning of the image, whereas in Figs. 2 and 3, the information represented in the image has changed.

Fig. 1 Image retouching

Fig. 2 Image splicing


Fig. 3 Image cloning

1.2 Copy-Move Image Forgery Detection

Copy-move forgery detection (CMFD) approaches broadly fall into two main categories: active and passive [7]. Digital watermarks [8] and digital signatures [9] come under the category of active detection approaches. A major drawback of active detection approaches is that a watermark or signature must be embedded during the image acquisition stage or before transmission. To detect tampering, the extracted watermark or signature of the image is compared to the original embedded one; if it matches, the image is considered authentic. Active approaches fail to detect tampering if no watermark or signature was embedded. So, most research is concentrated on the passive detection approaches discussed in the literature below, which do not require any prior information.

2 Related Work

So far, many passive approaches have been developed for copy-move image tampering detection. Most of them consist of four to five stages (see Fig. 4) [6].

Fig. 4 Common methodology used for copy-move image forgery detection: input → pre-processing → key-point detection → feature extraction → feature matching → post-processing → result


Key-point detection is an optional stage in the CMFD pipeline. Broadly, CMFD methods can be classified into four types: (1) block-based, (2) key-point-based, (3) hybrid and (4) dense-field. These methods differ in how the features are extracted from an image.

Pre-processing is done to reduce the dimensionality of the input image. This helps reduce the computational complexity of the methods used in later stages or increase detection accuracy. Yang et al. [10], Fadl et al. [11], Lee et al. [12] and Nithiya et al. [13] used RGB-to-gray conversion in the pre-processing stage, while Alahmadi et al. [14] and Ustubioglu et al. [15] used RGB-to-YCbCr conversion to improve detection accuracy. In addition, in block-based feature extraction techniques, Hilal et al. [16], Cao et al. [17] and Ustubioglu et al. [15] segmented the image into overlapping or non-overlapping blocks of fixed or variable size. Pun et al. [18], Shahroudnejad et al. [19] and Huang et al. [20] used the simple linear iterative clustering (SLIC) method for meaningful segmentation of the image.

In key-point detection techniques, interest points, which are usually high-entropy points, are first detected in the target digital image. Unique features are extracted at these key-points using their neighborhoods, but if key-points with very similar properties are detected, the number of false matches increases significantly. Very few key-point detection techniques show good results for the CMFD process. The Harris corner detector used by Sanchez et al. [21], based on gradients, is robust to rotation but not invariant to scale, while the Laplacian of Gaussian used by Das et al. [22], based on the Gaussian filter, shows excellent results against scaling but is computationally expensive and cannot be used in real-time applications. Therefore, the difference-of-Gaussian filter is used by Emam et al. [23] in the SIFT key-point detector and descriptor to reduce the computational complexity. Another popular key-point detector, speeded-up robust features (SURF) [24], based on the Hessian matrix, is more reliable than the Harris detector in terms of repeatability and scaling; repeatability determines the reliability of a detector, which should produce the same interest points under different viewing conditions. SURF is also faster than SIFT, but its accuracy is lower, especially when the forged region is rotated and then pasted.

In the feature extraction stage, feature descriptors are produced at the pixels or key-points, or from the blocks or segments of the pre-processing stage. Generating descriptors for CMFD is quite difficult, as they should be discriminative enough to differentiate pixels in a close neighborhood while, at the same time, descriptors of copy-moved regions should be similar enough that forged regions can be detected; this stage therefore strongly affects detection accuracy. Fridrich et al. [25] divided the image into equal blocks, used the discrete cosine transform (DCT) to generate coefficients, and applied lexicographic sorting to speed up feature matching by moving similar feature vectors close together; Euclidean distance was then used in the matching phase. Although DCT copes well with JPEG compression, it cannot deal effectively with rotation and scaling of the forged region. Ustubioglu et al. [15] used DCT to extract features and introduced a method of automatic threshold determination based on the compression history of the image. Popescu [26] divided the image into fixed-size blocks, used DWT to extract coefficients at multiple resolutions, reduced the feature vector to 32 elements with principal component analysis (PCA), and again used lexicographic sorting to further speed up the matching stage. Karsh et al. [27] combined DWT and singular value decomposition (SVD) to reduce the length of the feature descriptor. Mahmood et al. [28] combined DCT and SWT to reduce false positives, increase detection accuracy, and make detection more robust against JPEG compression, color reduction, illumination change and blurring; the features are extracted through SWT, and their dimensionality is reduced by applying DCT. A major drawback of block-based techniques is that they fail to give satisfactory results when transformations like rotation and scaling are applied to the copied-and-moved region.

SIFT and SURF are the most widely used key-point-based methods. The scale-invariant feature transform (SIFT), initially introduced by Lowe [29], is least affected by scaling and rotation, making it the most prominent feature extractor for key-point-based techniques. Alberry et al. [30], Yadav et al. [31], Das et al. [22], Muzaffer et al. [32], Jin et al. [33], Shahroudnejad et al. [19], Li et al. [34], Huang et al. [20], and many more have used SIFT and its variations for feature extraction. SIFT produces few key-points in flat regions; Yang et al. [10] proposed a modified SIFT algorithm using a key-point distribution criterion to disperse the key-points evenly throughout the image, improving forgery detection in flat regions. Although SIFT has proved quite robust, it yields a 128-element feature vector, making it complex and unsuitable for real-time systems. SIFT and PCA [35] have been combined to produce a descriptor with dimensionality reduced to 32 elements, which reduces the time spent in the feature-vector matching stage but at the cost of reduced distinctiveness and a longer feature computation time than SIFT; similarly, in another approach, SIFT was combined with LPP [36] for dimensionality reduction. A SURF descriptor can be 128, 64 or 32 elements long: the shorter the descriptor, the lower the computational complexity, but at the cost of reduced accuracy.

In hybrid CMFD methods, both block-based and key-point-based techniques are utilized to improve the results or reduce the complexity. Hashmi et al. [37] used DWT on forged images to decompose them and extract the LL sub-band (which contains most of the information), with SIFT then used to describe the features; this combination is fast and provides robust results. Anand et al. [38] used the dyadic wavelet transform (DyWT) on forged images to decompose them and extract the LL sub-band, with SIFT used to describe the features; this approach has proven more accurate than DWT-SIFT.

Dense-field techniques, proposed by Cozzolino et al. [39], compute the features at each pixel in the image, making their performance superior to block-based and key-point-based techniques but at the cost of higher complexity, since features are generated at every pixel. The number of points to be matched therefore increases significantly, as each pixel takes part in feature extraction and feature matching. In an effort to reduce the computational complexity of dense-field techniques, feature vectors of lower dimension are used.
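As a concrete illustration of the key-point branch of this taxonomy, the hedged sketch below matches SIFT descriptors of an image against themselves with OpenCV to surface copy-move candidates; the file name, ratio threshold and distance cut-off are illustrative placeholders rather than the method of any work cited above.

```python
import cv2
import numpy as np

# Basic key-point CMFD sketch: SIFT key-points are matched against *other*
# key-points of the same image, and mutually similar pairs are flagged as
# copy-move candidates.  "forged.png" is a placeholder file name.
img = cv2.imread("forged.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Match each descriptor to its nearest neighbours within the same image;
# the closest hit is the key-point itself, so inspect the next candidates.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(descriptors, descriptors, k=3)

pairs = []
for m in matches:
    # m[0] matches the key-point to itself; apply a Lowe-style ratio test
    # on the next two candidates to keep only distinctive duplicates.
    if len(m) >= 3 and m[1].distance < 0.5 * m[2].distance:
        p1 = keypoints[m[1].queryIdx].pt
        p2 = keypoints[m[1].trainIdx].pt
        # Ignore near-coincident points (same region, not a cloned one).
        if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) > 10:
            pairs.append((p1, p2))

print(f"{len(pairs)} candidate copy-move matches")
```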
Popescu [26] divided the image into fixed-size blocks and used DWT to extract
the coefficients at multiple resolutions; principal component analysis (PCA) was used to reduce the dimensionality of the feature vector to 32 elements, and finally, lexicographic sorting was used to further improve the speed of the matching stage. Karsh et al. [27] combined DWT and singular value decomposition (SVD) to reduce the length of the feature descriptor. Mahmood et al. [28] combined DCT and SWT to reduce false positives, increase detection accuracy, and make the detection more robust against JPEG compression, color reduction, illumination change, and blurring; the features are extracted through SWT and their dimensionality is reduced by applying DCT. A major drawback of block-based techniques is that they fail to give satisfactory results when transformations like rotation and scaling are applied to the copied-and-moved region.

SIFT and SURF are the most widely used key-point-based methods. The scale-invariant feature transform (SIFT), initially introduced by Lowe [29], is least affected by scaling and rotation, making it the most prominent feature extractor for key-point-based techniques. Alberry et al. [30], Yadav et al. [31], Das et al. [22], Muzaffer et al. [32], Jin et al. [33], Shahroudnejad et al. [19], Li et al. [34], Huang et al. [20], and many more have used SIFT and its variations for feature extraction. SIFT produces fewer key-points in flat regions; Yang et al. [10] therefore proposed a modified SIFT algorithm that uses a key-point distribution criterion to disperse the key-points evenly throughout the image, improving the detection of forgeries in flat regions. Although SIFT has proved quite robust, it yields a 128-element feature vector, making it complex and unsuited to real-time systems. SIFT and PCA [35] have been combined to produce a descriptor with dimensionality reduced to 32 elements, which reduces the time spent in the feature-vector matching stage, but at the cost of reduced distinctiveness and a longer feature-vector computation time than SIFT. Similarly, in another approach, SIFT was combined with LPP [36] for dimensionality reduction. A SURF descriptor can be 128, 64, or 32 elements long; the shorter the descriptor, the lower the computational complexity, but at the cost of reduced accuracy.

In hybrid methods of CMFD, both block-based and key-point-based techniques are utilized to improve the results or reduce the complexity. Hashmi et al. [37] applied DWT to forged images to decompose them and extract the LL part of the image (which contains most of the information), and SIFT was then used to describe the features; this combination is fast and provides robust results. Anand et al. [38] used the dyadic wavelet transform (DyDWT) to decompose forged images and extract the LL part, with SIFT used to describe the features; this approach has proven more accurate than DWT-SIFT.

Dense-field techniques, proposed by Cozzolino et al. [39], compute the features at every pixel in the image, making their performance superior to block-based and key-point-based techniques, but at the cost of more complexity, since the number of points to be matched increases significantly when every pixel takes part in feature extraction and feature matching. In an effort to reduce the computational complexity of dense-field techniques, lower-dimensional feature vectors are used.
Many fast searching techniques, such as lexicographic sorting [25], kd-trees [23], and locality-sensitive hashing, have been used with key-point and block-based methods, but they showed unsatisfactory results in
terms of computational complexity or robustness. The patch-match algorithm, based on an approximate nearest-neighbor search, has instead been used to match the features efficiently in less time [40].

3 Methodology

The basic methodology used in this paper consists of five major steps (see Fig. 4). Below, we discuss each of these steps in detail.

3.1 Preprocessing

In this first step, the input RGB image is converted into a grayscale image to reduce its dimensionality. In our proposed work, dilation, a nonlinear operation, is performed on the grayscale image to expand regions and fill small gaps in boundaries. After dilation, closing is performed, which further helps fill small holes that may be caused by noise. A 3 × 3 kernel with every weight equal to 1 is used in these operations. Finally, an unsharp masking filter is applied to reduce the slight smoothening effect caused by both dilation and closing; this filter subtracts a blurred component of the image from the image itself, resulting in a sharpened image.
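A minimal sketch of this preprocessing chain in Python with OpenCV is given below. The 3 × 3 all-ones kernel follows the description above, while the Gaussian blur size and the unsharp-masking weights are assumptions, since the paper does not state them.

    import cv2
    import numpy as np

    def preprocess(image_bgr):
        """Grayscale conversion, dilation, closing, and unsharp masking
        as in Sect. 3.1; blur/weight parameters are assumed."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        kernel = np.ones((3, 3), np.uint8)          # 3 x 3 kernel, all weights 1
        dilated = cv2.dilate(gray, kernel)          # expand regions, fill boundary gaps
        closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)  # fill small holes
        # Unsharp masking: subtract a blurred copy to undo the slight
        # smoothening introduced by dilation and closing.
        blurred = cv2.GaussianBlur(closed, (5, 5), 1.0)
        return cv2.addWeighted(closed, 1.5, blurred, -0.5, 0)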

3.2 Feature Extraction

In the proposed work, we use Zernike moments, polar cosine transforms, and Fourier-Mellin transforms in the feature extraction stage. Zernike moments [41] and polar cosine transforms (PCT) [42] provide better robustness to rotation, which is why they were chosen, while FMT [43], based on log-polar sampling, provides robustness to scaling. All of these feature extraction techniques produce low-dimensional features: the feature lengths for Zernike moments, PCT, and FMT are 12, 10, and 25 elements, respectively, which is considerably shorter than the SIFT and SURF feature vectors. Regarding the time taken by each algorithm to detect copy-move forgery with the number of iterations kept constant (N_it = 8), FMT has the highest feature extraction time (3.558 s per image) because of the higher dimensionality of its feature vector, and consequently also the highest feature matching time (7.633 s per image). In comparison, Zernike moments in Cartesian and polar co-ordinates had feature extraction times of 0.907 and 1.105 s per image, respectively, while PCT in Cartesian and polar co-ordinates had the lowest feature extraction times of 0.708 and 0.865 s per image. For both Zernike moments and PCT, the feature extraction time in Cartesian co-ordinates is lower than in polar co-ordinates.
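As one illustration of these low-dimensional features, the 12-element Zernike descriptor can be computed on a patch around each pixel with an off-the-shelf implementation. The sketch below uses the mahotas library with degree 5 (which yields exactly 12 moments) and an assumed patch radius; the paper names no library, so this approximates only its Zernike variant.

    import numpy as np
    from mahotas.features import zernike_moments

    def dense_zernike(gray, patch_radius=8, degree=5):
        """One 12-element Zernike descriptor per pixel whose patch fits
        inside the image; the moment magnitudes are rotation invariant.
        The dense double loop is written for clarity, not speed."""
        h, w = gray.shape
        r = patch_radius
        feats = {}
        for y in range(r, h - r):
            for x in range(r, w - r):
                patch = gray[y - r:y + r + 1, x - r:x + r + 1]
                feats[(y, x)] = zernike_moments(patch, r, degree=degree)
        return feats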


3.3 Feature Matching

A region is considered duplicated if the feature vectors of two regions are similar. The type of algorithm used in the matching stage typically affects the speed and accuracy of the system. Matching speed matters especially in dense-field techniques, since features are extracted for every pixel, making the number of comparisons considerably high. Therefore, we rely on an enhanced patch-match algorithm, which reduces the time complexity significantly [44]. In this method, nearest neighbors are found by measuring the Euclidean distance between two descriptors; if this distance is less than a pre-defined threshold value (Threshold_Euclidean), the features are considered matched. Nevertheless, finding the exact nearest neighbor (NN) is extremely time consuming, so the patch-match algorithm implements an approximate NN search over an offset field. This accelerates the search through fast randomized matching between patches of the image.

Let F(x,y) be the feature descriptor for a p × p patch P(x,y) centered at co-ordinates (x, y) in an image. The matching algorithm finds another feature descriptor F(x′,y′) of a p × p patch P(x′,y′) in the same image, centered at different co-ordinates (x′, y′), such that the Euclidean distance D_Euclidean between F(x,y) and F(x′,y′) is minimum. Here, the Euclidean distance is the measure of similarity. Two important phases that improve the efficiency of the patch-match algorithm are propagation and random search. In the first step, every patch P(x,y) in the image is randomly assigned another patch P(x′,y′) to compute the initial approximate NN field. Most of these initial assignments will be bad, but a certain number of good ones will satisfy the criterion D_Euclidean < Threshold_Euclidean. The motive of propagation is to spread these good assignments to the adjacent patches iteratively. Relying on propagation alone carries a high risk of being trapped in local minima; to escape this, a random search phase is added. Let P(x′,y′) be the current approximate NN of P(x,y): candidate matches are sampled from an exponentially decreasing distance around the patch centered on (x′, y′) until the search radius falls below one pixel. This process is iterated until it converges. Further, the enhanced patch-match algorithm improves the invariance of the algorithm to rotation and scale. The core of this enhancement is extending the search space from the two dimensions (x, y) to four dimensions (x, y, θ, s), where θ represents the degree of rotation and s the scale. Initialization, propagation, and random search are carried out over the range of possible co-ordinates, orientations, and scales until the algorithm converges.
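A compact sketch of the basic two-dimensional PatchMatch loop described above (random initialization, propagation, and exponentially shrinking random search) over precomputed per-pixel descriptors follows. The 4-D (x, y, θ, s) extension is omitted, and the minimum-offset guard against trivial self-matches is an assumption, not from the paper.

    import numpy as np

    def patch_match(feats, n_iters=8, min_offset=16, seed=0):
        """Approximate NN field over per-pixel descriptors feats (H, W, D)."""
        H, W, _ = feats.shape
        rng = np.random.default_rng(seed)

        def dist(y, x, cy, cx):
            d = feats[y, x] - feats[cy, cx]
            return float(np.dot(d, d))

        # Random initialization of the nearest-neighbor field (NNF).
        nnf = np.stack([rng.integers(0, H, (H, W)),
                        rng.integers(0, W, (H, W))], axis=-1)
        cost = np.array([[dist(y, x, *nnf[y, x]) for x in range(W)]
                         for y in range(H)])

        for it in range(n_iters):
            step = 1 if it % 2 == 0 else -1   # alternate scan direction
            ys = range(H) if step == 1 else range(H - 1, -1, -1)
            for y in ys:
                xs = range(W) if step == 1 else range(W - 1, -1, -1)
                for x in xs:
                    # Propagation: try the neighbors' matches, shifted by one pixel.
                    for dy, dx in ((step, 0), (0, step)):
                        ny, nx = y - dy, x - dx
                        if 0 <= ny < H and 0 <= nx < W:
                            cy, cx = nnf[ny, nx][0] + dy, nnf[ny, nx][1] + dx
                            if 0 <= cy < H and 0 <= cx < W:
                                d = dist(y, x, cy, cx)
                                if d < cost[y, x]:
                                    nnf[y, x] = (cy, cx)
                                    cost[y, x] = d
                    # Random search: exponentially shrinking window around the
                    # current best match, down to a one-pixel radius.
                    radius = max(H, W)
                    while radius >= 1:
                        cy = int(np.clip(nnf[y, x][0] + rng.integers(-radius, radius + 1), 0, H - 1))
                        cx = int(np.clip(nnf[y, x][1] + rng.integers(-radius, radius + 1), 0, W - 1))
                        # Skip near-identity offsets (trivial self-matches in CMFD).
                        if abs(cy - y) + abs(cx - x) >= min_offset:
                            d = dist(y, x, cy, cx)
                            if d < cost[y, x]:
                                nnf[y, x] = (cy, cx)
                                cost[y, x] = d
                        radius //= 2
        return nnf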

3.4 Post-processing

Post-processing is done to reduce the probability of false-positive matches and enhance the forgery detection process. Several techniques have been used for this, such as
segmentation- or clustering-based techniques and techniques based on thresholds and morphological operations. Agglomerative hierarchical clustering (AHC) has been combined with random sample consensus (RANSAC) to give a quite robust false-match removal algorithm [45]. Besides, simple linear iterative clustering (SLIC) has been combined with RANSAC [46] to divide the matching regions into meaningful patches for false-match removal, but overall, clustering- or segmentation-based techniques prove relatively slow. Therefore, in our approach, we use threshold-based constraints and morphological operations. First, median filtering is applied over a circular window of radius 4; it helps remove outliers, improving the detection process without changing the overall behavior of the image. Then, the dense linear fitting (DLF) error is computed over a radius of 6, and a pair of patches is considered tampered only if the DLF error between them is less than a pre-defined threshold T_DLF. Also, regions closer than distance T_D2 and regions smaller than T_S pixels are removed, because pixels in a close neighborhood are highly correlated (as in a smooth background) and might not actually represent forged regions. Lastly, dilation is applied to counteract the erosion of copy-move regions caused by the median filtering and the DLF test, and a mask is generated to show the forged region.
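The sketch below implements a simplified version of this post-processing on a binary match mask: median filtering, removal of clones smaller than T_S pixels, and a final dilation. The DLF-error test and the T_D2 distance test are not reproduced, and a square median window stands in for the circular radius-4 window of the paper.

    import numpy as np
    import cv2
    from scipy.ndimage import median_filter

    def postprocess(match_mask, t_s=1000):
        """Simplified post-processing of a 0/1 match mask (Sect. 3.4)."""
        filtered = median_filter(match_mask.astype(np.uint8), size=9)  # remove outliers
        n, labels, stats, _ = cv2.connectedComponentsWithStats(filtered, connectivity=8)
        cleaned = np.zeros_like(filtered)
        for i in range(1, n):                       # label 0 is the background
            if stats[i, cv2.CC_STAT_AREA] >= t_s:   # drop clones smaller than T_S
                cleaned[labels == i] = 1
        kernel = np.ones((3, 3), np.uint8)
        return cv2.dilate(cleaned, kernel)          # offset the erosion from filtering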

4 Experimental Setup, Results, and Discussions

The experiments presented in this paper were carried out in MATLAB R2016a (64-bit) on a system with an Intel® Core™ i3-4010U CPU @ 1.70 GHz and 4 GB RAM. Five different features, namely ZM-cart, ZM-polar, PCT-cart, PCT-polar, and FMT (log-polar), were evaluated on 100 forged images from the CoMoFoD small (512 × 512) dataset [47] and on the 100 original ground-truth images for the same forged images. True positive (TP) is the number of tampered images correctly predicted as forged, false negative (FN) the number of missed forged images, false positive (FP) the number of authentic images incorrectly detected as tampered, and true negative (TN) the number of original images correctly predicted as authentic. Four metrics, namely accuracy, precision, recall, and F1 score, were used to evaluate the performance of the different techniques; the formulas for their calculation are given below. Precision signifies the probability that a detected forgery is actually a forgery, while recall, also referred to as the true positive rate (TPR), signifies the probability that a forged image is detected. The F1 score is a measure that takes both precision and recall into account.

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Precision = TP / (TP + FP)    (2)

Recall = TP / (TP + FN)    (3)

F1 score = (2 × Precision × Recall) / (Precision + Recall)    (4)
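These four metrics follow directly from the confusion counts. As a small illustration, the counts below reproduce the dilation + closing + unsharp masking row of Table 2 (accuracy 83.5%, recall 0.75, precision 0.9036, F1 0.8197) for the 100 forged and 100 original images.

    def detection_metrics(tp, tn, fp, fn):
        """Accuracy, precision, recall, and F1 score from Eqs. (1)-(4)."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return accuracy, precision, recall, f1

    # 75 of 100 forged images detected; 8 of 100 originals misclassified.
    print(detection_metrics(tp=75, tn=92, fp=8, fn=25))
    # -> (0.835, 0.9036..., 0.75, 0.8197...)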

Results in Table 1 show that increasing the dense linear fitting error beyond T_DLF = 300, the value proposed in [40], improved the accuracy to a small extent. Reducing the minimum size of clones improved the results further. In all, combining the increased dense linear fitting error with the reduced minimum clone size increased the accuracy by more than 5% in each case. Figure 5a, e shows the ground-truth images and Fig. 5b, f the forged images that are input to our algorithm at two different parameter settings.

Table 1 Accuracy (%) for different techniques by varying T_DLF and T_S

DLF error (T_DLF)  Min. size of clones (T_S)  ZM-cart (%)  ZM-polar (%)  PCT-cart (%)  PCT-polar (%)  FMT (%)
200                1200                       67           66.5          69.5          68             68.5
300                1000                       74           73            73.5          74             77
300                1200                       68.5         68            70.5          69.5           71
300                1400                       66           66.5          68.5          69             70
400                1000                       75.5         75            76.5          77.5           78
400                1200                       70           70.5          72            71.5           75
400                1300                       67.5         69            70.5          71             72

Fig. 5 a Ground truth image. b Input forged image. c No mask at T_DLF = 300, T_S = 1200. d No matching at T_DLF = 300, T_S = 1200. e Ground truth image. f Input forged image. g Mask generated for forged region at T_DLF = 400, T_S = 1000. h Matched region at T_DLF = 400, T_S = 1000


Table 2 Applying different operations on PCT-polar features with T_DLF = 400 and T_S = 1000

Sr. No.  Operation                                Accuracy (%)  Recall  Precision  F1 score
1        Dilation                                 81            0.71    0.8875     0.7889
2        Opening                                  77            0.62    0.8857     0.7624
3        Closing                                  80            0.66    0.9167     0.7675
4        Erosion                                  77.5          0.65    0.8667     0.7429
5        Dilation and closing                     82            0.73    0.8902     0.8022
6        Dilation, closing, and unsharp masking   83.5          0.75    0.9036     0.8197

Figure 5c, g clearly shows the effect of this difference: a forged region that went undetected (no mask was generated) by PCT-polar at T_DLF = 300 and T_S = 1200 was successfully detected at T_DLF = 400 and T_S = 1000. A similar effect was observed when T_DLF alone was increased. From this analysis, T_DLF = 400 and T_S = 1000 were chosen, as they gave more precise results. After this, the morphological operations with a 3 × 3 kernel and the unsharp masking filter are applied to the image in the pre-processing stage. Table 2 shows the results when the different operations are applied. Of the four individual morphological operations, recall was highest for dilation and precision was highest for closing; therefore, the two were combined, with dilation followed by closing, which increased both the F1 score and the accuracy. In the next step, unsharp masking filtering was applied after this combination of morphological operations to counteract the smoothening effect. With this combination, accuracy increased from 77.5 to 83.5%, and the F1 score improved to 0.8197 for PCT-polar feature extraction. Lastly, Fig. 6 compares the accuracy, precision, recall, and F1 score of the proposed method against [40] for all the techniques. The results improved on all measures with our proposed methodology, mainly due to an increase in the number of true positives, which improved the F1 score.

Fig. 6 Comparison of results [proposed] versus [40]

5 Conclusion

By employing dense-field techniques enhanced with the patch-match algorithm, copy-move image forgeries can be detected efficiently. Accuracy can be improved further by increasing the DLF error threshold and reducing the minimum size of clones, so that small forged regions are detected, and a combination of morphological operations and unsharp masking can enhance the detection process. The results from the five feature extraction techniques indicate that the highest accuracy, 85%, is achieved with FMT, whose F1 score of 0.8715 is also the highest; however, the average computational time of this method is likewise the highest of the five. The computational time for Zernike moments and PCT is near
seven seconds per image, whereas for FMT it is near twelve seconds per image. These algorithms can become a viable choice for image forgery detection, offering lower computational complexity and time with reasonably accurate results. Still, more work can be done on enhancing the accuracy for forgeries in which the duplicated region is quite small, with minimal impact on false positives. Moreover, the majority of algorithms deal with forgery detection under affine transformations; algorithms should be improved to detect tampering under non-affine transformations as well.

References

1. Garg T, Saini H (2017) A review on various techniques of image forgery detection. Int J Eng Technol Sci Res 4(4):490–493
2. Luo W, Huang J, Qiu G (2006) Robust detection of region-duplication forgery in digital image. In: 18th international conference on pattern recognition (ICPR'06), vol 4. IEEE, Hong Kong, China, pp 746–749
3. Popescu AC, Farid H (2004) Exposing digital forgeries by detecting duplicated image regions. Tech. Rep. TR2004-515, Dartmouth College, United States, pp 1–11
4. Farid H (2009) A survey of image forgery detection. IEEE Signal Process Mag 26(2):16–25
5. Elwin JGR, Aditya TS, Shankar SM (2010) Survey on passive methods of image tampering detection. In: 2010 international conference on communication and computational intelligence (INCOCCI). IEEE, Erode, India, pp 431–436
6. Teerakanok S, Uehara T (2019) Copy-move forgery detection: a state-of-the-art technical review and analysis. IEEE Access 7:40550–40568
7. Lin X, Li JH, Wang SL, Cheng F, Huang XS (2018) Recent advances in passive digital image security forensics: a brief review. Engineering 4(1):29–39
8. Zhang L, Zhou PP (2010) Localized affine transform resistant watermarking in region-of-interest. Telecommun Syst 44(3–4):205–220
9. Cheddad A, Condell J, Curran K, Mc Kevitt P (2010) Digital image steganography: survey and analysis of current methods. Sig Process 90(3):727–752
10. Yang B, Sun X, Guo H, Xia Z, Chen X (2018) A copy-move forgery detection method based on CMFD-SIFT. Multimedia Tools Appl 77(1):837–855
11. Fadl SM, Semary NA (2017) Robust copy–move forgery revealing in digital images using polar coordinate system. Neurocomputing 265:57–65
12. Lee JC (2015) Copy-move image forgery detection based on Gabor magnitude. J Vis Commun Image Represent 31:320–334
13. Nithiya R, Veluchamy S (2016) Key point descriptor based copy and move image forgery detection system. In: 2016 second international conference on science technology engineering and management (ICONSTEM). IEEE, Chennai, India, pp 577–581
14. Alahmadi A, Hussain M, Aboalsamh H, Muhammad G, Bebis G, Mathkour H (2017) Passive detection of image forgery using DCT and local binary pattern. SIViP 11(1):81–88
15. Ustubioglu B, Ulutas G, Ulutas M, Nabiyev VV (2016) A new copy move forgery detection technique with automatic threshold determination. AEU Int J Electron Commun 70(8):1076–1087
16. Hilal A, Hamzeh T, Chantaf S (2017) Copy-move forgery detection using principal component analysis and discrete cosine transform. In: 2017 sensors networks smart and emerging technologies (SENSET). IEEE, Beirut, Lebanon, pp 1–4
17. Cao G, Chen Y, Zong G (2015) Detection of copy-move forgery in digital image using locality preserving projections. In: 2015 8th international congress on image and signal processing (CISP). IEEE, Shenyang, China, pp 599–603
18. Pun CM, Chung JL (2018) A two-stage localization for copy-move forgery detection. Inf Sci 463:33–55
19. Shahroudnejad A, Rahmati M (2016) Copy-move forgery detection in digital images using affine-SIFT. In: 2016 2nd international conference of signal processing and intelligent systems (ICSPIS). IEEE, Tehran, Iran, pp 1–5
20. Huang HY, Ciou AJ (2019) Copy-move forgery detection for image forensics using the superpixel segmentation and the Helmert transformation. EURASIP J Image Video Process 2019(1):68
21. Sánchez J, Monzón N, Salgado De La Nuez A (2018) An analysis and implementation of the harris corner detector. Image Process On Line 8:305–328
22. Das T, Hasan R, Azam MR, Uddin J (2018) A robust method for detecting copy-move image forgery using stationary wavelet transform and scale invariant feature transform. In: 2018 international conference on computer, communication, chemical, material and electronic engineering (IC4ME2). IEEE, Rajshahi, Bangladesh, pp 1–4
23. Emam M, Han Q, Li Q, Zhang H (2017) A robust detection algorithm for image copy-move forgery in smooth regions. In: 2017 international conference on circuits, system and simulation (ICCSS). IEEE, London, UK, pp 119–123
24. Wang C, Zhang Z, Zhou X (2018) An image copy-move forgery detection scheme based on A-KAZE and SURF features. Symmetry 10(12):706
25. Fridrich AJ, Soukal BD, Lukáš AJ (2003) Detection of copy-move forgery in digital images. In: Proceedings of digital forensic research workshop (DFRWS'03). IEEE, Ohio, United States
26. Popescu AC, Farid H (2005) Exposing digital forgeries by detecting traces of resampling. IEEE Trans Signal Process 53(2):758–767
27. Karsh RK, Laskar RH (2017) Robust image hashing through DWT-SVD and spectral residual method. EURASIP J Image Video Process 2017(1):31
28. Mahmood T, Mehmood Z, Shah M, Saba T (2018) A robust technique for copy-move forgery detection and localization in digital images via stationary wavelet and discrete cosine transform. J Vis Commun Image Represent 53:202–214
29. Lowe DG (1999) Object recognition from local scale-invariant features. In: Proceedings of the seventh IEEE international conference on computer vision, vol 2. IEEE, Kerkyra, Greece, pp 1150–1157
30. Alberry HA, Hegazy AA, Salama GI (2018) A fast SIFT based method for copy move forgery detection. Future Comput Inf J 3(2):159–165
31. Yadav N, Kapdi R (2015) Copy move forgery detection using SIFT and GMM. In: 2015 5th Nirma University international conference on engineering (NUiCONE). IEEE, Ahmedabad, India, pp 1–4
32. Muzaffer G, Ulutas G (2017) A fast and effective digital image copy move forgery detection with binarized SIFT. In: 2017 40th international conference on telecommunications and signal processing (TSP). IEEE, Barcelona, Spain, pp 595–598
33. Jin G, Wan X (2017) An improved method for SIFT-based copy–move forgery detection using non-maximum value suppression and optimized J-Linkage. Sig Process Image Commun 57:113–125
34. Li Y, Zhou J (2018) Fast and effective image copy-move forgery detection via hierarchical feature point matching. IEEE Trans Inf Forensics Secur 14(5):1307–1322
35. Li K, Li H, Yang B, Meng Q, Luo S (2014) Detection of image forgery based on improved PCA-SIFT. In: Wong WE, Zhu T (eds) Computer engineering and networking, vol 277. Lecture notes in electrical engineering. Springer, Cham, pp 679–686
36. Su B, Kaizhen Z (2012) Detection of copy forgery in digital images based on LPP-SIFT. In: 2012 international conference on industrial control and electronics engineering. IEEE, Xi'an, China, pp 1773–1776
37. Hashmi MF, Hambarde AR, Keskar AG (2013) Copy move forgery detection using DWT and SIFT features. In: 2013 13th international conference on intelligent systems design and applications. IEEE, Bangi, Malaysia, pp 188–193
38. Anand V, Hashmi MF, Keskar AG (2014) A copy move forgery detection to overcome sustained attacks using dyadic wavelet transform and SIFT methods. In: Nguyen NT, Attachoo B, Trawiński B, Somboonviwat K (eds) Intelligent information and database systems – 2014 Asian conference on intelligent information and database systems, vol 8397. Lecture notes in computer science. Springer, Cham, pp 530–542
39. Cozzolino D, Poggi G, Verdoliva L (2015) Efficient dense-field copy–move forgery detection. IEEE Trans Inf Forensics Secur 10(11):2284–2297
40. Abdalla YE, Iqbal MT, Shehata M (2017) Copy-move forgery detection based on enhanced patch-match. Int J Comput Sci Issues (IJCSI) 14(6):1–7
41. Ryu SJ, Kirchner M, Lee MJ, Lee HK (2013) Rotation invariant localization of duplicated image regions based on Zernike moments. IEEE Trans Inf Forensics Secur 8(8):1355–1370
42. Li Y (2013) Image copy-move forgery detection based on polar cosine transform and approximate nearest neighbor searching. Forensic Sci Int 224(1–3):59–67
43. Wu Q, Wang S, Zhang X (2011) Log-polar based scheme for revealing duplicated regions in digital images. IEEE Signal Process Lett 18(10):559–562
44. Barnes C, Shechtman E, Goldman DB, Finkelstein A (2010) The generalized patchmatch correspondence algorithm. In: Daniilidis K, Maragos P, Paragios N (eds) Computer vision – 2010 European conference on computer vision, vol 6313. Lecture notes in computer science. Springer, Berlin, Heidelberg, pp 29–43
45. Warif NBA, Wahab AWA, Idris MYI, Salleh R, Othman F (2017) SIFT-symmetry: a robust detection method for copy-move forgery with reflection attack. J Vis Commun Image Represent 46:219–232
46. Yang F, Li J, Lu W, Weng J (2017) Copy-move forgery detection based on hybrid features. Eng Appl Artif Intell 59:73–83
47. Tralic D, Zupancic I, Grgic S, Grgic M (2013) CoMoFoD—new database for copy-move forgery detection. In: Proceedings ELMAR-2013. IEEE, Zadar, Croatia, pp 49–54

Chapter 26

Scheduling-Based Energy-Efficient Water Quality Monitoring System for Aquaculture

Rasheed Abdul Haq and V. P. Harigovindan

1 Introduction

Aquaculture plays an important role in ensuring global food security for the world population, which is expected to reach 9.8 billion by 2050 [1]. It is also a fast-growing food sector of great significance to the economy, providing jobs especially for people from rural areas. Aquaculture is one of the best methods of food production with the least impact on the environment. Earlier, conventional methods [2] were used for checking the water parameters, involving the manual collection of samples and their transport between the site and the laboratory. This had the disadvantages of delay and the cost of manual labour, both of which affect production and add to its cost [3]. Monitoring aquaculture farms using sensors and microcontrollers automates the whole process, eliminating the labour cost and delay, with the added advantage of producing a large amount of data that is vital for the further development of aquaculture.

Water quality parameters such as temperature, dissolved oxygen (DO), salinity, pH level, alkalinity, ammonia, nitrate, and turbidity need to be continuously monitored. Each of these parameters is significant for sustained fish growth and yield. Dissolved oxygen (DO) is an important water quality parameter, showing the level of oxygen dissolved in the water. The quality of the water impacts both the growth and the quality of the fish [4]. Each species of fish has a different tolerance level

Rasheed Abdul Haq (B) · V. P. Harigovindan
Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal 609609, India
e-mail: [email protected]
V. P. Harigovindan
e-mail: [email protected]


towards various water quality parameters, and the farmer must be aware of them for proper management of the farm.

The challenge here is to reduce the energy consumption of the water quality monitoring (WQM) system. Energy scarcity and efficiency in wireless sensor networks (WSNs) are vital issues that affect the performance of any WSN-based system. Many energy-efficiency schemes already apply sleep and wake-up scheduling to sensor nodes, but these systems are not designed specifically for aquaculture water quality monitoring; they are based on general methods for increasing the energy efficiency of a system. The major application of WSNs is sensing data remotely, so energy efficiency is very important, since the sensor nodes run on batteries with limited energy storage, and because the sensors are remotely located, it is almost impossible to replace the batteries. The key sources of energy consumption in any such system are sensing, processing, and data transmission, with data transmission the major contributor. As for sensing, the energy consumption of each sensor is different.

Improving energy efficiency by scheduling the sensors is one of the most effective methods. The system works by switching OFF the sensing part of the system while it is not in use. The sensing frequency of each parameter is set depending on its significance and nature. In aquaculture, not all parameters are equally unstable: some, like dissolved oxygen and temperature, need to be monitored every hour, while most of the other parameters need to be monitored only once every 24 h, which saves sensing energy. Also, not all of these data need to be transmitted to the central node; to save energy, data are transmitted only when a parameter falls outside its normal range or in response to a query. Significant contributions of this work are as follows:

• A water quality monitoring system for aquaculture with a practical and customization-friendly architecture is developed. End-users can customize the system by using different types of sensors according to their requirements and budget.
• A simple sleep and wake-up scheduling technique is implemented on this system and the energy saving is evaluated. The results show that energy efficiency can be significantly improved using this technique.

2 Related Work

A study was done on different energy-saving and energy-harvesting techniques to find the most suitable method for extending the life of a WSN node for aquaculture. The goal is to extend the life of the sensor node without periodic maintenance or battery replacement. Energy efficiency through power reduction in WSNs is also studied as a way of increasing the life of the sensor node.


2.1 Power-Saving Methods

The WQM system aims to monitor the water quality parameters using sensor nodes located in different parts of a large pond. These sensor nodes collect the data and transmit it to the central node using a suitable communication protocol. The nodes have rechargeable batteries of fixed capacity, so for their prolonged and proper working, we have to minimize the total power consumption (sensing, processing, transmitting). Different studies have been conducted in this area; in this section, various techniques suitable for an aquaculture WQM system are reviewed.

Sleep and Wake-up Scheduling. The power consumption during idle mode is low compared to other modes like transmission and reception of data. Since the sensor nodes are idle most of the time, even a small reduction obtained by letting the nodes sleep reduces the power consumption of the system considerably. During sleep mode, no data collection or communication occurs; the sensor node wakes up to collect data, transmits or stores it, and goes back to sleep mode to save energy. This can be achieved by duty cycling [5], MAC protocols, and topology control [6]. The authors of [7] proposed a ZigBee-based WSN for a WQM system. Experimental results showed a difference in current consumption of just 6.7 mA between active and sleep modes, but with less than 10% active time, this still has a good impact on overall savings.

Data-driven Methods. Data transmission consumes more power than sensing and processing, hence some methods are employed to reduce transmission. For example, data can be collected and merged into a single packet using a merging technique. In a WQM system, the collected data are uploaded to the cloud, which requires Wi-Fi or GPRS connectivity. Wi-Fi/GPRS used for Internet connectivity consumes more power than the ZigBee links [8] used for communication between nodes. So, the data collected at the different nodes are sent to one node (the sink node), and only the relevant data is uploaded after removing redundant data. Data gathering and merging help reduce the usage of Wi-Fi or GPRS connectivity, reducing power consumption and thereby extending battery life. In [9], a combination of data merging, gathering, and compression techniques was implemented with good results. Another work [10] used a different technique to reduce transmission time: two sensors were merged into one node, and the data transmission frequency was reduced when the values were not fluctuating; for example, at nighttime, when temperature variation is low, the frequency of transmission was automatically reduced. Another study [11] involving data compression extended the battery life further, to 359 days.

Routing Schemes. Reducing the distance and the paths between the sensor node and the sink node reduces power consumption. For this purpose, techniques like sink mobility and multipath routing are used. In [12], the authors used a drone to mobilize the sink node, which collects data from all nodes and transmits it to the base station. In another work, they found the most energy-efficient route for transmitting soil moisture data for irrigation automation. A study reported
in [13] uses a routing metric to calculate the optimum path between the sink node and the sensor node, reducing the power consumption for transmission.

Radio Optimization. More power is required for transmission, hence optimization of the transmitter part improves energy efficiency [14]. Many optimization techniques have been proposed in areas such as transmission power control and modulation schemes to reduce power consumption during transmission. One study varied the transmission power according to the received signal strength, realizing high communication stability with low power consumption [15]. Another work [16] used an ON/OFF mechanism for the radio in the sensor node, assigned by a TDMA scheduler, to save energy. Another work [17] gives an in-depth study of energy-efficiency solutions for WSN-based water quality monitoring systems.

This review was performed to improve the energy efficiency of the WQM system for aquaculture. The most energy-efficient methods were identified, and the study was carried further by improving the energy efficiency through techniques like scheduling the sleep and wake-up of nodes and data merging. The review helps in identifying suitable methods for improving energy efficiency and thus extending the battery lifetime of a low-cost WQM system.

3 System Overview and Working

In this section, all the parts of the system are explained in detail. Figure 1 shows the architecture of the developed system for water quality monitoring in aquaculture. The system is divided into three parts. We start by describing the probes used to monitor the different water quality parameters, which form the sensing part, called Node S. Next, there is a node that converts the electrical signals, or voltage levels, from the sensors into data, known as Node M, where M stands for microcontroller. Finally, the node that controls all the other nodes and does the overall work, including the collection and storage of data and the communication that passes information to the farmers, is known as Node C, where C stands for Central Node. This is the overall architecture of the system.

Fig. 1 System architecture: Node S (pH, temperature, salinity, DO, and ammonium sensors) → Node M (Arduino UNO) → Node C (Raspberry Pi 3 B+)


An energy-saving mechanism is implemented and tested on this system. The system monitors five water quality parameters, and our objective is to reduce the power consumption due to sensing.

3.1 Hardware

As mentioned, the hardware has three nodes, each with a different purpose, which are integrated to form the system. Figure 1 shows the architecture, the data flow, and the three blocks of the system: the Sensor Node, the Microcontroller Node (Arduino UNO), and Node C (Raspberry Pi 3 B+).

Node S: This node has sensors for monitoring water quality parameters such as pH, temperature, salinity, dissolved oxygen (DO), and ammonia. The sensors were selected considering precision, endurance, and cost.

• pH Sensor: The analog pH sensor (PH-BTA) from Vernier is a cost-effective and easy-to-use sensor that can be used with Arduino controllers. The gel-filled pH sensor has a measuring range of 0 to 14. It has a BTA plug connection and produces approximately 1.75 V in a pH 7 buffer; the voltage decreases by about 0.25 V/pH as pH increases. The sensor is connected to the Arduino board using a shield, and once the required code (called a sketch) is written in the Arduino IDE, we can read the voltage and convert the electrical signal to a pH value easily.
• Temperature Sensor: For measuring the temperature, we use the temperature sensor TMP-BTA from Vernier. This sensor is suitable for long-term application in wet conditions thanks to its rugged stainless-steel probe. The sensor has a range of −40 to 135 °C, with a good resolution of 0.03 °C in our range of application (0–40 °C).
• Salinity Sensor: The salinity sensor (SAL-BTA) from Vernier can measure the total dissolved salt content of any water source. It has a range of 0–50,000 ppm.
• Dissolved Oxygen Sensor: The optical dissolved oxygen sensor (ODO-BTA) from Vernier uses luminescent technology to give fast, easy, and accurate measurements, making it a terrific choice for our application. It measures the dissolved oxygen concentration in surface water, indicating the quality of the aquatic environment.
• Ammonia Sensor: The ammonia sensor (NH4-BTA) is used to measure the aqueous concentration of ammonia. It is a combination-style, non-refillable, gel-filled electrode. The membrane on this sensor has a limited life expectancy, but the membrane module is replaceable.

Node M: This node is the microcontroller node, where the electrical signals from the sensors are converted into water quality parameters. Each sensor senses a water quality parameter and converts it to an electrical signal. Here, we use the ARDUINO UNO REV3, a microcontroller board based on the ATmega328P. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz quartz crystal, a USB connection, a power jack, and a reset button. It contains everything needed to support


the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started. The UNO is compatible with most shields designed for Arduino boards. Only for large-scale production would we design a custom microcontroller board with just the required resources, to cut cost.

Node C: The third node of the system is the central node, used for collecting all the data from the sensors through the Arduino microcontroller. This node can store the data for years and analyse it. The same node sends information to the farmers and uploads data to the cloud as required. We use a Raspberry Pi (RPi) 3 B+ as Node C in this system. It has a 64-bit quad-core processor running at 1.4 GHz, and its dual-band 2.4 GHz and 5 GHz wireless LAN and Bluetooth 4.2/BLE support the communication.

The Arduino takes the information from the sensor node and communicates with the farmers via Node C. Each sensor collects data periodically as signalled by the Arduino. The Arduino communicates with Node S and Node C and does all the processing on the acquired sensor data to produce the required data. After collecting the data from Node S at specific intervals, the Arduino passes the information to the farmers using Node C; it is the job of Node C to decide when to notify the farmers. The Arduino is the bridge between Node C and Node S. We have followed a modular design in the architecture to make expansion of the system easy; the same system can be used in very small farms and in large farms.
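Using the pH sensor characteristics quoted in Sect. 3.1 (about 1.75 V in a pH 7 buffer, falling by roughly 0.25 V per pH unit), the conversion applied after the analog read is a single linear formula. A sketch follows; the 10-bit ADC and 5 V reference match the UNO, but the nominal calibration constants are assumptions that should be replaced by buffer-solution calibration in practice.

    def adc_to_volts(raw, adc_max=1023, v_ref=5.0):
        """Arduino UNO 10-bit ADC reading to volts (5 V reference)."""
        return raw * v_ref / adc_max

    def volts_to_ph(volts, v_at_ph7=1.75, slope_v_per_ph=0.25):
        """PH-BTA output voltage to pH using the nominal calibration
        quoted above; recalibrate against pH 4/7/10 buffers for real use."""
        return 7.0 + (v_at_ph7 - volts) / slope_v_per_ph

    # Example: a raw reading of 307 (~1.50 V) corresponds to pH ~8.0.
    print(volts_to_ph(adc_to_volts(307)))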

4 Energy Saving Strategy

In this work, Node S is switched ON only once every hour and kept switched OFF for the remaining time.

Fig. 2 Power consumption for 24 h (single node)

Figure 2 shows the comparison of the power consumption of the system, with one Node S, Node M, and Node C, with and without scheduling over 24 h. Without scheduling, the system consumes around 4.5 W, and with scheduling this is reduced to 3 W. The total power consumption of the system over a day is 110.36 W without scheduling and 68.43 W with scheduling, a 38% reduction.

The focus is on reducing the power consumption of Node S. This node holds the sensors and is controlled by the Arduino microcontroller: Node S is powered ON and OFF by Node M, and the scheduling of each sensor can be done in Node M. The number of Node S and Node M units increases with the size of the farm, since a larger area needs more sensor nodes to collect data; hence, reducing the power consumption of Node S reduces the total power consumption of the system. Also, Node S and Node M are remotely placed, and replacing or recharging their batteries is a tedious job, whereas Node C will be near a power supply; hence, we make Node C do all the power-hungry processing.

In most WSN-based monitoring systems, energy efficiency is improved mainly by reducing power consumption through energy-efficient transmission mechanisms. This system uses energy-efficient transmission mechanisms as well, but with more focus on reducing the need for data transmission from Node M. The need for transmission can be reduced by using data merging techniques; here, we reduce the size of the data by transmitting only what needs to be transmitted. Simply put, Node M transmits the data to Node C only when a value deviates from its normal range or when Node C sends a request for the current data.
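A minimal sketch of this duty-cycling and report-by-exception logic is shown below in Python. The driver objects, their method names, and the normal ranges are illustrative assumptions; only the hourly wake-up and the transmit-on-deviation/query rule come from the text.

    import time

    # Illustrative normal ranges; actual limits depend on the fish species.
    NORMAL_RANGE = {"ph": (6.5, 8.5), "temp_c": (24.0, 32.0), "do_mg_l": (4.0, 12.0)}
    SENSE_PERIOD_S = 3600  # Node S wakes once every hour (Sect. 4)

    def out_of_range(readings):
        return any(not (lo <= readings[key] <= hi)
                   for key, (lo, hi) in NORMAL_RANGE.items())

    def run(node_s, node_c):
        """node_s and node_c are hypothetical driver objects: node_s powers
        the sensor rail and samples; node_c receives reports and queries."""
        while True:
            node_s.power_on()             # switch the sensing rail ON
            readings = node_s.read_all()  # e.g. {"ph": 7.9, "temp_c": 28.3, "do_mg_l": 6.4}
            node_s.power_off()            # Node S stays OFF between samples
            # Report by exception: transmit only when a value is abnormal
            # or the central node explicitly asked for the current data.
            if out_of_range(readings) or node_c.query_pending():
                node_c.send(readings)
            time.sleep(SENSE_PERIOD_S)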

5 Performance Analysis

This section compares the energy efficiency with and without scheduling. The system was run for 24 h and the power consumption was measured and stored. To evaluate the scheduling, the sensing part of the system was switched OFF most of the time; even though Node C was kept ON the whole time, scheduling the sensing part to wake and monitor the water quality once every hour achieved good performance. The total power consumption of the system over 24 h was calculated for different configurations. The energy saving of the system increases with the number of nodes: as Node C is always ON, the energy saved in a small-scale system is less than in a large-scale system with more sensor nodes. Figure 3 compares the power consumption of the WQM system with 10, 20, 30, 40, and 50 nodes without scheduling against a system with 50 nodes with scheduling. The plot shows that the power consumption of the system with scheduling is very low compared to the systems without scheduling. Even with 50 Node S and Node M units, the power consumption of the scheduled system remains, for most of the time, the same as that of a system with a single Node S and Node M; the power consumption peaks only once every hour, when Node S is switched ON.


Fig. 3 Power consumption comparison for 24 h (10–50 nodes)

Table 1 Power consumption comparison in 24 h by the system

Number of nodes  Power consumption without scheduling (W)  Power consumption with scheduling (W)  Power saved (%)
10               487.83                                    75.47                                  84.52
20               907.27                                    82.52                                  90.90
30               1326.68                                   89.57                                  93.24
40               1746.10                                   96.62                                  94.46
50               2165.53                                   103.67                                 95.21

Table 1 shows the power consumed by the water quality monitoring system in 24 h. The power saving increases with the number of nodes: 84.52% for 10 nodes, 90.90% for 20 nodes, 93.24% for 30 nodes, 94.46% for 40 nodes, and 95.21% for 50 nodes.

6 Conclusion

In this work, an energy-efficient water quality monitoring system for aquaculture is presented. This work helps farmers achieve accurate and reliable monitoring and control of water quality parameters. We have implemented a simple scheduling technique on the sensors to save energy. The results indicate that sleep and wake-up scheduling improves the energy efficiency of the system. It is also observed that the percentage of power saving increases with the number of nodes, which makes this a suitable scheme for large-scale farms.


Acknowledgements This research work was a part of the project titled “Design and Development of IoT based low-cost water quality monitoring and reporting system for aquaculture” funded by the Department of Science and Technology (SEED Division), Ministry of Science and Technology, Government of India.

References

1. World population projected to reach 9.8 billion in 2050, and 11.2 billion in 2100. www.un.org/development/desa/en/news/population/world-population-prospects-2017.html, 2017/06
2. Adu-Manu KS, Tapparello C, Heinzelman W, Katsriku FA, Abdulai JD (2017) Water quality monitoring using wireless sensor networks: current trends and future research directions. ACM Trans Sens Netw (TOSN) 13(1):1–41
3. Dong J, Wang G, Yan H, Xu J, Zhang X (2015) A survey of smart water quality monitoring system. Environ Sci Pollut Res 22(7):4893–4907
4. dos Santos Simoes F, Moreira AB, Bisinoti MC, Gimenez SMN, Yabe MJS (2008) Water quality index as a simple indicator of aquaculture effects on aquatic bodies. Ecol Ind 8(5):476–484
5. Abughalieh N, Steenhaut K, Nowé A, Anpalagan A (2014) Turbo codes for multihop wireless sensor networks with decode-and-forward mechanism. EURASIP J Wirel Commun Networking 2014(1):204
6. Nithya V, Ramachandran B (2011) Topology control based on demand MAC protocol for wireless sensor networks. In: Third international conference on advanced computing. IEEE, pp 248–253
7. Rasin Z, Abdullah MR (2012) Water quality monitoring system using zigbee based wireless sensor network. Int J Eng Technol IJET 9(10):24–28
8. Alhmiedat T (2017) Low-power environmental monitoring system for ZigBee wireless sensor network. KSII Trans Internet Inf Syst 11(10)
9. Gao Q, Zuo Y, Zhang J, Peng X (2010) Improving energy efficiency in a wireless sensor network by combining cooperative mimo with data aggregation. IEEE Trans Veh Technol 59(8):3956–3965
10. Azaza M, Tanougast C, Fabrizio E, Mami A (2016) Smart greenhouse fuzzy logic based control system enhanced with wireless data monitoring. ISA Trans 61:297–307
11. Ruirui Z, Liping C, Jianhua G, Zhijun M, Gang X (2010) An energy-efficient wireless sensor network used for farmland soil moisture monitoring. In: IET international conference on wireless sensor network (IET-WSN), pp 2–6
12. Mathur P, Nielsen RH, Prasad NR, Prasad R (2016) Data collection using miniature aerial vehicles in wireless sensor networks. IET Wirel Sens Syst 6(1):17–25
13. Chen Y, Chanet JP, Hou KM, Shi H, De Sousa G (2015) A scalable context-aware objective function of routing protocol for agricultural low-power and lossy networks. Sensors 15(8):19507–19540
14. Mesin L, Aram S, Pasero E (2014) A neural data-driven algorithm for smart sampling in wireless sensor networks. EURASIP J Wirel Commun Networking 2014(1):23
15. Wang J, Niu X, Zheng L, Zheng C, Wang Y (2016) Wireless mid-infrared spectroscopy sensor network for automatic carbon dioxide fertilization in a greenhouse environment. Sensors 16(11):1941
16. Sudha MN, Valarmathi M, Babu AS (2011) Energy efficient data transmission in automatic irrigation system using wireless sensor networks. Comput Electron Agric 78(2):215–221
17. Olatinwo SO, Joubert TH (2019) Energy efficient solutions in wireless sensor systems for water quality monitoring: a review. IEEE Sens J 19(5):1596–1625

Chapter 27

A Study of Code Clone Detection Techniques in Software Systems

Utkarsh Singh, Kuldeep Kumar, and Deepak Kumar Gupta

1 Introduction

In the information technology industry, software development is not performed under ideal conditions. It is a time-bound activity, and requirements from the stakeholders can change at random. To satisfy the stakeholders' changing requirements, developers must speed up and complete the product development within the given time limit [1]. Working under such conditions, developers often copy-paste code, either without modification or with minor modifications made by adding, deleting, or updating code statements. Doing this to a limited degree does not affect the product, but extreme use of the copy-paste approach degrades the quality of the software system [2].

Replicating existing code parts and pasting them, with or without alterations, into various areas of the source code of a software system is a very common practice in software development [2, 3]. The replicated code fragments are called code clones, and the process is called software code cloning. This sort of reuse of existing code may lead to bug propagation: a fault arising in one part of the code may arise in all the replicated sections of the code. To mitigate this problem, it is

U. Singh (B) · K. Kumar · D. Gupta
Department of Computer Science and Engineering, Dr. B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India
e-mail: [email protected]
K. Kumar
e-mail: [email protected]
D. Gupta
e-mail: [email protected]


very essential to locate all related code pieces throughout the source code, and for this, software code clone detection techniques are required [4].

In this paper, after reviewing existing works on software clones, we gather and summarize investigations in the area of software code clone detection. We explore different code clone detection techniques and provide a brief description of various clone terminologies, code clone evolution, the clone detection process, and a detailed description of code cloning with its pros and cons. This assists users in understanding the clone detection process and choosing appropriate techniques for detecting each possible type of clone. The detection and analysis of such clones can help in refactoring and maintenance processes [5].

The rest of the paper is organized as follows: basic terminologies used in the area of code clones are clarified in Sect. 2. Section 3 discusses the literature review. The advantages and disadvantages of code clones are discussed in Sect. 4, and a brief description of clone detection techniques is provided in Sect. 5. An overview of code clone evolution is given in Sect. 6. Section 7 concludes the paper with a detailed discussion of future directions.

2 Clone Terminologies

This section discusses different terminologies that are used during software code clone detection.

2.1 Clone Relation Terminologies

Code clone detection techniques produce results as clone classes, clone pairs, or both. A pair of code fragments is known as a clone pair when there is significant similarity between them. For example, considering the three code fragments Me1, Me2, and Me3 given in Table 1, we have five clone pairs: <Me1(e), Me2(e)>, <Me1(f), Me2(f)>, <Me1(f), Me3(e)>, <Me2(f), Me3(e)>, and <Me1(g), Me3(f)>. The similarity relation among the code fragments is an equivalence relation. A clone class is a maximal set of cloned code fragments in which any two code fragments are similar to each other. For example, from Table 1 we get the clone class {Me1(f), Me2(f), Me3(e)}, in which the three code fragments form clone pairs with each other, producing the three clone pairs <Me1(f), Me2(f)>, <Me1(f), Me3(e)>, and <Me2(f), Me3(e)>.

Table 1 An example illustrating clone pairs and clone class

Fragment Me1: (e) int sqrt = 0, i = 0, b = 0; while(i < n) {b = i * i; sqrt = sqrt * b; i++;} (f) if(sqrt > 0){sqrt = n + sqrt;} else{sqrt = 0;} (g) while(sqrt > n){if(sqrt > 0){sqrt = sqrt/n; sqrt = sqrt + n;}}
Fragment Me2: (e) int sqrt = 0, i = 0, b = 0; while(i < n){b = i * i; sqrt = sqrt*b; i++;} (f) if(sqrt > 0){sqrt = n + sqrt;} else{sqrt = 0;}
Fragment Me3: (e) if(res < 0){res = m+res;} else{res = 0;} (f) while(res > m){if(res > 0){res = res/m; res = res + m;}}
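Because every two fragments of a clone class form a clone pair, the pairs of a class are simply its 2-combinations; a one-line illustration:

    from itertools import combinations

    clone_class = ["Me1(f)", "Me2(f)", "Me3(e)"]
    # A clone class of size n yields n*(n-1)/2 clone pairs.
    clone_pairs = list(combinations(clone_class, 2))
    print(clone_pairs)
    # [('Me1(f)', 'Me2(f)'), ('Me1(f)', 'Me3(e)'), ('Me2(f)', 'Me3(e)')]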

2.2 Types of Clones

On the basis of syntactic and semantic similarities between code fragments, code clones can be separated into four types: exact clones, renamed clones, near-miss clones, and semantic clones [2]. Exact clones (type 1 clones) are code fragments that are identical except for white space and comments. Renamed clones (parameterized or type 2 clones) are code fragments that are syntactically identical apart from changes in identifiers, literals, and types. Near-miss clones (type 3 clones) are code fragments that have been duplicated with further modifications, such as statement insertions/deletions in addition to changes in identifiers, literals, types, and formats. As shown in Table 2, the code fragments in columns A and B, A and C, and A and D form exact, renamed, and near-miss code clones, respectively. Semantic clones (type 4 clones) are code fragments that need not be similar at the code level but perform similar operations. Table 3 gives an illustrative example of semantically similar code clones.

Table 2 Examples of code fragments illustrating different types of syntactic code similarities

Code fragment (A): float sub(float j1, float j2){float subtotal = 0.0; subtotal = j1 − j2; return subtotal;}
Exact code clone (B): float sub(float j1, float j2){float subtotal = 0.0; subtotal = j1 − j2; return subtotal;}
Renamed code clone (C): float sub(float x1, float x2){float total = 0.0; total = x1 − x2; return total;}
Near-miss code clone (D): float sub(float x1, float x2){float x4 = x2; float total = 0.0; total = x1 − x4; return total;}

Table 3 An example illustrating semantic similarity between code fragments

Code fragment (A): void fibonacci(int num) {int t1 = 0, t2 = 1, t3 = 0; for (int pj = 0; pj < num; pj++) {cout << t1 << " "; t3 = t1 + t2; t1 = t2; t2 = t3;}}
Code fragment (B): int fibo(int num1){ if (num1 <= 1) return num1; else return fibo(num1 − 1) + fibo(num1 − 2);}

If correlation(Sn, PN1) > correlation(Sn, PN2), then T(k) = 0; if correlation(Sn, PN2) ≥ correlation(Sn, PN1), then T(k) = 1, where T holds the extracted scrambled watermark bits.
Step 5: The matrix T undergoes the inverse Arnold's cat map using the same secret key that was utilized during scrambling. This generates the code-word C(x).
Step 6: The code-word C(x) is decoded using the inverse (7,4) Hamming code, in which the redundant bits are removed from the code-word and the data bits are extracted.
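A sketch of this extraction logic is given below: each watermarked segment is correlated with the two PN sequences (Step 4), and a standard syndrome decoder inverts the (7,4) Hamming code (Step 6). The parity-bit layout and the use of Pearson correlation are assumptions, as the paper does not specify them; the inverse Arnold scrambling step is omitted.

    import numpy as np

    def extract_bits(segments, pn1, pn2):
        """Correlation-based bit extraction: the PN sequence with the
        stronger correlation to segment Sn decides the bit T(k)."""
        bits = []
        for sn in segments:
            c1 = np.corrcoef(sn, pn1)[0, 1]
            c2 = np.corrcoef(sn, pn2)[0, 1]
            bits.append(0 if c1 > c2 else 1)
        return np.array(bits)

    def hamming74_decode(code):
        """Syndrome decoding of one (7,4) Hamming code-word (seven 0/1
        values). Assumes parity bits at 1-indexed positions 1, 2, 4;
        the paper does not state its layout."""
        code = list(code)
        syndrome = 0
        for i, bit in enumerate(code):
            if bit:
                syndrome ^= i + 1            # XOR of 1-indexed set positions
        if syndrome:                          # nonzero syndrome = error position
            code[syndrome - 1] ^= 1           # correct the single-bit error
        return [code[i] for i in (2, 4, 5, 6)]  # data bits at positions 3, 5, 6, 7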


4 Experimental Results

Standard host audio files of the types blues, classic, and pop are used in this experiment. Figure 4 shows the basic graphical user interface (GUI) of the experiment, displaying the standard host audio (pink) of type blues at 44.1 kHz and the watermarked audio (red). The PSNR of the watermarked audio is 37.7677 dB. Figure 5a shows a sample of the watermark image before embedding. The watermark image extracted from the audio of type blues (shown in Fig. 5b) is tested for quality using the standard metrics PSNR, SSIM, BER, and NC. As shown in the GUI, the PSNR is 96.3295 dB, the SSIM is 0.997505, and the NC value is 0.894472. These results are calculated under no attack. The SSIM graph of the extracted watermark image (Fig. 5b) is shown in Fig. 6. The range of the similarity index metric is 0–1; for good similarity between two images, the SSIM value approaches 1. The SSIM graph in Fig. 6 is for the audio of type blues only. The experiment was also run on the other standard audio

Fig. 4 GUI of audio watermarking in MATLAB


Fig. 5 (a) Watermark image (b) Extracted watermark image


Fig. 6 SSIM graph for extracted watermark image

types, i.e., classic and pop. Different audio types have different variations of amplitude and frequency. Their respective results are computed and shown in Table 1, which presents the analysis of the watermarked audio and the extracted watermark image. To check the robustness [9] of the algorithm, this paper shows the performance of the experiment under four types of attacks: noise, filtering, cropping, and resampling. Figure 7 shows the GUI of the experiment in which the noise attack is applied to the watermarked audio file of type blues. The PSNR of the attacked watermarked audio is measured at 36.7677 dB.

Table 1 Analysis of watermarked audio and their respective extracted watermark images (audio analysis: PSNR of watermarked audio; extracted watermark image analysis: BER, NC, PSNR, SSIM)

Audio type | PSNR of watermarked audio (dB) | BER | NC | PSNR (dB) | SSIM
Blues | 37.7677 | 0.25 | 0.89 | 96.329 | 0.99750
Classic | 37.7541 | 0.10 | 0.92 | 96.329 | 0.99845
Pop | 37.7481 | 0.11 | 0.96 | 97.245 | 0.99903


Fig. 7 GUI of audio watermarking under attacks in MATLAB

The extracted watermark image has a PSNR of 94.3295 dB, an SSIM of 0.996345, and an NC value of 0.864322. For a good-quality extracted watermark image, the values of NC and BER should be close to 1 and 0, respectively. The NC, BER, SSIM, and PSNR metrics are generally used to assess image quality; they are briefly discussed in [9, 10, 12]. The GUI in Fig. 7 is part of the experiment in which the noise attack is applied. The extracted watermark image from the attacked audio is shown in Fig. 8.

Fig. 8 Extracted watermark image from the attacked (noise) watermarked audio
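For reference, the standard definitions of these quality metrics (not specific to this paper; details are in the cited works [9, 10, 12]) are, for an image with peak value MAX and mean squared error MSE, N watermark bits, and original/extracted watermarks W and W′:

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right), \qquad \mathrm{BER} = \frac{\text{number of erroneous extracted bits}}{N}$$

$$\mathrm{NC} = \frac{\sum_i \sum_j W(i,j)\, W'(i,j)}{\sqrt{\sum_i \sum_j W(i,j)^2}\,\sqrt{\sum_i \sum_j W'(i,j)^2}}$$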


Fig. 9 SSIM graph of extracted watermark image

The SSIM graph of the extracted watermark image (Fig. 8) is shown in Fig. 9. The computed SSIM value for this extracted image is near 0.89, which indicates good similarity. The experimental result in Fig. 9 is for the noise attack only; the remaining attacks are also performed on the different types of host audio. Table 2 shows the quality assessment of the extracted watermark image under the different types of attacks.


Table 2 Analysis of extracted watermark under various attacks (audio analysis: attack applied; extracted watermark image analysis: BER, NC, PSNR, SSIM)

Audio type | Attack | BER | NC | PSNR (dB) | SSIM
Blues | Noise | 0.021 | 0.89 | 94.329 | 0.996
Blues | Filtering | 0.010 | 0.95 | 93.329 | 0.978
Blues | Cropping | 0.020 | 0.88 | 92.329 | 0.994
Blues | Resampling | 0.011 | 0.91 | 91.329 | 0.981
Classic | Noise | 0.032 | 0.96 | 94.329 | 0.996
Classic | Filtering | 0.012 | 0.94 | 93.329 | 0.987
Classic | Cropping | 0.048 | 0.93 | 92.329 | 0.994
Classic | Resampling | 0.021 | 0.96 | 91.329 | 0.976
Pop | Noise | 0.020 | 0.92 | 94.329 | 0.998
Pop | Filtering | 0.013 | 0.82 | 93.329 | 0.989
Pop | Cropping | 0.042 | 0.96 | 92.329 | 0.994
Pop | Resampling | 0.041 | 0.92 | 91.329 | 0.979

From Table 2, it is concluded that the quality assessment results for the extracted watermark image under the various attacks are satisfactory. Table 3 compares the proposed methodology with other techniques. The comparison shows the robustness of the proposed technique against a few recent prominent works in audio watermarking, and it is concluded that the proposed work is efficient at preserving digital ownership and the security of digital data.

Table 3 Comparison table

Techniques | Results
DWT-DCT-based blind audio watermarking [8] | The highest PSNR of watermarked audio (Jazz) is 3.1821 dB, and the highest PSNR of the extracted watermark image is 62 dB
Audio watermarking scheme based on quantum DCT [11] | The PSNR of watermarked audio with single quantum DCT-based audio watermarking is 36 dB
Audio watermarking based on DWT and rational dither modulation [6] | The PSNR of watermarked audio is between 20 and 30 dB
Audio watermarking based on echo time spread [9] | The PSNR of watermarked audio under the influence of attacks is found to be approximately 25 dB
Audio watermarking using spread spectrum design [12] | The payload in the design scheme is only 43 bps, which is quite low
Proposed technique | The PSNR of watermarked audio is 37.5 dB and is maintained up to 36.5 dB under various signal processing attacks; the payload is 256 bps


Table 4 contains the merits and demerits of the various techniques that are compared with the proposed methodology.

Table 4 Merits and demerits of the various techniques

Techniques | Merits | Limitations
DWT-DCT-based blind audio watermarking [8] | The extracted watermark image has a PSNR of 62 dB | The quality of the watermarked audio is very poor
Audio watermarking scheme based on quantum DCT [11] | High payload | High computational complexity
DWT and rational dither modulation [6] | Good imperceptibility of the audio signals | PSNR is low under attacks
Audio watermarking based on echo time spread [9] | Easy to embed and extract without causing any distortion in the data | Low payload
Spread spectrum design [12] | The design is more secure as the watermark data is implanted into various segments of the host signal | Very low data payload
Proposed technique | The quality of the watermarked audio and the extracted watermark, with and without attack, is good | The payload is quite low, and the technique is quite complex

5 Conclusion

A hybrid combination of decompositions is applied to successfully embed the watermark into the audio file. The average PSNR of the watermarked audio (without attack) is 37.7566 dB. The highest PSNR of the extracted watermark image is 97.245 dB (without attacks), and this value is maintained at an average of 92.5 dB under the different attacks. The watermark image is also encoded using a cyclic encoder followed by Arnold's cat map, and the encrypted image is then embedded into the audio file. The payload of data embedded in the audio signal is 256 bits. In the future, fingerprint images can be used as watermark images for embedding in audio files to ensure dual-layer authentication.

References

1. Akhaee MA, Saberian MJ, Feizi S, Marvasti F (2009) Robust audio data hiding using correlated quantization with histogram-based detector. IEEE Trans Multimedia 11(5):834–842
2. Ozer H, Sankur B, Memon N (2005) An SVD based audio watermarking technique. In: Proceedings of 7th ACM workshop on multimedia and security, MMSEC-2005, pp 51–56


3. Bhat VK, Sengupta I, Das A (2010) An adaptive audio watermarking based on the singular value decomposition in the wavelet domain. Digital Signal Process 20(6):1547–1558
4. Hu P, Peng D, Yi Z, Xiang Y (2016) Robust time-spread echo watermarking using characteristics of host signals. Electron Lett 52(1):5–6 (IEEE)
5. Hu HT, Hsu LY, Chou HH (2014) Perceptual-based DWPT-DCT framework for selective blind audio watermarking. Signal Process 105:316–327. https://doi.org/10.1016/j.sigpro.2014.05.003
6. Hu HT, Hsu LY (2016) A DWT-based rational dither modulation scheme for effective blind audio watermarking. Circ Syst Signal Process 35:553–572. https://doi.org/10.1007/s00034-015-0074-9
7. Karajeh H et al (2019) An audio watermarking scheme based on DWT and Schur dual technique. Multimedia Tools Appl 78:18395–18418 (Springer)
8. Subir, Joshi AM (2016) DWT-DCT based blind audio watermarking using Arnold scrambling and cyclic codes. In: 2016 3rd international conference on signal processing and integrated networks (SPIN), Noida, pp 79–84
9. Hua G, Goh J, Thing VLL (2015) Time-spread echo-based audio watermarking with optimized imperceptibility and robustness. IEEE/ACM Trans Audio Speech Lang Process 23(2):227–239
10. Gupta A, Kaur A, Dutta MK, Schimmel J (2019) Perceptually transparent & robust audio watermarking algorithm using multi resolution decomposition & cordic QR decomposition. In: 2019 42nd international conference on telecommunications and signal processing (TSP), Budapest, Hungary, pp 313–317
11. Chen K, Yan F, Iliyasu AM et al (2019) Dual quantum audio watermarking schemes based on quantum discrete cosine transform. Int J Theor Phys 58:502–521. https://doi.org/10.1007/s10773-018-3950-9
12. Li R, Xu S, Yang H (2016) Spread spectrum based audio watermarking followed by perceptual characteristic aware extraction. 10(3):266–273

Chapter 29

Study on the Negative Transconductance in a GaN/AlGaN-Based HEMT Sujit Kumar Singh, Awnish Kumar Tripathi, and Gaurav Saini

1 Introduction

For a long time, the choice of GaAs/AlGaAs-based heterojunctions for the high-electron-mobility transistor (HEMT) dominated the semiconductor industry. However, in the early 1990s, with the improvement in fabrication technologies, the first generation of GaN/AlGaN-based heterostructures for HEMTs was introduced [1]. In contrast to the GaAs/AlGaAs-based HEMT, the large energy band gap difference between the hetero-layers in the GaN/AlGaN-based HEMT allows it to attain a significantly larger concentration of electrons in its two-dimensional electron gas (2DEG) region (even with less doping in these hetero-layers) [2]. The added advantages of a higher breakdown voltage and improved output power performance at higher frequencies have made it more suitable for high-power and high-frequency switching applications [3–5].

S. K. Singh · A. K. Tripathi (B) Department of Physics, NIT Kurukshetra, Kurukshetra, Haryana, India e-mail: [email protected] S. K. Singh e-mail: [email protected] G. Saini School of VLSI Design and Embedded Systems, NIT Kurukshetra, Kurukshetra, Haryana, India e-mail: [email protected] Department of Electronics and Communication Engineering, NIT Kurukshetra, Kurukshetra, Haryana, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_29



The utilization of GaN/AlGaN-based HEMTs is, however, limited by considerable problems in the areas of epitaxial growth, self-heating, and negative transconductance [6–8]. The negative transconductance, in particular, has been reported to be severely affected by the concentration and nature of traps in the GaN and the ungated AlGaN regions of the HEMTs [8, 9]. The trap-induced mobility degradation in the GaN layers is considered a dominant factor that gives the HEMTs negative transconductance [10]. This problem of negative transconductance is also predominantly observed in the higher gate voltage ranges [11–14]. Baek et al. argue that the negative transconductance at higher gate voltages is a consequence of the increased dominance of the electric field at the source side of the channel over the electric field at the drain side of the channel, while the electron density in the channel remains saturated [12]. Under these conditions, the leakage of electrons from the source side of the channel into the gate is considered the dominant factor that gives the HEMT negative transconductance. Schuermeyer et al. propose that this negative transconductance results from electron leakage not just from the source side but also from the drain side of the channel [13]. The gate leakage current is stated to be more dominant in the regions directly below the edges of the gate, and in this high gate voltage range it is said to be controlled solely by the potential barrier at the hetero-interface. Hess et al. suggest that under a high gate electric field, the electrons in the 2DEG region leak into the wider band gap region, which has lower mobilities [14]. The deficiency of electrons in the 2DEG region and the reduced mobilities of the leaked electrons are proposed to give the HEMT its negative transconductance. The present article thus intends to reinvestigate this negative transconductance behavior at higher gate voltages for a GaN/AlGaN-based HEMT with the use of TCAD simulations. The 3D conduction band energy profile of the AlGaN barrier layer, electron density plots, and the electric field profiles are also observed to confirm the validity of the proposed theories for this negative transconductance.
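For clarity (a standard definition, not specific to this paper), the transconductance discussed throughout is the derivative of the drain current with respect to the gate voltage at a fixed drain bias; negative transconductance simply means the drain current falls as the gate voltage rises:

$$g_m = \left. \frac{\partial I_D}{\partial V_{GS}} \right|_{V_{DS} = \text{const}}, \qquad g_m < 0 \;\Longleftrightarrow\; I_D \text{ decreases as } V_{GS} \text{ increases}$$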

2 Simulation Strategy

Figure 1a represents the HEMT structure used for the simulation. The analysis and design of this GaN/AlGaN-based HEMT are carried out using Visual TCAD simulation tools. The HEMT system is calibrated against a standard normally-on HEMT as proposed by S. Hamady et al. (Fig. 1b) [15]. In the HEMT structure, a 30 nm thick Al0.25Ga0.75N layer is placed directly above the 1.1 µm thick GaN layer. An undoped layer of silicon with 0.26 µm thickness is used as a substrate to hold up the entire structure. The source and drain electrodes form ohmic-type contacts with the underlying AlGaN layer, while the gate, with a work function of 6 eV, maintains a Schottky-type interface with the same AlGaN layer. A gate length of 2 µm, a channel length of 9 µm, and a total HEMT width of 17 µm are chosen for the simulated HEMT.


Fig. 1 a HEMT structure used for the simulation. b Calibration of the simulated HEMT with the experimental data

The AlGaN barrier layer and the GaN layer are doped with donor-type Gaussian doping profiles with peak dopant concentrations of 1 × 1015 cm−3 and 1 × 1016 cm−3, respectively. The AlGaN and GaN regions directly below the source and drain electrodes are doped uniformly with a donor doping concentration of 6 × 1019 cm−3. The AlGaN/GaN hetero-interface is provided with an interfacial charge density of 3 × 1012 cm−2, and the mobility of the electrons in the 2DEG is increased to 1650 cm2 V−1 s−1. The drain is fixed at a bias of 1 V while the source is grounded for observing the drain current variation with respect to the gate voltage. It is also worth mentioning that, in most cases, the off-state drain current in the HEMT is governed by the leakage of electrons via the barrier layer, which in turn is caused by the presence of traps in the AlGaN layer. Moreover, it is well established that the presence of these traps significantly affects the transconductance behavior of the system and at times may even provide negative transconductance to the HEMT. Thus, in order to analyze solely the high-gate-voltage dependence of the negative transconductance, its dependence on the nature and concentration of traps needs to be eliminated while still maintaining a significant cut-off drain current (in order to match the transfer characteristics with the experimental ones). This is achieved by carefully placing a thin substrate electrode below the silicon layer instead of opting for the usual approach of manipulating the traps in the HEMT layers. The desired weak cut-off drain current is obtained by the leakage of electrons from the grounded substrate electrode into the positively biased drain electrode. Moreover, the undoped nature of the silicon substrate ensures that only a small number of electrons are allowed to leak from the substrate electrode to the drain. This leakage current is almost five orders of magnitude smaller than the on-state drain current of the HEMT and thus has a negligible effect on the on-state behavior of the HEMT, especially at the higher gate voltages where the analysis is primarily focused. In order to appropriately model the system, Fermi level statistics and the Shockley–Read–Hall recombination model are used for the HEMT layers, as represented by Eq. 1, where ET represents the trap energy level of the system and the rest of the terms have their usual meanings [16]. The simulation also resolves density gradient


equations (Eq. 2) [16] and makes use of Kane's model for band-to-band tunneling in the HEMT layers, as represented in Eq. 3 [16], where α, β, γ are fitting parameters and D is a statistical parameter.

$$U_{SRH} = \frac{pn - n_{ie}^2}{\tau_p \left[ n + n_{ie} \exp\left( \frac{E_T}{kT} \right) \right] + \tau_n \left[ p + n_{ie} \exp\left( \frac{-E_T}{kT} \right) \right]} \quad (1)$$

$$\nabla \cdot (\varepsilon \nabla \varphi) = -q \left( p - n + N_D^{+} - N_A^{-} \right) - \rho_s \quad (2)$$

$$G_{BB} = \alpha D \, \frac{E^{\gamma}}{\sqrt{E_g}} \, \exp\!\left( -\beta \, \frac{E_g^{3/2}}{E} \right) \quad (3)$$

3 Results and Discussion

In order to obtain the transfer characteristics of the calibrated HEMT at higher gate voltages, the gate voltage is gradually increased from −6 to 4 V while maintaining a drain-to-source bias (VDS) of 1 V. As seen in Fig. 2, the simulated HEMT provides a positive transconductance for gate voltages greater than its threshold voltage (~−3.8 V) and less than ~2.2 V. Beyond ~2.2 V, the drain current is drastically reduced by the onset of negative transconductance at higher gate voltages. This behavior of the drain current can be analytically reasoned using the 3D conduction band (EC) energy plots of the AlGaN barrier layer and the GaN layer, as represented in Fig. 3. Figure 3a, b represents the EC profile of the AlGaN barrier layer and the GaN layer at a −6 V gate voltage, respectively. In Fig. 3a, the front facing rightmost region represents the EC of the AlGaN barrier region in contact with the source, while the front facing leftmost region represents

Fig. 2 Drain current versus gate voltage in the calibrated HEMT at higher gate voltages


Fig. 3 Conduction band profile of the AlGaN barrier layer at a −6 V and b −3.8 V gate voltages and that of the GaN layer at c −6 V and d −3.8 V gate voltages

the EC of the AlGaN region in contact with the drain. As the drain remains biased at a relatively higher potential than the source, the EC at the drain side of the AlGaN barrier layer remains at a lower energy level than at the source side. Also, in the center region, the EC of the AlGaN barrier is seen to attain a considerably higher value, as it forms a Schottky-type contact with the gate metal. On the other hand, the rear side of the EC plot represents the EC of the AlGaN layer in contact with the GaN layer, which attains a relatively lower energy level. Moreover, in these 3D EC plots, the X-axis represents the EC profile of the AlGaN layer along the length axis of the HEMT, while the Y-axis represents the EC profile along the vertical axis of the HEMT (refer to Fig. 1). The Z-axis represents the conduction band minima of the corresponding AlGaN layer in its XY-plane. The EC behavior of the AlGaN layer along its Z-axis is considered the same as in its XY-plane of reference and is thus not included in the 3D view of the AlGaN conduction band. This is expected, as the design of the HEMT is essentially 2D (no design or patterning is done along the Z-axis of the HEMT). A similar argument is valid for the EC of the GaN layer as well (Fig. 3b). The front side, however, represents the EC of the GaN layer in contact with the AlGaN


barrier layer, and the rear side represents the EC of the GaN layer in contact with the silicon substrate. Moreover, it is important to mention that, in the case of the 3D EC plot of the GaN layer, the XY-plane of the 3D plot is quite clear and distinctive. This is because the thickness of the GaN layer (along the Y-axis) is significantly large in comparison to its length. However, in the case of the AlGaN layer, this is not true, as the thickness of the barrier layer is very small in comparison to its length. This in turn reduces the net spread of the 3D EC plot along its Y-axis, as shown in Fig. 3a. Also, in the regions directly below the source and the drain, the EC slope in the AlGaN layer is very small, as it remains uniformly doped in these side barrier regions. However, in the center barrier region, the EC of the AlGaN barrier layer in contact with the GaN layer (rear side) is visibly lower than the EC of the AlGaN barrier layer in contact with the gate (front side). This ensures the formation of the Schottky barrier at the gate–AlGaN interface. The presence of the lower EC at its GaN side also confirms that the electrons have to tunnel through the barrier layer in order to reach the gate terminal. As the probability of obtaining a leakage tunneling current is very small, a negligible off-state gate leakage current is observed in the HEMT, as shown in Fig. 2. As the gate voltage is increased (Fig. 3c, d), the Fermi energy level of the gate electrode is lowered, which simultaneously lowers the EC of the center AlGaN and GaN regions below the gate. This in turn increases the net concentration of electrons in the center regions of these layers (especially in the GaN layer), which then allows an increased flow of electrons from the source to the drain terminal of the HEMT via the 2DEG region. The slope change in the center EC of the GaN region (along the Y-axis) is evidence of this increase in the net concentration of electrons. When the gate voltage reaches its threshold, the concentration of electrons in this center 2DEG region is increased to such an extent that it allows a significant number of electrons to flow from the source to the drain. This considerably increases the drain current in the HEMT and hence turns the HEMT on. A further increase in gate voltage (still below ~2.2 V) only increases the net concentration of electrons in this 2DEG region and thus solely increases the drain current in the HEMT. It is also important to mention that the AlGaN and GaN regions below the source and drain electrodes are heavily doped in the HEMT, i.e., of the order of 1019 cm−3. In the on-state condition, however, a smaller concentration of electrons (of the order of 1012 cm−3) is present at the hetero-interface. Thus, there exists a large electron concentration gradient between the AlGaN regions below the source/drain and the AlGaN regions below the gate (or the AlGaN region uncovered by electrodes). This allows the formation of a slope in the EC of the AlGaN layer along the X-axis, as shown in Fig. 4. The slope remains directed from the source side of the AlGaN layer to the drain side of the same. A similar slope in the EC also exists in the GaN layer. In the center of the AlGaN layer (below the gate), however, the EC of the AlGaN layer in contact with the gate is rather uniform, as the Fermi level of the attached gate metal is not expected to vary along its length.
On the other hand, a slope in the EC is observed for the center AlGaN region in contact with the GaN


Fig. 4 Conduction band profile of the AlGaN barrier layer at a 1 V and b 2 V c 2.4 V d 2.8 V gate voltages

layer. The presence of the electron concentration gradient is evidence of this slope in the EC, as reasoned earlier. Moreover, for any gate voltage smaller than ~2.2 V, the EC of the AlGaN layer in contact with the gate electrode remains well above the EC of the AlGaN layer in contact with the GaN layer. This allows only a small, or even negligible, number of electrons to leak from the 2DEG into the gate via tunneling through the AlGaN layer (Fig. 4a, b), and thus produces a negligible on-state gate leakage current, as shown in Fig. 2. When the applied gate bias is comparable to ~2.2 V, the EC of the AlGaN layer in contact with the gate is reduced to such an extent that it even goes below the EC of the AlGaN layer in contact with GaN, as represented in Fig. 4c. This lack of a potential barrier at the center region of the AlGaN layer allows the electrons in the center 2DEG region to flow directly from the hetero-interface into the gate via the AlGaN layer. As the probability for an electron to flow directly from a higher energy level to a lower energy level is significantly greater than the probability of leakage via tunneling through a potential barrier, a large number of electrons leak from this 2DEG region into the gate in these high gate voltage ranges. This provides


a dramatic increase in the gate current while simultaneously reducing the drain current in the HEMT, and thus provides negative transconductance to the HEMT. Also, the presence of a slope in the EC of the center AlGaN region in contact with the GaN layer causes the electrons to initially start leaking from the source side of the center 2DEG region, as represented in Fig. 4c, d. This is observable for a small range of gate voltages slightly greater than 2.2 V. In this range, only the EC at the source side of the center AlGaN layer (in contact with GaN) attains an energy greater than the EC of the AlGaN layer in contact with the gate, while the EC at the drain side of the center AlGaN layer (in contact with GaN) stays well below the EC of the AlGaN layer in contact with the gate. This allows the electrons to leak solely from the source side of the center AlGaN layer instead of leaking uniformly from the whole center AlGaN region. When the gate voltage is increased beyond this range, the EC of the AlGaN layer in contact with the gate is lowered to such a degree that it even goes below the EC at the drain side of the center AlGaN region (in contact with GaN). This in turn allows the electrons to leak from the drain side of the center 2DEG region as well, so the gate leakage is obtained from the entire 2DEG region below the gate. The dominant leakage, however, is always obtained from the source side rather than the drain side of the 2DEG region. It is also worth mentioning that a small number of electrons are also expected to leak into the gate directly from the AlGaN region directly below the source electrode of the HEMT. However, as the barrier layer is very thin, most electrons in these side barrier regions tend to move into the GaN layer near the hetero-interface. The presence of the EC discontinuity at the hetero-interface largely motivates this flow of electrons towards the GaN layer and thus largely reduces the corresponding gate leakage current component. In order to validate the theory for this negative transconductance as proposed by Baek et al., the electric field profiles of this GaN/AlGaN-based HEMT are also reviewed, as discussed below. Figure 5a represents the electric field profile of the GaN layer at a −6 V gate voltage. The electric field at the center GaN region is seen to be relatively stronger than in the rest. The presence of an electron-deficient region between the gate metal electrode and the electron-enriched center GaN region (regions deep in the GaN layer from where the electrons were not extracted for the Schottky interface formation) can be reasoned to provide this high electric field to the center regions of the HEMT. As the gate voltage is increased, the Schottky barrier is lowered and thus more and more electrons are allowed to fill the center AlGaN and center GaN layers, which in turn reduces the effective electric field at the center region, as shown in Fig. 5b. Moreover, as the gate voltage is increased to a value close to 1.2 V, the electric field in the center region is lowered to such a degree that it even becomes smaller than the electric field at the GaN regions not directly below the electrodes (or, say, the regions uncovered by electrodes) (Fig. 5c). It is also important to mention that for any gate voltage smaller than ~2.2 V, the electric field at the drain side of the center GaN region is relatively stronger than the electric field at the source side of the center GaN region.
The presence of a slope in the EC of the GaN layer below the gate (refer to Fig. 4) can be reasoned to provide this drain-side dominance of the electric field, as


Fig. 5 Electric field profile of the GaN layer at a −6 V and b 0.8 V c 1.4 V d 2.8 V gate voltages

it allows more electrons to be present at the drain side of the center GaN region (below the gate) rather than at the source side of the same. Also, as the charge density in the gate electrode is the same throughout its length, the relative difference in the charge separation is greater between the gate and the drain side of the center GaN region than between the gate and the source side of the center GaN region. This explains why the electric field is stronger at the drain side of the center GaN region rather than at its source side. In other words, the relative energy difference between the EC of the AlGaN layer in contact with the gate electrode and the EC of the drain/source sides of the center AlGaN region in contact with the GaN layer can be used to correctly predict the electric field profile in the center GaN layer. If the EC difference between the gate and the drain side of the center AlGaN region is greater than the EC difference between the gate and the source side of the center AlGaN region, the electric field is expected to be stronger at the drain side rather than the source side of the center GaN region. The electric field in the center GaN region continues to be lowered as the gate voltage is increased, until the gate voltage reaches ~2.2 V. Beyond this voltage, the EC of the AlGaN layer in contact with the gate electrode is lowered below the EC of the AlGaN layer in contact with the GaN region, and thus their relative conduction band energy difference changes sign in these ranges. Also, as the gate voltage is increased


Fig. 6 Electron density profile of the GaN layer at a gate voltage of 3 V

beyond 2.2 V, the relative EC energy difference between the gate and the source side of the center AlGaN region becomes greater than the EC energy difference between the gate and the drain side of the center AlGaN region. This enables the electric field of the center GaN region to change its dominant direction from the drain side to the source side, as proposed by Baek et al. Moreover, as the gate voltage is further increased in the HEMT, the relative energy difference in the EC of the AlGaN layer in contact with the gate and with the GaN layer is increased. This allows the electric field to rise again in the center GaN region, as shown in Fig. 5d. Also, in the simulated AlGaN/GaN-based HEMT, the leakage of electrons via the gate electrode in its negative transconductance region is obtained not just from the source side of the channel but also from the drain side of the channel. The presence of a significant electron concentration in the center GaN region of the HEMT (as shown in Fig. 6) validates this argument. Similar conclusions can be arrived at by observing the EC band profile of the AlGaN layer at a 2.8 V gate voltage (Fig. 4d). As shown in the corresponding EC plot, at higher gate voltages, the EC band of the center AlGaN layer in contact with the gate electrode is reduced to an energy level even below the EC energy of the AlGaN layer in contact with the GaN. This allows the electrons to leak not only from the source side of the center GaN region but also from the drain side of the same. This is in agreement with the theory of negative transconductance as proposed by Schuermeyer et al. However, the dominant leakage of electrons is observed from the source side of the channel (the center GaN region) rather than its drain side. Moreover, as the effect of the Schottky barrier is more or less removed from the AlGaN barrier layer at higher gate voltages, the electrons in the 2DEG region can also be reasoned to simultaneously leak from the high-mobility GaN region to the low-mobility AlGaN region under the high gate electric field. This reduces the effective concentration of electrons in the 2DEG region and thus provides negative transconductance to the HEMT, as mentioned by Hess et al.


4 Conclusion

The simulation and analysis of the GaN/AlGaN-based HEMT reveal the existence of negative transconductance at higher gate voltages. The conduction band profiles, electron concentration, and the electric field profiles have been used to justify the existence of this negative transconductance. A high degree of correlation is found between the negative transconductance and the relative conduction band energy difference of the AlGaN layer in contact with the gate electrode and in contact with the GaN layer. In particular, when the conduction band of the AlGaN layer in contact with the gate attains an energy level lower than the conduction band of the AlGaN layer in contact with the GaN layer, negative transconductance is observed in the HEMT. The conduction band profile of the GaN layer has also been found to be highly useful in explaining the electric field variations of the GaN layer before and after the onset of the negative transconductance in the HEMT. The gate leakage current in this negative transconductance region of operation is dominated by the leakage of electrons from the source side of the GaN region below the gate, while a relatively small leakage is also obtained from the drain side of the same.

References

1. Asif Khan M, Bhattarai A, Kuznia JN, Olson DT (1993) High electron mobility transistor based on a GaN-AlxGa1−xN heterojunction. Appl Phys Lett 63(9):1214–1215
2. Lenka TR, Panda AK (2011) Characteristics study of 2DEG transport properties of AlGaN/GaN and AlGaAs/GaAs-based HEMT. Semiconductors 45(5):650–656
3. Mishra UK, Shen L, Kazior TE, Wu YF (2008) GaN-based RF power devices and amplifiers. Proc IEEE 96(2):287–305
4. Dora Y, Chakraborty A, McCarthy L, Keller S, DenBaars SP, Mishra UK (2006) High breakdown voltage achieved on AlGaN/GaN HEMTs with integrated slant field plates. IEEE Electron Device Lett 27(9):713–715
5. Mizutani T, Ito M, Kishimoto S, Nakamura F (2007) AlGaN/GaN HEMTs with thin InGaN cap layer for normally off operation. IEEE Electron Device Lett 28(7):549–551
6. Volcheck VS, Stempitsky VR (2017, November) Suppression of the self-heating effect in GaN HEMT by few-layer graphene heat spreading elements. J Phys Conf Ser 917(8):082015 (IOP Publishing)
7. Leone S, Benkhelifa F, Kirste L, Manz C, Quay R, Ambacher O (2019) Epitaxial growth optimization of AlGaN/GaN high electron mobility transistor structures on 3C-SiC/Si. J Appl Phys 125(23):235701
8. Ge M, Cai Q, Zhang BH, Chen DJ, Hu LQ, Xue JJ, Lu H, Zhang R, Zheng YD (2019) Negative transconductance effect in p-GaN gate AlGaN/GaN HEMTs by traps in unintentionally doped GaN buffer layer. Chin Phys B 28(10):107301
9. Balakrishnan VR, Kumar V, Ghosh S (2005) The origin of low-frequency negative transconductance dispersion in a pseudomorphic HEMT. Semicond Sci Technol 20(8):783
10. Lin J, Liu H, Wang S, Liu C, Li M, Wu L (2019) Effect of the high-temperature off-state stresses on the degradation of AlGaN/GaN HEMTs. Electronics 8(11):1339
11. Vitanov S, Palankovski V, Maroldt S, Quay R (2010, September) Non-linearity of transconductance and source–gate resistance of HEMTs. In: Proceedings of European solid-state device research conference on fringe poster session


12. Baek J, Shur M (1990) Mechanism of negative transconductance in heterostructure field-effect transistors. IEEE Trans Electron Devices 37(8):1917–1921
13. Schuermeyer FL, Shur M, Grider DE (1991) Gate current in self-aligned n-channel and p-channel pseudomorphic heterostructure field-effect transistors. IEEE Electron Device Lett 12(10):571–573
14. Hess K, Morkoc H, Shichijo H, Streetman BG (1979) Negative differential resistance through real-space electron transfer. Appl Phys Lett 35(6):469–471
15. Hamady S, Morancho F, Beydoun B, Austin P, Gavelle M (2014, August) P-doped region below the AlGaN/GaN interface for normally-off HEMT. In: 2014 16th European conference on power electronics and applications. IEEE, pp 1–8
16. Cogenda Pvt. Ltd. (2008) Singapore, Genius, 3-D device simulator, Version 1.9.3, Reference Manual

Chapter 30

Hybrid Anti-phishing Approach for Detecting Phishing Webpage Hosted on Hijacked Server and Zero-Day Phishing Webpage Ankush Gupta and Santosh Kumar

1 Introduction

Phishing is an information security issue where a forged website imitates a legitimate website to fool users into disclosing their sensitive, confidential information: credentials (username, password), bank account details, social security numbers, credit card information, etc. This compromised sensitive information is used to access personal accounts, resulting in financial loss and identity theft. Phishing websites are dangerous to individuals and organizations. The word phishing is taken from the concept of 'fishing' for a target [1]: sending out bait and waiting for the outcome. The replacement of 'f' with the 'ph' phoneme comes from the practice of illegally exploring telephone systems, known as phreaking. Phishing is a threat to information security that manipulates victims into disclosing sensitive information through fake websites. Fake websites look and feel the same as the legitimate website; victims are not able to differentiate between phishing and legitimate websites because the attacker creates the phishing website by copying the source code of the targeted legitimate site and modifying it accordingly. Figure 1 shows a PayPal phishing website that looks similar to the legitimate PayPal website; the only clue to identify the phishing webpage is the uniform resource locator (URL).

A. Gupta (B) · S. Kumar National Institute of Technology Kurukshetra, Kurukshetra, India e-mail: [email protected] S. Kumar e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_30


Fig. 1 PayPal phishing webpage

As per the report of the Anti-Phishing Working Group (APWG) [2], phishing attacks are on the rise. Figure 2 represents the number of unique phishing attacks from January 2019 to September 2019; the maximum, 93,194, was recorded in July 2019. More than two-thirds of all phishing websites use secure socket layer (SSL) protection (the https protocol), a clear indicator that relying on SSL alone is not safe. The most targeted industry sectors in the third-quarter report are SaaS/webmail (33%), payment (21%), and financial institutions (19%).

1.1 Phishing Attack Process

Figure 3 shows the phishing attack process. The attacker creates the phishing webpage using the legitimate webpage's source code and then sends it to the victim via email, social networks, or instant messaging. The attacker commonly uses email to deliver the phishing attack, with a malicious URL attached. After receiving this


Fig. 2 Phishing website incidents

Fig. 3 Phishing attack process

malicious URL, the victim clicks on it, views the fake website, and submits credentials to the phishing webpage.

1.2 Why Phishing Attacks Work

The following may be reasons for a successful phishing attack [3]:
• Lack of awareness of phishing attacks.
• Lack of awareness of the URL (uniform resource locator) structure.
• Lack of awareness of security indicators and security certificates.

The remaining sections of the paper are organized as follows. Section 2 introduces the anti-phishing literature. Section 3 describes our proposed solution. Section 4


presents experimental results to validate our proposed system, the discussion follows in Sect. 5, and the conclusion in Sect. 6.

2 Related Work

Numerous approaches have been proposed to address phishing attacks; here we discuss some salient ones.

2.1 Blacklist-Based Methods

A blacklist database includes a list of websites that have been reported as phishing. Such blacklist databases are maintained and managed by organizations like PhishTank, Google, and Microsoft, and are updated periodically. This approach is not able to detect zero-day phishing URLs that are absent from the blacklist; such URLs are left unidentified. The browser denies access to URLs present in the blacklist database, so users cannot surf blacklisted URLs. The Google Safe Browsing API [4] uses the blacklist concept. PhishNet [5] creates all possible variants of blacklisted URLs using heuristics to increase detection accuracy.

2.2 Heuristic-Based Methods

Heuristic-based methods take advantage of heuristic rules, based on past studies and experiments on phishing attacks, to characterize webpages. Heuristics are features examined to check the status of a webpage, i.e., whether it is legitimate or phishing. The authors of [6, 7] used heuristics like an '@' symbol in the URL, a pop-up window for the password field, an IP address in the domain part, right-click disabled, etc., derived rules from these heuristics, and determined a threshold for the prediction of phishing webpages. Heuristic rules may include the age of the domain, the use of malicious JavaScript, an IP address in the domain part of the URL, a long URL, etc., as illustrated in the sketch below.
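The following toy sketch (our own illustration; the rule set and the idea of scoring by counting triggered rules are assumptions, not the cited authors' exact heuristics) shows what such rule-based checks can look like:

```python
import re
from urllib.parse import urlparse

def heuristic_score(url: str) -> int:
    """Count how many simple phishing heuristics the URL triggers."""
    host = urlparse(url).netloc
    rules = [
        "@" in url,                              # '@' symbol in the URL
        bool(re.fullmatch(r"[\d.]+", host)),     # IP address used as the domain
        len(url) > 75,                           # unusually long URL
        host.count("-") > 2,                     # many hyphens in the domain
    ]
    return sum(rules)

# A page exceeding some chosen threshold of triggered rules is flagged as phishing.
print(heuristic_score("http://192.168.2.1/paypal-login-secure-verify@evil"))  # 2
```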

2.3 Machine Learning Methods

The machine learning approach extracts features from a dataset of legitimate websites and a dataset of phishing websites, then trains a classifier with these features to classify a requested webpage based on its features. The accuracy of these techniques depends on the classifier used. The author of [8] used a logistic regression classifier to classify websites using their link features.


The author of [9] used a logistic regression classifier for prediction based on the numerical characteristics of the URL and the phishing words found in the URL after segmentation. The accuracy of machine learning approaches depends on the training dataset.
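A minimal sketch of this style of classifier (illustrative only; the features, data, and labels here are made up and are not from [8] or [9]):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list:
    # Toy numerical URL characteristics: length, digit count, dot count, '@' flag
    return [len(url), sum(c.isdigit() for c in url), url.count("."), "@" in url]

# Hypothetical labeled training data: 0 = legitimate, 1 = phishing
urls = ["https://www.paypal.com/signin",
        "http://paypal.secure-login.example.tk/verify@acct"]
X = np.array([url_features(u) for u in urls])
y = np.array([0, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict([url_features("http://paypa1-login.example.tk")]))
```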

2.4 Hybrid Methods

A hybrid approach combines two or more approaches to build a better system for predicting whether a website is legitimate or phishing. The authors of [10] proposed a three-phase hybrid approach that uses a search engine-based method for filtering legitimate sites, a heuristic-rule approach for filtering phishing websites, and a machine learning approach to classify the remaining websites.

2.5 Search Engine-Based Methods

As the name suggests, search engine-based approaches use a search engine. First, these approaches extract a keyword (search query) from the URL, the webpage content, or both. The search query is then fed to a search engine, which returns search results for it. If the requested webpage's domain is among the result domains, the queried domain is considered legitimate; otherwise it is considered phishing. The authors of [11] use the Term Frequency-Inverse Document Frequency (TF-IDF) technique to extract the search query: TF-IDF weighs all terms of a document by their frequency, and they use the five highest-weighted terms, feed them to a search engine, and compare the requested domain with the top 30 result domains. If the requested domain matches one of the top 30 fetched domains, it is declared legitimate; otherwise, phishing. It is a language-dependent approach, meaning that only English websites surface in the top search results with five high-frequency terms, and it can be defeated if the attacker uses images instead of words. The authors of [12] used optical character recognition (OCR): the browser first takes a screenshot of the requested webpage, the screenshot is converted to text using OCR, and this text is used as the search query. If the queried domain matches the search result domains, the page is considered legitimate, otherwise phishing. This approach fails when there is no text or logo present on the requested webpage. The authors of [13] used the logo as the keyword (search query) fed to the search engine, applying a machine learning approach to extract the logo from the requested webpage, which is a challenging and time-consuming process. This approach gives irrelevant results when there are multiple logos on the same webpage (e.g., the HackerRank login page contains the logos of Google, Facebook, LinkedIn, etc.) and also fails for webpages that do not include a logo.
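To illustrate the TF-IDF keyword extraction step of [11] (a rough sketch of ours; the toy corpus and helper name are assumptions, only the "top five terms as search query" idea comes from the description above):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_terms(page_text: str, corpus: list, k: int = 5) -> list:
    """Return the k highest-weighted TF-IDF terms of page_text,
    to be fed to a search engine as the search query."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(corpus + [page_text])
    weights = tfidf.toarray()[-1]            # TF-IDF row for page_text
    terms = vec.get_feature_names_out()
    ranked = sorted(zip(weights, terms), reverse=True)
    return [t for w, t in ranked[:k] if w > 0]

corpus = ["some reference page text", "another unrelated document"]
print(top_terms("paypal account login secure payment paypal", corpus))
```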


The authors of [14] used the favicon (the image attached to the title) as the keyword (search query) fed to the search engine, with a machine learning approach to extract the favicon from the requested webpage, again a challenging and time-consuming process. This approach fails for webpages that do not include a favicon. The authors of [15] proposed a search engine-based approach using the search query title + domain, where the domain is taken from the requested webpage's URL and the title from the webpage content. This approach is language-dependent, meaning that only English websites surface in the search engine results, and it fails if the attacker does not use a title on the phishing webpage. The authors of [16] proposed a two-level approach in which they use domain + title and domain alone as the search queries for a search engine-based method. They extract the domain name from the URL and the title from the webpage content, using domain and title as the search query for English webpages and the domain alone for non-English webpages, and then feed the query to a search engine to determine the legitimacy of the requested webpage. In the second phase, they use link-based features to classify websites into legitimate or phishing to reduce false positives. This approach, however, is unable to detect phishing webpages hosted on a legitimate hijacked server. Our proposed approach, described in the next section, is efficient in terms of time and able to detect phishing webpages hosted on a legitimate hijacked server as well as zero-day phishing attacks.

3 Proposed Solution

Search engine-based phishing detection methods are preferable to machine learning due to their fast response time, high accuracy, and simple nature. Nowadays, however, attackers break search engine-based approaches by using phishing sites hosted on a hijacked server: the search engine ranks results by PageRank, and by using a hijacked server the attacker takes advantage of its good PageRank. Therefore, there is a need for a search engine-based approach that is capable of detecting phishing webpages hosted on a hijacked server.

3.1 Design Objectives

Recently, various search engine-based approaches have been proposed, as described in the previous section. Search engine-based phishing detection approaches became popular due to their high accuracy, fast response, and simple nature. However, some problems are associated with search engine-based methods, and our proposed approach targets these limitations. The design goals of the proposed anti-phishing approach are as follows.


1. Language independence: Our first goal is to create a language-independent anti-phishing approach that is suitable for English as well as non-English websites.
2. Detection of phishing webpages hosted on a hijacked server: Many recently proposed approaches are unable to detect a phishing webpage hosted on a hijacked server. One of our most important goals is to create a phishing detection approach that detects phishing webpages hosted on a legitimate hijacked server.
3. An efficient search query: Some proposed approaches use a very long query that yields irrelevant search results. Our proposed approach uses a search query that is efficient in terms of words.
4. Real-time protection: Some search engine-based approaches use a logo, favicon, image, etc., as the search query, but extraction of these is complicated and may produce unrelated or wrong predictions. Our proposed approach is suitable for real-time phishing protection.

3.2 System Architecture

The flowchart of our proposed approach is represented in Fig. 5. Our approach is effective for phishing webpages hosted on a hijacked server and for zero-day phishing attacks. We found that a phishing webpage hosted on a hijacked server does not use any of the compromised server's resources, such as images, CSS, the favicon, JavaScript, hyperlinks, etc. First, we extract the domain name and brand name (second-level domain name) from the URL, and the title, title language, and copyright content from the source code of the requested webpage. Then, we check for the presence of the brand name in the title or copyright. If the brand name is present in the title or copyright, the requested webpage uses server resources, so there is little chance of it being a phishing webpage hosted on a hijacked server. The title language indicates the language of the webpage (English or non-English), because a non-English webpage contains a non-English title. In this case we use only search engine results for prediction, with the search query domain + title; the requested webpage does not undergo a similarity check, since the similarity match is used for the detection of phishing webpages hosted on a hijacked server. If the queried domain matches one of the top 10 search result domains of the search engine, the requested webpage is announced as legitimate; otherwise, phishing. Phishing webpages hosted on a hijacked server and non-English webpages undergo the hyperlink similarity match phase: a non-English webpage is detected by its non-English title, and a phishing webpage hosted on a hijacked server by the fact that it does not use any of the hijacked server's resources. For these types of webpages, the queried webpage's domain is first matched against the top 10 results of the search engine for the search query domain. If the requested webpage's domain matches one of the ten search results, the webpage undergoes the similarity match phase; otherwise, it is announced as


phishing. Based on the hyperlink similarity match, our proposed approach decides whether the requested webpage is phishing or legitimate. Our approach uses the efficient search query domain + title for English webpages, and domain alone for non-English webpages and phishing webpages hosted on a legitimate hijacked server. By using the hyperlink similarity match phase, our approach is capable of detecting a phishing webpage hosted on a legitimate hijacked server, and its low response time makes it suitable for real-time phishing detection. The URL structure, domain, and brand_name used in our proposed approach are described below:

URL structure: protocol://subdomain.second-level-domain.TLD/path
Domain: second-level-domain.TLD
Brand_name: second-level-domain

3.3 Algorithm for Our Proposed Solution

Input: URL. Output: status of webpage (phishing/legitimate) (Figs. 4 and 5).
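The algorithm itself is given as a figure in the original paper; below is a rough Python sketch of the decision flow of Sect. 3.2, using the libraries listed in Sect. 4.1. The English-title test (`title.isascii()`), the brand check against the full page text as a stand-in for the copyright check, the 0.5 hyperlink-similarity threshold, and the `search(...)` signature (which varies across googlesearch package versions) are all our assumptions, not the authors' exact implementation.

```python
from urllib.request import urlopen
from bs4 import BeautifulSoup          # title/copyright/hyperlink extraction
from tld import get_fld                # domain = second-level-domain.TLD
from googlesearch import search        # top-10 Google results (API varies by version)

def hyperlink_similarity(soup, domain, threshold=0.5):
    # Assumed rule: a page drawing most of its hyperlinks from its own domain
    # is using the server's resources and is treated as legitimate.
    links = [a["href"] for a in soup.find_all("a", href=True)]
    own = sum(1 for href in links if domain in href)
    return bool(links) and own / len(links) >= threshold

def classify(url):
    domain = get_fld(url)                     # e.g. "paypal.com"
    brand = domain.split(".")[0]              # second-level domain, e.g. "paypal"
    soup = BeautifulSoup(urlopen(url, timeout=10).read(), "html.parser")
    title = (soup.title.string or "") if soup.title else ""
    page_text = soup.get_text(" ")            # stand-in for the copyright content

    uses_server_resources = (brand.lower() in title.lower()
                             or brand.lower() in page_text.lower())
    if uses_server_resources and title.isascii():
        # English page that uses server resources: search-engine check only
        hits = [get_fld(r) for r in search(f"{domain} {title}", num_results=10)]
        return "legitimate" if domain in hits else "phishing"

    # Non-English page, or a possible phishing page on a hijacked server
    hits = [get_fld(r) for r in search(domain, num_results=10)]
    if domain not in hits:
        return "phishing"
    return "legitimate" if hyperlink_similarity(soup, domain) else "phishing"
```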


Fig. 4 Google search results for the PayPal webpage using the search query Domain + Title


Fig. 5 Flowchart of proposed approach

4 Experimental Results

We used an Intel(R) Core(TM) i3-7020U CPU @ 2.30 GHz processor, 4 GB RAM, and a 64-bit Windows operating system for implementing our proposed approach.

4.1 Implementation Details

We use Python as the programming language for implementing our proposed approach. The motivation behind selecting Python is that it provides many built-in libraries and packages. We use the following libraries for the implementation: the 'googlesearch' module for fetching the top 10 search results of the Google search engine; the 'tld' module for extracting the domain and second-level domain (brand name);

the 'urllib.request' module for opening the URL and extracting the title and copyright; and the 'BeautifulSoup' module for extracting all hyperlinks from webpages.

Table 1 Webpage classification

Categories | Time (s)
C1, C2, C4, and C5 | 2.00
C3, C6, and C7 | 2.80

4.2 Classification of Webpages

We classified webpages into the following categories (Table 1 reports the average classification time for each group):
C1: English legitimate webpages whose title and copyright contain the brand_name.
C2: English zero-day phishing webpages whose title and copyright contain the brand_name.
C3: English legitimate webpages whose title and copyright do not contain the brand_name.
C4: English zero-day phishing webpages whose title and copyright do not contain the brand_name.
C5: Non-English zero-day phishing webpages.
C6: Non-English legitimate webpages.
C7: Phishing webpages hosted on a legitimate hijacked server.

4.3 Comparison with Existing Methods

We compared our proposed approach with existing search engine-based approaches on the following parameters.

Language independence (P1): Language independence means the approach is suitable for webpages in all languages (English and non-English).

Capable of detecting phishing webpages hosted on a hijacked server (P2): Numerous earlier search engine-based approaches are compromised by phishing webpages hosted on a compromised server; this parameter indicates whether an approach can detect such webpages.

Image-based phishing detection (P3): Image-based phishing detection means the approach uses webpage images such as the favicon or logo as the search query for the search engine, but extracting the right image is a



time-consuming and complex process, and these approaches fail when the webpage does not contain any image.

Response time (P4): Response time is the average time taken by an approach to announce the status of a webpage (phishing/legitimate).

Table 2 shows a comparison of our proposed approach with existing search engine-based approaches using these four parameters.

Table 2 Comparison with existing methods

Approach | P1 | P2 | P3 | P4
Author [11] | No | No | No | –
Author [12] | Yes | Yes | Yes | 4.31 s
Author [13] | Yes | Yes | Yes | –
Author [14] | Yes | Yes | Yes | –
Author [15] | No | No | No | 1.53 s
Author [16] | Yes | No | No | 2.36 s
Our proposal | Yes | Yes | No | 2.80 s

5 Discussion

Search engine-based methods are capable of detecting zero-day phishing webpages by using the webpage's PageRank. We found that phishing webpages hosted on a compromised server do not use the legitimate hijacked server's resources. For example, suppose abc.com is a compromised server with a good PageRank. An attacker can use this domain to host a Facebook phishing webpage in order to defeat search engine-based methods, but the attacker does not use any of the compromised server's resources. We check for the presence of the brand_name in the title or copyright. The presence of the brand_name indicates the webpage uses server resources, so there is a low probability of it being a phishing webpage hosted on the compromised server. The absence of the brand_name indicates the webpage may be a non-English webpage or a phishing webpage hosted on a legitimate hijacked server. To confirm phishing webpages hosted on the compromised server, we use the hyperlink similarity phase.

6 Conclusion

We found that a phishing webpage hosted on a hijacked server does not use any of the hijacked server's resources, such as the title, copyright, hyperlinks, etc. Therefore, we have


used the presence of the brand name in the copyright and title, together with a hyperlink similarity match phase, to detect phishing webpages hosted on a hijacked server. The proposed method is efficient in terms of time and adaptable for detecting phishing webpages hosted on a legitimate hijacked server as well as zero-day phishing attacks. It can be used for real-time phishing detection, as it does not depend on any database of images, logos, etc. A limitation of this work is that it fails if the attacker uses the hijacked server's resources for the phishing webpage hosted on that server.

References

1. Ollmann G (2004) The phishing guide—understanding and preventing phishing attacks. NGS Software Insight Security Research
2. APWG phishing activity trends report, third quarter 2019. https://docs.apwg.org/reports/apwg_trends_report_q3_2019.pdf
3. Dhamija R, Tygar JD, Hearst M (2006) Why phishing works. In: Proceedings of the SIGCHI conference on human factors in computing systems 2006. ACM, pp 581–590
4. Safe Browsing API—Google Developers. https://developers.google.com/safe-browsing/
5. Prakash P, Kumar M, Kompella RR, Gupta M (2010) PhishNet: predictive blacklisting to detect phishing attacks. In: Proceedings of INFOCOM 2010. IEEE, pp 1–5
6. Ahmed AA, Abdullah NA (2016) Real time detection of phishing websites. In: IEEE 7th annual information technology, electronics and mobile communication conference (IEMCON) 2016. IEEE, pp 1–6
7. Mohammad RM, Thabtah F, McCluskey L (2014) Intelligent rule-based phishing website classification. IET Inf Secur 8(3):153–160
8. Jain AK, Gupta BB (2019) A machine learning based approach for phishing detection using hyperlinks information. J Ambient Intell Humanized Comput 10(5):2015–2028
9. Tupsamudre H, Singh AK, Lodha S (2019) Everything is in the name—a URL based approach for phishing detection. In: International symposium on cyber security cryptography and machine learning 2019. Springer, Berlin, pp 231–248
10. Ding Y, Luktarhan N, Li K, Slamu W (2019) A keyword-based combination approach for detecting phishing webpages. Comput Secur 84:256–275 (Elsevier)
11. Zhang Y, Hong JI, Cranor LF (2007) Cantina: a content-based approach to detecting phishing web sites. In: Proceedings of the 16th international conference on World Wide Web 2007. ACM, pp 639–648
12. Dunlop M, Groat S, Shelly D (2010) GoldPhish: using images for content-based phishing analysis. In: Fifth international conference on internet monitoring and protection 2010. IEEE, pp 123–128
13. Chiew KL, Chang EH, Tiong WK (2015) Utilisation of website logo for phishing detection. Comput Secur 54:16–26 (Elsevier)
14. Chiew KL, Choo JSF, Sze SN, Yong KS (2018) Leverage website favicon to detect phishing websites. In: Security and communication networks
15. Varshney G, Misra M, Atrey PK (2016) A phish detector using lightweight search features. Comput Secur 62:213–228 (Elsevier)
16. Jain AK, Gupta BB (2018) Two-level authentication approach to protect from phishing attacks in real time. J Ambient Intell Humanized Comput 9(6):1783–1796
17. PhishTank [Online]. Available at https://www.phishtank.com/
18. Alexa [Online]. Available at https://www.alexa.com/
19. Gupta BB, Arachchilage NAG, Psannis KE (2018) Defending against phishing attacks: taxonomy of methods, current issues and future directions. Telecommun Syst 67(2):247–267

Chapter 31

FFT-Based Zero-Bit Watermarking for Facial Recognition and Its Security

Ankita Dwivedi, Madhuri Yadav, and Ankit Kumar

1 Introduction

Biometric traits are considered stronger security features than traditional parameters based on passwords, keys, etc. Those orthodox security parameters contain weaknesses that are exposed by proven threats such as brute-force attacks. Biometric traits include the unique features of a user's fingerprint, palm print, iris scan, and face image. The storage of biometric data in a cloud database is a major concern: an attacker can steal a user's biometric data from the server and exploit its features to authenticate illegally [1]. Securing biometric data is therefore essential to avoid false authentication and prevent exploitation of its features. This paper proposes a new security design based on the concept of zero-bit watermarking, which ensures the security of the host biometric features. The concept of zero-bit watermarking, together with techniques such as FFT and SVD, is introduced in this paper. In the proposed technique, rather than actually embedding the user's unique ID (image) in the host data, the bits of the unique ID are XORed with the least significant bits of the unique singular features of the host facial image. The user's identity is thus inculcated as watermark bits in the user's biometric data without disturbing the required equilibrium. The implementation is done in the transform domain, where the host image is converted into transform coefficients.



The best coefficient, one that is less informative and does not reflect any distortion onto the host image, is then found. The coefficients of the host data are formed using FFT techniques [2]. The objective of the method is to generate the resulting master key from the watermarked biometric data by using Arnold's cat map, so that it is insensitive to various image processing attacks. Sharing and storing the resultant master share over a distributed network then does not affect its confidentiality.

Earlier research on zero-bit watermarking utilizes some robust implementations. Xi et al. [3] introduced a dual zero-watermarking method for 2-D vector maps that relies on a Delaunay triangle mesh and SVD and focuses on achieving good results in terms of robustness. Singh and Bhatnagar [4] recommended a robust blind watermarking framework based on log-polar mapping and SVD, making productive attempts at resiliency against attacks. Thanki et al. [5] gave a concept to secure biometric and biomedical images using a watermarking technique based on the sparse domain, which uses the theory of the compressive sensing (CS) hypothesis and the sparsity characteristics of the wavelet transform; the objective is to secure biomedical and biometric images and establish high transparency while maintaining robustness. Singh et al. [6] suggested a secure zero-bit watermarking design based on the DWT-SVD technique for securing medical images. Another approach, by Douglas et al. [7], outlines steganography techniques used to protect biometric data; this work is intended to protect the ownership of digital data.

The main contribution of this paper is the implementation of zero-bit watermarking in biometric facial data using the FFT and SVD techniques. The implementation is performed over the face image of a user, which is taken as the host image. First, the FFT technique is introduced and used to obtain the frequency coefficients of the host facial image, which are considered among the best coefficients for watermarking; the best coefficient is one that contains a medium level of energy. FFT is used to find the best segment, i.e., the best frequency region of the host face image, in which to perform zero-watermarking. Further, the SVD technique is applied over the selected FFT segment, and singular values, considered unique and stable, are obtained. The least significant bits of these singular values are integrated with the watermark bits (the user's unique ID). The master share is fabricated from the watermarked biometric image using Arnold's cat map. Then, different attacks like contrast enhancement, Gaussian and median filtering, histogram equalization, and JPEG compression are enforced over the master share to test its robustness. The embedded watermark is not only recognized as the identity of an individual but also secures the authentication of the host image. Zero-bit watermarking is also robust in the sense that the extraction process of the watermark ID does not introduce any distortion to the host data.

The rest of the paper is organized as follows: Sect. 2 discusses the techniques used in the proposed zero-bit watermarking. Section 3 outlines the proposed algorithms. Section 4 illustrates the experimental results, and finally, Sect. 5 presents the conclusion of the paper.


2 Preliminaries

This part of the paper defines the basic methods that are required to implement the proposed watermarking scheme.

2.1 Method of Fast Fourier Transformation (FFT)

The Fast Fourier Transform decomposes the host image into sine and cosine components. When FFT is applied to an image, it maps the image from its spatial domain to its equivalent frequency domain [8]. The frequency domain of the image is described by the frequency content of its information. FFT coefficients of fixed size are generated from the preprocessed host image and range from low to high frequency. The low-frequency coefficient bands carry a high magnitude of image information, and the high-frequency bands also contain essential information. Neither type of coefficient can be used for the watermark embedding process, since manipulation in these bands generates visible noise in the image. Therefore, the middle-band FFT coefficients, which can tolerate manipulation, are selected for the watermark embedding process. The FFT is given by

x[m] = \sum_{k=0}^{n-1} x[k] \, e^{-j 2\pi mk/n}    (1)

Here, x[m] is the frequency-domain representation of the selected host image. The equation is equivalent to the discrete Fourier transform (DFT), but the FFT is a fast algorithm for computing the DFT: it reduces the number of computations required to find the coefficients, giving a time complexity of O(N log N). The FFT decomposes an image into real and imaginary components that represent the frequency coefficients. The result of the FFT is a set of complex numbers with a large dynamic range, stored as floats; to display the large coefficient values, they are scaled into a lower range. The FFT decomposes the image recursively based on the frequencies of the image's information, where the frequency of an image corresponds to the brightness variation of its pixels. The inverse FFT is given by

x[k] = \frac{1}{n} \sum_{m=0}^{n-1} x[m] \, e^{j 2\pi mk/n}    (2)

Here, 1/n is the normalization term in the inverse transformation. The inverse FFT transforms the fixed-size frequency coefficients of an image back into the spatial domain.
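As a concrete illustration of this step, the following minimal Python/NumPy sketch computes the 2-D FFT of a grayscale host image and selects a middle-frequency band; the band radii and image size are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def middle_band_mask(shape, r_low=0.25, r_high=0.5):
    """Boolean mask keeping an annulus of middle frequencies around the
    centred spectrum; r_low/r_high are illustrative fractions of the
    half-diagonal, not values prescribed by the paper."""
    rows, cols = shape
    cy, cx = rows / 2.0, cols / 2.0
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - cy, x - cx)
    r_max = np.hypot(cy, cx)
    return (dist >= r_low * r_max) & (dist <= r_high * r_max)

host = np.random.rand(256, 256)          # stand-in for a grayscale face image

# Eq. (1): map the image to the frequency domain; shift low frequencies
# to the centre so the band selection is a simple annulus.
spectrum = np.fft.fftshift(np.fft.fft2(host))

# Keep only the middle-frequency coefficients for watermarking.
middle_band = np.where(middle_band_mask(spectrum.shape), spectrum, 0)

# Eq. (2): the inverse FFT maps coefficients back to the spatial domain.
reconstructed = np.fft.ifft2(np.fft.ifftshift(middle_band)).real
```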


2.2 Singular Value Decomposition (SVD)

The selected FFT middle-frequency bands are decomposed into singular-value matrices using SVD. The singular value decomposition technique [9] disintegrates the matrix of an image into three fixed-size matrices. The singular-value matrix generated by SVD is used to perform zero-bit watermarking, since a small manipulation in the least significant bits of the singular values does not alter the image's quality; these bits are among the most stable and are largely unaffected by several image processing attacks [10]. Consider 'A' as the matrix of a host image of dimension m × n on which the SVD operation is performed. It results in a singular matrix 'S' whose stable elements lie on its diagonal. The dimensionality of the singular matrix is fixed, and it may further be decomposed into sub-matrices. The other two SVD matrices are the orthogonal matrices U and V, which contain the remaining, non-stable information of the image:

A = U S V^T    (3)

A = (U_1, U_2, U_3, \ldots, U_m) \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix} (V_1, V_2, V_3, \ldots, V_m)^T    (4)

S_n = (\mu_1, \mu_2, \mu_3, \ldots, \mu_m)    (5)

\mu_1 > \mu_2 > \mu_3 > \cdots > \mu_\alpha \ge \mu_{\alpha+1} = \mu_{\alpha+2} = \cdots = \mu_m = 0    (6)

Here, \mu_i denotes the i-th singular value of A, ordered by decreasing magnitude.
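A short NumPy sketch of how the leading singular value of each coefficient block might be collected (the block size and the random stand-in data are assumptions made for the example):

```python
import numpy as np

def leading_singular_values(coeffs, block=8):
    """Largest singular value of each non-overlapping block x block tile."""
    rows, cols = coeffs.shape
    tops = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            # Singular values come back sorted in decreasing order (cf. Eq. 6).
            s = np.linalg.svd(coeffs[r:r + block, c:c + block],
                              compute_uv=False)
            tops.append(s[0])
    return np.array(tops)

coeffs = np.abs(np.fft.fft2(np.random.rand(256, 256)))  # stand-in middle band
M = leading_singular_values(coeffs)                     # array 'M' of Sect. 3
```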

3 Proposed Scheme

In the proposed methodology, distinctive characteristics are extracted from the host biometric image by applying the Fast Fourier Transform (FFT), in which the image is factorized into non-overlapping blocks of frequency coefficients of fixed size n × n. Such frequency blocks contain pixel information of the biometric image. In this experiment, the middle FFT frequency coefficients are selected as a suitable region for the embedding process. The selected frequency coefficients then undergo SVD decomposition, in which the singular values of these frequency blocks are dissolved into matrices; these are always distinctive and so contribute to the unique feature generation. A unique matrix is always generated through equal-size blocks of singular values, and these unique, stable values are then optimally integrated with the bits of the user's unique ID to generate a secure master share [11].


Fig. 1 a Sample of host image, b sample of watermark, c the secret key

Figure 1 shows a sample host image of a face scan, a watermark image, and the corresponding master share used in the proposed experiment. The inculcation process of the proposed algorithm is explained in the following steps.

3.1 Algorithm 1: Embedding Process

1. The stored face scan images are taken, and the middle-frequency components are obtained using FFT:

[MF] = FFT(P)    (7)

2. These middle-frequency components are divided into blocks B_i of size j × j, where i = 1, 2, 3, ..., N and N is the total number of blocks required for embedding the watermark consisting of the unique ID.

3. Singular value decomposition is now applied in order to get the most stable values in each block:

[U_i, D_i, V_i] = SVD(B_i)    (8)

D_i = diagonal(\sigma_{i1}, \sigma_{i2}, \sigma_{i3}, \ldots, \sigma_{iq}, 0, \ldots, 0)    (9)

where \sigma_{i1} \ge \sigma_{i2} \ge \cdots \ge \sigma_{iq} are the singular values of block B_i.

4. The topmost value of each block B_i, i.e., \sigma_{i1}, is assigned to a new array M:

M(i) = D_i(\sigma_{i1})    (10)

5. Then, for every two consecutive values of array M, compare them in a row:

If the first element is less than the second element, set the bit to 0; else, set the bit to 1.

The newly produced bits (1s and 0s) are assigned to another array P, which has the same dimensions as M.

6. The watermark image, consisting of the ID of the face, is encrypted using Arnold scrambling.

7. A logical XOR operation is applied between the encrypted watermark and the obtained array P:

z = XOR(P, watermark)    (11)

8. The Arnold cat map [17] encryption technique is executed again on z, which gives the secret unique key K (master share):

K = encryption(z)    (12)
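The following Python sketch ties Steps 1–8 together. It is a minimal illustration rather than the authors' implementation: the block size, image and watermark dimensions, and the single-iteration Arnold cat map are all assumptions made for the example.

```python
import numpy as np

def arnold_cat_map(img, iterations=1, inverse=False):
    """Arnold cat map on a square array: (x, y) -> (x + y, x + 2y) mod n.
    With inverse=True the permutation is undone."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                if inverse:
                    nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
                else:
                    nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def bit_pattern(host, n_bits, block=8):
    """Steps 1-5: FFT, per-block SVD, pairwise comparison of the top
    singular values -> binary array P. (For brevity the whole shifted
    spectrum is used here; the paper restricts this to the middle band.)"""
    mf = np.abs(np.fft.fftshift(np.fft.fft2(host)))          # step 1
    tops = []
    for r in range(0, mf.shape[0] - block + 1, block):       # step 2
        for c in range(0, mf.shape[1] - block + 1, block):
            s = np.linalg.svd(mf[r:r + block, c:c + block],
                              compute_uv=False)              # step 3
            tops.append(s[0])                                # step 4
    tops = np.asarray(tops)
    p = (tops[0:-1:2] >= tops[1::2]).astype(np.uint8)        # step 5
    return p[:n_bits]

def embed(host, watermark):
    """Steps 6-8: scramble the watermark, XOR with P, scramble again."""
    n = watermark.shape[0]
    p = bit_pattern(host, n * n).reshape(n, n)
    w = arnold_cat_map(watermark)                            # step 6
    z = np.bitwise_xor(p, w)                                 # step 7, Eq. (11)
    return arnold_cat_map(z)                                 # step 8, Eq. (12)

host = np.random.rand(256, 256)                       # stands in for a face scan
wm = (np.random.rand(16, 16) > 0.5).astype(np.uint8)  # 16 x 16 binary ID
K = embed(host, wm)                                   # master share
```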

To fetch the original person's ID or details, the distinctive characteristics are merged with the resulting master share. The extraction process is explained step-wise as follows:

3.2 Algorithm 2: Extraction Process

1. The stored face scan images are taken, and the middle-frequency components are obtained using FFT:

[MF] = FFT(P)    (13)

2. These middle-frequency components are divided into blocks B_i of size j × j, where i = 1, 2, 3, ..., N and N is the total number of blocks required for embedding the watermark consisting of the unique ID.

3. Singular value decomposition is now applied in order to get the most stable values in each block:

[U_i, D_i, V_i] = SVD(B_i)    (14)

D_i = diagonal(\sigma_{i1}, \sigma_{i2}, \sigma_{i3}, \ldots, \sigma_{iq}, 0, \ldots, 0)    (15)

where \sigma_{i1} \ge \sigma_{i2} \ge \cdots \ge \sigma_{iq} are the singular values of block B_i.


4. The topmost value of each block B_i, i.e., \sigma_{i1}, is assigned to a new array M:

M(i) = D_i(\sigma_{i1})    (16)

5. Then, for every two consecutive values of array M, compare them in a row: if the first element is less than the second element, set the bit to 0; else, set the bit to 1. The newly produced bits (1s and 0s) are assigned to another array P, which has the same dimensions as M.

6. The secret key obtained during encryption, i.e., the master share K of every image, is passed through lossless decryption using the inverse of the Arnold cat map technique [17]:

z = decryption(K)    (17)

7. A logical XOR operation is applied between z and the newly generated array P to extract the watermark consisting of the person's unique identity:

Watermark = XOR(P, z)    (18)
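For completeness, a matching extraction sketch under the same assumptions; it reuses bit_pattern, arnold_cat_map, and the host/wm/K variables from the embedding example above, and the final inverse scramble recovers the plain watermark ID.

```python
import numpy as np  # bit_pattern and arnold_cat_map come from the sketch above

def extract(host, K):
    """Algorithm 2: regenerate P from the host image, undo the two
    Arnold scrambles, and XOR to recover the watermark ID."""
    n = K.shape[0]
    p = bit_pattern(host, n * n).reshape(n, n)   # steps 1-5, as in embedding
    z = arnold_cat_map(K, inverse=True)          # step 6, Eq. (17)
    scrambled_id = np.bitwise_xor(p, z)          # step 7, Eq. (18)
    return arnold_cat_map(scrambled_id, inverse=True)

assert np.array_equal(extract(host, K), wm)      # lossless recovery
```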

Figure 2 shows the embedding process of the discussed algorithm, in which the FFT method is used followed by the SVD process. During the embedding process, the least significant bits of the SVD blocks of the host facial image are XORed with the bits of the user's unique ID, generating a watermarked image without any distortion. The watermarked image is encrypted using the Arnold cat map (ACM) method to generate the master share.

Fig. 2 Embedding process of multiple images using zero-bit watermarking. The host face image undergoes a 1-level FFT; the image is split into non-overlapping equal-size blocks of middle-frequency coefficient bands; SVD is applied to the selected FFT coefficients of the host image; and the resulting unique binary pattern is combined with the watermark ID to produce the master share (ACM)


Fig. 3 Flow diagram of the extraction process. The watermarked face image undergoes a 1-level FFT; it is split into non-overlapping equal-size blocks of middle-frequency coefficient bands; SVD is applied to the selected FFT coefficients of the watermarked image; the master share is decrypted by the inverse ACM and XORed with the binary pattern to obtain the encrypted watermark, which is decrypted into the extracted ID

The same procedure is applied for watermark extraction, as depicted in Fig. 3. The quality of the resultant master share is also tested under several attacks, and its correlation coefficient is computed, as shown in the results section. Figure 3 shows the extraction procedure of the proposed algorithm, in which FFT and SVD are again utilized to extract the watermark image.

4 Experimental Results

This section contains the experimental results of the discussed zero-bit watermarking algorithms. The facial biometric images are taken from a standard database, the Kaggle database [12]. All the images are captured in a symmetric environment and hence are useful for this experiment. Some samples of the original images, users' IDs, and the respective master shares are shown in Fig. 4. The host image samples are of equal dimension, and their stable bits participate in zero-bit watermarking together with the bits of the user's ID. The robustness of the watermarked image is also tested under several attacks.


Fig. 4 Samples of original images with their corresponding watermark IDs and the respective master shares generated through the embedding process

4.1 Correlation Coefficient

The normalized form of cross-correlation [13] is a standard parameter to check the similarity between two subjects. In this experiment, the two subjects are the respective master shares, one from the host and the other from the resultant watermarked subject. They are compared on a scale from 0 to 1: an estimated correlation approaching 1 shows that the two master shares are identical, while a value near 0 indicates that the two subjects are divergent and hence that manipulation can be suspected. In the proposed technique, each resulting master share is compared with its host subject to verify the correctness of the algorithm. Figure 5a, b and c show the correlation graphs of the 15th, 25th, and 35th sample master shares.


Fig. 5 Estimation of the uniqueness of the master shares using the correlation coefficient graphs described above. a 15th sample. b 25th sample. c 35th sample

Figure 5 shows the uniqueness among the master shares and hence validates that the image is encrypted [14] properly. The extraction of such unique master shares is also achieved properly with the help of the extraction algorithm. Summarizing the concept, the output watermarked image looks similar to the input host image but is always unique compared to the others.


4.2 Normalized Correlation (NC)

The normalized cross-correlation [14] is discussed numerically here; two input images i and i* are entered into the general equation to check the similarity between them:

NC(i, i^*) = \frac{\sum_{x=1}^{m} \sum_{y=1}^{n} i(x, y) \cdot i^*(x, y)}{\sum_{x=1}^{m} \sum_{y=1}^{n} i(x, y)^2}    (19)

4.3 Bit Error Rate (BER)

Bit error rate (BER) [14] is the estimate of the number of error bits, out of the total number of bits, generated during the whole watermarking process. The general equation of BER is given below, in which the host image 'w' and the attacked image 'w*', both of dimension n × m, are given as input and the error rate is calculated:

BER(w, w^*) = \frac{\sum_{x=1}^{n} \sum_{y=1}^{m} w(x, y) \oplus w^*(x, y)}{n \times m}    (20)

Here, ⊕ is the XOR operation, and x and y are the respective rows and columns.
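A minimal NumPy rendering of Eqs. (19) and (20) for reference; the watermark contents below are placeholders, not data from the experiment.

```python
import numpy as np

def normalized_correlation(i, i_star):
    """NC of Eq. (19): correlation of i and i*, normalized by the energy of i."""
    i = i.astype(float)
    i_star = i_star.astype(float)
    return np.sum(i * i_star) / np.sum(i ** 2)

def bit_error_rate(w, w_star):
    """BER of Eq. (20): fraction of differing bits between binary images."""
    return np.sum(np.bitwise_xor(w, w_star)) / w.size

wm = (np.random.rand(16, 16) > 0.5).astype(np.uint8)   # placeholder watermark
attacked = wm.copy()
attacked[0, 0] ^= 1                                    # flip a single bit
print(normalized_correlation(wm, attacked), bit_error_rate(wm, attacked))
```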

The computed values of NC and BER, obtained from their respective general equations, are first checked without the application of attacks and are shown in Fig. 6, which depicts that the reconstructed watermark image is exactly the same as the host watermark ID.

Fig. 6 Values of NC and BER with and without attack (NC: 1, 0.9989, 0.9985, 0.9895, 0.9693, 0.9418, 0.9567, 0.7118; BER: 0, 0.0167, 0.0216, 0.0606, 0.0118, 0.0253, 0.0124, 0.0401)


Hence, the user's ID is perfectly recovered at the receiver side when no attack is applied, as validated by NC and BER. The application of attacks on a watermarked image can disturb the stability of the hidden watermark ID, which depends on the stability of the values in which the embedding is applied. In this experiment, the singular values of the host facial image, which are stable and can bear attacks, are used for the embedding process. The image processing attacks [15, 16] listed in Fig. 6 are applied to the watermarked facial image, and the hidden watermark is extracted; it is recovered successfully, and its correctness is checked using NC and BER. The average values of NC and BER approach 1 and 0, respectively. Figure 6 depicts the net values of NC and BER over 40 sample images and validates the robustness [18] of the proposed algorithm, proving that the hidden watermark ID remains secure even under attacks. Here, the average value of NC is found to be more than 0.85, and the average value of BER is very low, near 0. The experiment is tested against attacks such as JPEG compression, median filtering, Gaussian filtering, sharpening, histogram equalization, and contrast enhancement. A state-of-the-art comparison is given in Table 1, in which the proposed technique is compared with other recently published techniques.

5 Conclusion

This paper concludes that the proposed methodology successfully performs zero-bit watermarking and generates a secure master share. The SVD and FFT techniques are successfully applied to hide the user's unique ID in the host biometric facial image, and the watermarked image is encrypted using Arnold's cat map. The resultant master share is subjected to severe image processing attacks, and the robustness of the scheme is quantified using the values of NC and BER. Hence, the proposed methodology is significant for real-life security applications such as online attendance, face authentication, and crime investigation.

Table 1 State-of-the-art comparison with existing techniques (test of robustness, values of NC/BER)

| Method | Type of watermarking | No attack | JPEG compression | Median filter | Gaussian filter | Sharpening | Histogram equalization | Contrast enhancement | Blurring |
| Dwivedi et al. [15] | DWT + SVD (iris scan images) | 1/0 | 0.9098/0.0168 | 0.9858/0.0226 | 0.9545/0.0620 | 0.9564/0.0254 | 0.8894/0.0365 | 0.9458/0.0785 | 0.6843/0.0458 |
| Singh and Dutta [6] | DWT + SVD (fundus image) | 1/0 | 0.9103/0.0175 | 0.9845/0.0027 | 0.9322/0.0117 | 0.9018/0.0180 | 0.8556/0.0259 | 0.9315/0.0131 | 0.6876/0.0736 |
| Singh and Dutta [18] | Reversible watermarking | 1/0 | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered |
| Jin [13] | DWT | 1/0 | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered | Unconsidered |
| Dwivedi et al. [19] | DWT + SVD (fingerprint images) | 1/0 | 0.9099/0.0175 | 0.9838/0.0286 | 0.9593/0.0606 | 0.9357/0.0120 | 0.8577/0.0261 | 0.9318/0.0129 | 0.6918/0.0737 |
| Proposed method | FFT + SVD | 1/0 | 0.9989/0.0167 | 0.9985/0.0216 | 0.9895/0.0606 | 0.9693/0.0118 | 0.9418/0.0253 | 0.9567/0.0124 | 0.7118/0.0401 |

Additional standalone NC values reported in the table: 0.978 (proposed method), 0.8121 (Dwivedi et al. [19]), and 0.8372 (Jin [13]).


References

1. Singh A, Dutta MK (2018) Lossless and robust digital watermarking scheme for retinal images. In: 4th international conference on computational intelligence and communication technology, IEEE, pp 1–5
2. Fares K, Amine K, Salah E (2020) A robust blind color image watermarking based on Fourier transform domain. Optik 208:164562
3. Xi X, Zhang X, Liang W, Xin Q, Zhang P (2019) Dual zero-watermarking scheme for two-dimensional vector map based on delaunay triangle mesh and singular value decomposition. Appl Sci 9(4):642
4. Singh SP, Bhatnagar G (2020) A reference based secure and robust zero watermarking system. In: Chaudhuri B, Nakagawa M, Khanna P, Kumar S (eds) Proceedings of 3rd international conference on computer vision and image processing, vol 1022. Springer, Singapore
5. Thanki R, Borisagar K (2016) Biometric watermarking technique based on CS theory and fast discrete curvelet transform for face and fingerprint protection. In: Thampi S, Bandyopadhyay S, Krishnan S, Li KC, Mosin S, Ma M (eds) Advances in signal processing and intelligent recognition systems, advances in intelligent systems and computing, vol 425. Springer, Cham, pp 133–144
6. Singh A, Dutta MK (2017) A robust zero-watermarking scheme for tele-ophthalmological applications. J King Saud Univ Comput Inform Sci
7. Douglas M, Bailey K, Leeney M, Curran K (2018) An overview of steganography techniques applied to the protection of biometric data. Multimedia Tools Appl 77(13):17333–17373
8. Sathiyamurthi P, Ramakrishnan S (2020) Speech encryption algorithm using FFT and 3D-Lorenz–logistic chaotic map. Multimedia Tools Appl 1–19
9. Ahmadi SBB, Zhang G, Wei S (2020) Robust and hybrid SVD-based image watermarking schemes. Multimedia Tools Appl 79(1):1075–1117
10. Seenivasagam V, Velumani R (2013) A QR code based zero watermarking scheme for authentication of medical images in teleradiology cloud. Comput Math Methods Med (Hindawi) 516465
11. Tang X, Wang J, Zhang C, Zhu H, Fu Y (2010) A fast and low complexity zero-watermarking based on average sub image in multiwavelet domain. In: 2nd International Conference on Future Computer and Communication (ICFCC), vol 2, IEEE, pp 178–182
12. Park KR, Jeong DS, Kang BJ, Lee EC (2007) A study on iris feature watermarking on face data. In: Beliczynski B, Dzielinski A, Iwanowski M, Ribeiro B (eds) Adaptive and natural computing algorithms, ICANNGA 2007, vol 4432. Lecture notes in computer science. Springer, Berlin, Heidelberg, pp 415–423
13. Jin W (2010) A wavelet-based method of zero-watermark utilizing visual cryptography. In: International conference on multimedia technology, IEEE, Ningbo, China, pp 1–4
14. Tsaia HH, Lai YS, Lob SC (2013) A zero-watermark scheme with geometrical invariants using SVM and PSO against geometrical attacks for image protection. J Syst Softw (Elsevier) 86(2):335–348
15. Dwivedi A, Kumar A, Dutta MK, Burget R, Myska V (2019) An efficient and robust zero-bit watermarking technique for biometric image protection. In: 42nd international conference on Telecommunications and Signal Processing (TSP), IEEE, pp 236–240
16. Singh A, Raghuvanshi N, Dutta MK, Burget R, Masek J (2016) An SVD based zero watermarking scheme for authentication of medical images for tele-medicine applications. In: 39th international conference on Telecommunications and Signal Processing (TSP), IEEE, Vienna, pp 511–514
17. Mehta G, Dutta MK, Karasek J, Kim PS (2013) An efficient and lossless fingerprint encryption algorithm using henon map and arnold transformation. In: International conference on control, communication and computing, IEEE, New York, USA, pp 1–6
18. Singh A, Dutta MK (2017) A reversible data hiding scheme for efficient management of tele-ophthalmological data. Int J E-Health Med Commun 8(3):38–54
19. Dwivedi A, Singh A, Dutta MK (2019) Wavelet-SVD based zero-bit watermarking for securing biometric images. In: 4th International Conference on Information Systems and Computer Networks (ISCON), IEEE, Mathura, India, pp 467–472

Chapter 32

Comparative Analysis of Various Simulation Tools Used in a Cloud Environment for Task-Resource Mapping

Harvinder Singh, Sanjay Tyagi, and Pardeep Kumar

1 Introduction

A distributed system is a model in which networked computers communicate with each other by passing messages. Distributed computing is a group of independent computer systems that solve parts of a complex problem separately and combine the results. Software systems are shared among the customers, and all the systems work toward a common goal. Such a system consists of several client machines with one or more distributed servers. Cloud computing is a form of distributed computing: it is internet-based computing in which different applications, servers, and storage are delivered to organizations or users on a pay-per-use basis, i.e., access to the resources is provided, but payment is made only for the resources actually used. Cloud computing uses a network of a large group of servers with specialized connections. For running cloud consumers' applications, cloud simulators are required. There are various benefits to simulating an application instead of practically running it in a real cloud environment. Simulation opens the possibility of evaluating the suppositions users have made regarding that application: if users suppose that an application will run in a particular way, with prescribed parameters, then by simulating they can check whether their hypothesis was correct.



Users can also check whether the application runs correctly and according to the desired criterion. So, instead of running it in the actual environment, simulators give users a test environment in which they can run their applications and check whether the desired outcome criteria are met.

2 Related Work

Very in-depth research has been performed on cloud simulators; this paper focuses on open-source simulators, which can be easily accessed. Ostermann et al. [1] introduced the GroudSim tool for the simulation of computational grids and clouds, and Lakshminarayanan and Ramalingam [2] helped in designing and simulating different arrangements of distributed resources in grid computing using the GridSim toolbox. Designing and simulation in the cloud computing domain at the application level was introduced by Buyya et al. [3], along with basic scheduling algorithms such as space-shared and time-shared, whose comparative study they performed. From a simulation viewpoint, reviews of mathematical models and testbeds in cloud computing have been complemented with a focus on empowering researchers to find the most suitable model-design approach and simulation implementation [4]. The introduction of a new cloud asset administration working model and its simulation-based evaluation centres on applications under dynamic administration composition [5]. Modelling and testing combined with intelligent sleep and power-scaling algorithms in energy-aware data centre networks were proposed by Louis et al. [6]. The authors of [7–10] compared cloud simulators among client groups and data centres based on the present setup and configuration. Both CloudAnalyst and CloudSim are based on Armbrust et al. [11] and Buyya et al. [3], which treat a cloud data centre as a vast pool, or puddle, of assets (resources), with workloads at the application level. Zhao et al. [12] also proposed GreenCloud, a packet-level, energy-aware simulation environment for cloud data centres. Nunez et al. [13] introduced iCanCloud, a recently developed cloud infrastructure simulator written in C++, and compared its performance and working with CloudSim. Tian et al. [14] proposed a new lightweight threaded simulation tool, CloudSched, for scheduling the lifecycle of VMs in cloud data centres.

2.1 Comparative Guidelines for Cloud Simulators

The parameters on which cloud simulators differ were identified from the literature and are recorded below:

1. Platform: Simulators are developed on a platform with its own specific features; for example, CloudSim, Network CloudSim, and Cloud Analyst are built on the Java platform. Hence, these simulators can work on any system having a Java virtual machine installed on it.
2. Availability: The availability of the simulators refers to the accessibility of the cloud simulators. The simulators reviewed in this paper are free of cost and openly accessible.
3. Programming Language: Cloud simulators like iCanCloud, CloudSched, GreenCloud, etc. are developed using different platform-dependent programming languages such as Java, C++, and OTcl.
4. Cost Modelling: The services of the cloud are provided to cloud consumers on a pay-per-use basis, and cost modelling represents this characteristic of the cloud simulator. This component, if present in the simulation tool, computes the total cost to be paid by the cloud consumer for the services used.
5. Graphical User Interface Support: GUI support stands for graphical user interface support for providing the input and output data to/from the simulator. Cloud Analyst, iCanCloud, and CloudSched possess GUI support both for providing input to and getting output from these cloud simulators.
6. Communication Model: The communication model refers to the way of correspondence between the different functional units of the cloud simulators. Message passing is the mechanism by which the various elements of a simulator communicate with each other.
7. Energy Modelling: Nowadays, stress is laid on minimizing the utilization of energy. Energy modelling is the module present in cloud simulators to monitor and analyse the energy consumption of various resource allocation techniques.
8. Federation Policy: Cloud simulators use a federation policy to execute applications submitted by cloud consumers upon diversified clouds.
9. SLA Support: SLA support stands for service level agreement support and assures whether the simulator meets all the requirements of the cloud consumers for which it was developed.

The following sections of this research paper present a comprehensive investigation of cloud simulators based on the above-stated parameters.

3 Cloud Environment

The cloud is a collection of heterogeneous resources. Due to its large size and scalable nature, a real cloud environment is very complex to implement. If there are any bottlenecks or limitations in user-developed algorithms, they can be found through simulation. Building a whole real cloud environment is very costly and time-consuming, so simulation helps cloud consumers find the shortcomings within a limited period. Using simulators, cloud consumers can run their proposed policies on different workloads, submit various kinds of tasks, build different virtual machines on which to run the users' tasks, and develop their mappings according to the users' proposed algorithms.


3.1 Cloud Simulators

In cloud computing, a simulation tool helps cloud consumers by providing a repeatable and controllable environment for carrying out a full range of experiments. Moreover, if all the experiments and testing were done on a real cloud, the probability of losing critical data and information would be high. So, simulators are required that mimic the behaviour of a real cloud environment. With the help of a simulator, a cloud consumer can test whatever scheduling or load balancing technique he wants to implement in the cloud environment. A simulator is needed because the actual design of a real cloud environment is a very tedious and time-consuming task.

Simulators are advantageous both for cloud consumers and cloud providers. Cloud providers use simulators to test their resources: what should happen if the load increases, and whether their resources can handle the users' requests or not. In other words, simulators help service providers make decisions about when to scale up and scale down the resources allocated to the cloud consumers based on their demand. From the cloud provider's side, a simulator allows the evaluation of different resource-usage scenarios under varying load and price distributions, thus optimizing resource usage and focusing on raising profits. Cloud consumers, instead, aim to maximize resource utilization and minimize makespan time; they implement their customized scheduling policies for achieving these objectives. Simulators enable cloud consumers to devise new scheduling policies and test them against numerous scheduling criteria. Cloud consumers and providers can both check their services in a repeatable, controllable environment free of cost. Once the scheduling policies implemented and tested on the simulator give results according to the desires of the cloud consumer, they can be deployed in a real cloud environment.

Cloud Analyst

Cloud Analyst is a GUI-based cloud simulator that enables cloud consumers to set the input configurations graphically. It also provides graphical output of the various scheduling or load balancing policies [15] implemented by the cloud consumers. In Cloud Analyst, different deployment compositions can be set up to test the behaviour of substantial-scale applications submitted by the cloud consumer. The architecture of Cloud Analyst is shown in Fig. 1 [16]. Due to its GUI support, Cloud Analyst is very easy to use, and it can mimic the real cloud environment with a vast extent of composition and adaptability.

Fig. 1 Architecture of Cloud Analyst


Fig. 2 Architecture of CloudSched

CloudSched

The architecture of CloudSched, shown in Fig. 2 [14], displays multiple layers: the user interface, CloudSched, and the cloud resources. The user interacts with the simulator through the topmost layer, i.e., the user interface layer, where the user can choose the types of resources he needs and request the number of instances required for each resource. The bottom-most layer, i.e., the cloud resource layer, consists of the actual resources that serve the cloud consumers' requests. Different virtual machines with varied configurations are selected from the available hosts in a data centre based on the requirements of the cloud consumer. The middle layer consists of different scheduling algorithms that are used for the efficient and effective mapping of the requests submitted by the cloud consumers onto the idle cloud resources available for execution.

CloudSim

CloudSim is a toolkit that consists of a library of Java classes for the simulation of cloud computing scenarios. It enables cloud consumers to model and simulate large-scale cloud infrastructure, such as a data centre, on a single computing node. Moreover, it implements the concept of virtualization and aids in the creation and management of multiple, independent physical machines, virtual machines, and cloudlets. Whatever scenario a cloud consumer wants to build and test can be created on a single computing node with the help of this cloud simulator [17, 18]. The layered architecture of CloudSim is shown in Fig. 3 [19]. CloudSim gives an expandable image of the real cloud environment, which facilitates the analysis of new upcoming cloud applications.


Fig. 3 Architecture of CloudSim

It provides basic classes for creating data centres [20], virtual machines, applications, users, computational resources, and various policies. Cloud consumers can extend or modify these classes to create their own scheduling strategies or scenarios.

GreenCloud

GreenCloud is a cloud simulator developed to compute the performance of cloud applications submitted by the cloud consumers. It is an extension of the popular NS2 network simulator. Figure 4 [12] displays the architecture of the GreenCloud simulator. GreenCloud consists of multiple layers: the core network layer, the aggregation network layer, and the access layer. Execution of the various assignments submitted by the cloud consumers is done at the access layer. Different types of network hardware devices are used at the aggregation network layer to route the requests of the cloud consumers to the allocated servers. Services provided to cloud consumers are distinguished based on workload in the core layer of GreenCloud.

iCanCloud

iCanCloud enables cloud consumers to simulate large-scale cloud applications deployed with multiple configurations.


Fig. 4 Architecture of GreenCloud

Moreover, the usage of iCanCloud can be escalated by adding advanced segments into its archive. This cloud simulator is mainly used to compute the cost incurred in running the various applications submitted by cloud consumers. It is also used to represent the degree of dependence between different parameters, such as performance and cost, in executing a consumer's task. Figure 5 [13] shows the architecture of the iCanCloud simulator. iCanCloud consists of four layers, namely the VM repository, the application repository, the cloud hypervisor, and the cloud system. The top layer is the cloud system, which comprises the architecture of the cloud system and is responsible for the deployment of VMs. The bottom layer, i.e., the VM repository, houses the hardware models and configuration models of the CPU, memory, storage, and network. The middle layers, i.e., the application repository and the cloud hypervisor, provide an interface between the infrastructure of the cloud environment and the cloud consumers' applications.


Fig. 5 Architecture of iCanCloud

Network CloudSim

Traditional CloudSim does not support all expansions of a data centre. This limitation is overcome by Network CloudSim, which is developed by extending the classes written in CloudSim for data centre configurations and adding more features; in other words, the CloudSim simulator is expanded and named Network CloudSim. Figure 6 [12] displays the architecture of the Network CloudSim simulator. Network CloudSim consists of two layers, i.e., CloudSim and user code. The bottom layer has the same functionality as the CloudSim simulator, and the top layer, i.e., the user code, consists of the simulation specification and scheduling policy configuration that enable cloud consumers to implement their customized policies.

4 Comparative Analysis of Various Variants of CloudSim on Different Parameters

This section presents a comparative analysis of various cloud simulators, highlighting their effectiveness and deficiencies. Table 1 summarizes the comparative study of these tools in terms of characteristics and limitations, platform, programming language, and availability.


Fig. 6 Architecture of Network CloudSim

4.1 Comparative Discussion

Cloud services refer to the assistance provided to cloud consumers for accomplishing their tasks. Nowadays, the tasks submitted by consumers require such huge infrastructure and resources that implementing them is beyond the boundaries of traditional grid simulators. Thus, new simulators are required to actualize the consumers' tasks and provide them services. Throughout this paper, an overview of cloud simulation tools is presented. The principal issue for analysts is to pick the simulation tool that fits their applications best, although some cloud simulators are intended for use only with particular kinds of cloud applications. The different simulators are compared on the basis of various parameters like platform, availability, programming language, etc. Table 1 displays the comparative analysis of various variants of cloud simulators on different parameters.

Table 1 Comparative analysis of various variants of CloudSim on different parameters

| Parameter | CloudSim | Cloud Analyst | GreenCloud | Network CloudSim | iCanCloud | CloudSched |
| Platform | GridSim | CloudSim | NS-2 | CloudSim | SIMCAN | - |
| Availability | Open source | Open source | Open source | Open source | Open source | Open source |
| Programming language | Java | Java | C++, OTcl | Java | C++ | Java |
| Cost modelling | Yes | Yes | No | Yes | Yes | Yes |
| GUI support | No | Yes | No | No | Yes | Yes |
| Communication model | Restricted | Restricted | Comprehensive | Comprehensive | Comprehensive | Restricted |
| Energy modelling | Yes | Yes | Yes | Yes | No | Yes |
| Federation policy | Yes | Yes | No | Yes | No | No |
| SLA support | No | No | Yes | No | No | No |
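To show how the comparison in Table 1 can be used programmatically, the sketch below encodes a few of its feature rows as a Python dictionary and filters simulators by required features; the feature values are transcribed from the table, while the structure and the function itself are our own illustration.

```python
# Feature matrix transcribed from Table 1 (Yes/No simplified to booleans).
SIMULATORS = {
    "CloudSim":         {"gui": False, "energy": True,  "cost": True,  "sla": False},
    "Cloud Analyst":    {"gui": True,  "energy": True,  "cost": True,  "sla": False},
    "GreenCloud":       {"gui": False, "energy": True,  "cost": False, "sla": True},
    "Network CloudSim": {"gui": False, "energy": True,  "cost": True,  "sla": False},
    "iCanCloud":        {"gui": True,  "energy": False, "cost": True,  "sla": False},
    "CloudSched":       {"gui": True,  "energy": True,  "cost": True,  "sla": False},
}

def matching(**required):
    """Return simulators whose features satisfy all required values."""
    return [name for name, feats in SIMULATORS.items()
            if all(feats.get(k) == v for k, v in required.items())]

# e.g. simulators offering both a GUI and energy modelling:
print(matching(gui=True, energy=True))   # ['Cloud Analyst', 'CloudSched']
```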


In light of the most recent perceptions, it is seen that inflexibility is the major concern of the various cloud simulation tools. Dynamic change in cloud applications has become a noteworthy necessity, yet most of the simulators are developed for fixed cloud applications only. Along these lines, it is important to assess provisioning strategies that can deal with the dynamic applications that need to be reproduced. In this research paper, we mainly examine six cloud simulators, namely Cloud Analyst, CloudSched, CloudSim, GreenCloud, iCanCloud, and Network CloudSim. The different modules in the cloud computing architecture form various situations of a cloud data centre for simulation. A comparative study of these simulators is provided through their architectures, simulation processes, modelling of various elements, outputs, and performance metrics. The complex networks and large network graphics make the simulators important for research, and a combination of these simulation tools can be used to fulfil optimization objectives such as load balance and energy efficiency. Some challenges for cloud simulators, identified from the comparative analysis based on the different parameters shown in Table 1, are:

1. Few tools are present that can model all cloud layers (IaaS, PaaS, SaaS).
2. When new algorithms and new policies are added, the designed modules of the simulator should guarantee that new modules can be effortlessly included.
3. The GUIs and outputs of the simulators must be user-friendly, and the inputs and outputs must be savable for later use.
4. Taking the user into consideration is the main task; presently, the existing simulators do not do so. For more realistic situations, various priority policies can be made for users for certain types of VMs.
5. The design of the various data centres existing in the actual world must be handled effectively by the simulators, but this still needs improvement.

5 Conclusion and Future Scope

Measuring the execution of scheduling and allocation approaches in a true cloud environment for different applications and service models under various conditions is a difficult issue to handle. This paper outlines the comparison among various cloud simulators based on different parameters like platform, availability, programming language, GUI support, SLA support, etc. The detailed examination of the literature on cloud simulators established that there is a need for improvement regarding managing changes in cloud applications at runtime. The choice of a cloud simulator largely depends on the area of application where it is to be used. For experimentation on large data centres consisting of a huge number of hosts and VMs, the CloudSim simulator is recommended. In future, the CloudSim simulator can be used to model different scheduling and load balancing techniques proposed by cloud consumers, as it has turned out to be the most effective and efficient simulator in contrast to the other simulators discussed in this research paper.


References

1. Ostermann S, Plankensteiner K, Prodan R, Fahringer T (2010) GroudSim: an event-based simulation framework for computational grids and clouds. In: European conference on parallel processing, LNCS(261585), Springer, Berlin, Heidelberg, pp 305–313
2. Lakshminarayanan R, Ramalingam R (2016) Usage of cloud computing simulators and future systems for computational research. arXiv preprint arXiv:1605.00085
3. Buyya R, Ranjan R, Calheiros RN (2009) Modeling and simulation of scalable cloud computing environments and the cloudsim toolkit: challenges and opportunities. In: International conference on high performance computing and simulation, IEEE, pp 1–11
4. Alshammari D, Singer J, Storer T (2018) Performance evaluation of cloud computing simulation tools. In: 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), IEEE, pp 522–526
5. Malhotra R, Jain P (2013) Study and comparison of cloudsim simulators in the cloud computing. SIJ Trans Comput Sci Eng Appl (CSEA) 1(4):111–115
6. Louis B, Mitra K, Saguna S, Ahlund C (2015) CloudSimDisk: energy-aware storage simulation in cloudsim. In: IEEE/ACM 8th international conference on Utility and Cloud Computing (UCC), pp 11–15
7. Ahmed A, Sabyasachi AS (2014) Cloud computing simulators: a detailed survey and future direction. In: IEEE International Advance Computing Conference (IACC), IEEE, pp 866–872
8. Fakhfakh F, Kacem HH, Kacem AH (2017) Simulation tools for cloud computing: a survey and comparative study. In: IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), IEEE, pp 221–226
9. Maarouf A, Marzouk A, Haqiq A (2015) Comparative study of simulators for cloud computing. In: International conference on cloud computing technologies and applications (CloudTech), IEEE, pp 1–8
10. Tikar AP, Jaybhaye SM, Pathak GR (2015) A systematic review on scheduling types, methods and simulators in cloud computing system. In: International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), IEEE, pp 382–388
11. Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Zaharia M (2010) A view of cloud computing. Commun ACM 53(4):50–58
12. Zhao W, Peng Y, Xie F, Dai Z (2012) Modeling and simulation of cloud computing: a review. In: IEEE Asia Pacific Cloud Computing Congress (APCloudCC), IEEE, pp 20–24
13. Nunez A, Vazquez-Poletti JL, Caminero AC, Castane GG, Carretero J, Llorente IM (2012) iCanCloud: a flexible and scalable cloud infrastructure simulator. J Grid Comput 10(1):185–209
14. Tian W, Xu M, Chen A, Li G, Wang X, Chen Y (2015) Open-source simulators for cloud computing: comparative study and challenging issues. Simul Model Pract Theory 58:239–254
15. Mishra SK, Sahoo B, Parida PP (2020) Load balancing in cloud computing: a big picture. J King Saud Univ Comput Inform Sci 32(2):149–158
16. Wickremasinghe B, Calheiros RN, Buyya R (2010) CloudAnalyst: a cloudsim-based visual modeller for analysing cloud computing environments and applications. In: International conference on Advanced Information Networking and Applications (AINA), IEEE, pp 446–452
17. Anastasi GF, Carlini E, Dazzi P (2013) Smart cloud federation simulations with CloudSim. In: Proceedings of the first ACM workshop on optimization techniques for resources management in clouds (ORMaCloud'13), ACM, pp 9–16
18. Long W, Yuqing L, Qingxin X, Buyya R, Ranjan R, Calheiros RN, Zhou A, Wang S, Sun Q, Zou H, Yang F (2013) FTCloudSim: a simulation tool for cloud service reliability enhancement mechanisms. In: Ninth international conference on computational intelligence and security, pp 323–328
19. Calheiros RN, Ranjan R, De Rose CAF, Buyya R (2009) CloudSim: a novel framework for modeling and simulation of cloud computing infrastructures and services
20. Sharkh MA, Kanso A, Shami A, Öhlen P (2016) Building a cloud on earth: a study of cloud computing data center simulators. Comput Netw 108:78–96

Part II

Communication

Chapter 33

Study of Spectral-Efficient 400 Gbps FSO Transmission Link Derived from Hybrid PDM-16-QAM With CO-OFDM

Mehtab Singh and Jyoteesh Malhotra

1 Introduction

The enormous growth in the demand for channel bandwidth due to services like online gaming, video conferencing, high-definition television, and live streaming has congested present wireless transmission links [1]. Free-space optics (FSO) communication technology has the ability to transmit high-speed information with secure links, high channel capacity, low mass and power requirements, a quick and easy installation procedure, and last-mile access capabilities [2–4]. Orthogonal frequency division multiplexing (OFDM) is another important technology which has been exploited by many researchers to transmit high-speed information with immunity to inter-symbol interference and inter-carrier interference [5, 6]. Moreover, by incorporating coherent detection along with OFDM (CO-OFDM), the performance of the receiver terminal is further improved [7]. Recently, many researchers have reported the incorporation of CO-OFDM technology along with advanced higher-order modulation formats to realize high-capacity, spectral-efficient optical communication links. Liu et al. [8] reported the performance of an electronic compensator at the receiver terminal to compensate for chromatic dispersion, phase noise, and polarization mode dispersion in a 100 Gbps PDM-CO-OFDM based optical fiber link. Du et al. [9] experimentally demonstrated a 61.7 Gbps PDM-CO-OFDM based 500 km optical fiber link with 3 back-propagation steps for effective non-linearity compensation. Shuai et al. [10] reported a 16 Tbps hybrid wavelength-division-multiplexed PDM-CO-OFDM based 1800 km optical fiber link demonstrating a spectral efficiency of 7.14 bits/sec/Hz. Singh et al.



[11] reported a high-speed inter-satellite optical wireless communication link over a 60,000 km range using the PDM-CO-OFDM technique with quadrature phase shift keying modulation. Kakati et al. [12] reported a 320 Gbps, 600 km optical fiber link deploying pilot-assisted PDM-16-quadrature amplitude modulation (QAM) with the CO-OFDM technique to realize a spectral efficiency of 8 bits/sec/Hz. In the present work, we investigate a 400 Gbps FSO link using hybrid PDM-16-QAM and CO-OFDM techniques and study its performance under the impact of various climate conditions.

2 Simulation Setup

Figure 1a, b illustrates the block architecture of the transmitter section and the receiver section, respectively, of the proposed PDM-16-QAM and CO-OFDM based FSO link, which is modeled using Optisystem software. 400 Gbps information is generated using a pseudo-random bit sequence generator and split into two parallel 200 Gbps streams using a serial-to-parallel (S/P) converter. Each bit stream is modulated as 16-QAM symbols, which are then OFDM modulated. The symbol rate of the system is 50 Gbaud. The OFDM parameters are as follows: 128 subcarriers, 6 pilot symbols, 10 training symbols, and 15 dBm average OFDM power. Each OFDM-modulated signal is modulated onto a distinct orthogonally polarized signal from a laser beam with 14.14 dBm signal power. These information signals are then combined and transmitted over free space using a transmitter antenna with a 10 cm diameter. The information signal undergoes attenuation while propagating through free space; the attenuation coefficients are taken as per the work reported in [13]. Figure 2a, b illustrates the optical spectrum of the transmitted signal and the received signal at a 42 km link range under clear weather. A bandpass optical filter with 50 GHz bandwidth is used at the receiver terminal to remove sideband noise. After calculation, the proposed link demonstrates a spectral efficiency of 8 bits/sec/Hz. The received information signal is intercepted using coherent detection with PIN photodiodes. A detailed explanation of the operating principle of the PDM-16-QAM CO-OFDM transmitter and receiver units is given in [12].

3 Numerical Results

Figures 3 and 4 illustrate the log(BER) and constellation plots, respectively, with increasing link range for clear conditions. It can be observed that with increasing range the BER of the signal increases and the constellation symbols get more distorted. This is because the signal attenuation increases with link range. A reliable 42 km transmission for clear weather with faithful BER (~2 × 10−3, i.e., the FEC limit [14]) is reported.


Fig. 1 Block architecture of PDM-16-QAM CO-OFDM a transmitter section b receiver section

Also, the receiver sensitivity performance, which is the minimum optical power required at the input of the receiver terminal to faithfully intercept the information signal, is analyzed in this work. Figure 5a illustrates the receiver sensitivity analysis for back-to-back (B2B) transmission: the proposed link attains faithful BER at −35.5 dBm received optical power. Figure 5b illustrates the receiver sensitivity analysis of the proposed link for 42 km transmission over clear conditions, where the link attains faithful BER at −32 dBm received optical power. Therefore, there is a 3.5 dB optical power penalty for 42 km transmission as compared to B2B transmission over clear conditions.


Fig. 2 Optical spectrum at a transmitter terminal b receiver terminal after 42 km transmission

Further, we investigate the impact of varying levels of fog on the proposed link, as illustrated in Fig. 6. It can be observed that the maximum link range decreases with an increasing level of fog, because the signal attenuation increases as the fog thickens. The maximum supported link range with acceptable BER is 3.1 km for low fog, 1.85 km for moderate fog, and 1.45 km for heavy fog.
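As a back-of-the-envelope check on these numbers, the Python sketch below derives the specific attenuation implied by each reported maximum range, assuming the link fails exactly when the received power falls to the −32 dBm sensitivity and ignoring geometric and pointing losses; only the transmit power, the sensitivity, and the ranges are taken from the text, and the resulting coefficients are our own inference, not values from [13].

```python
# Implied specific attenuation from the reported maximum ranges,
# assuming tolerable loss = transmit power - receiver sensitivity.
P_TX_DBM = 14.14            # per-polarization laser power (from the text)
RX_SENS_DBM = -32.0         # sensitivity for 42 km clear-weather transmission

MAX_RANGE_KM = {"clear": 42.0, "low fog": 3.1,
                "moderate fog": 1.85, "heavy fog": 1.45}

link_margin_db = P_TX_DBM - RX_SENS_DBM      # ~46.1 dB of tolerable loss

for weather, rng in MAX_RANGE_KM.items():
    alpha_db_per_km = link_margin_db / rng   # implied attenuation coefficient
    print(f"{weather}: ~{alpha_db_per_km:.1f} dB/km over {rng} km")

# Spectral efficiency quoted in the text: 400 Gbps / 50 GHz = 8 bit/s/Hz.
print("spectral efficiency:", 400e9 / 50e9, "bit/s/Hz")
```

This gives roughly 1.1 dB/km for clear weather and roughly 32 dB/km for heavy fog, which are of the order typically assumed for those conditions in FSO links.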


Fig. 3 Log(BER) versus range for clear conditions

4 Conclusion

The reported work discusses the performance investigation of a spectral-efficient FSO link derived from the PDM-16-QAM and CO-OFDM techniques. The results of the simulative investigation report a faithful transmission of 400 Gbps information at a 50 Gbaud symbol rate over a link range varying from 42 km down to 1.45 km depending on the climate conditions. The proposed link demonstrates a spectral efficiency of 8 bit/sec/Hz with a receiver sensitivity of −35.5 dBm and −32 dBm for the B2B and 42 km links, respectively, under clear conditions.

Fig. 4 Constellation plots at a 40 km b 50 km c 60 km d 70 km link range



Fig. 5 Log(BER) versus received optical power for a B2B transmission b 42 km transmission


Fig. 6 Log(BER) versus link range for fog conditions

References 1. Khalighi M, Uysal M (2014) Survey on free space optical communication: a communication theory perspective. IEEE Commun Surv Tutorials 16(4):2231–2258 2. Mahdy A, Deogun J (2004) Wireless optical communications: a survey. In: Proceedings of IEEE wireless communications and networking conference, IEEE, Atlanta, USA, pp 2399–2404 3. Al-Gailani S, Mohammad A, Shaddad R (2012) Evaluation of a 1 Gb/s free space optic system in typical malaysian weather. In: Proceedings of 3rd international conference on photonics, IEEE, Malaysia, pp 121–124 4. Singh M, Malhotra J (2019) Performance comparison of high-speed long-reach mode division multiplexing-based radio over free space optics transmission system using different modulation formats under the effect of atmospheric turbulence. Opt Eng 58(4):046112-1-9 5. Kumar N, Teixeira A (2016) 10 Gbit/s OFDM based FSO communication system using MQAM modulation with enhanced detection. Opt Quant Electron 48(1):1–7 6. Sharma V, Chaudhary S (2014) High speed CO-OFDM-FSO transmission system. Optik—Int J Light Electron Opt 125(6):1761–1763 7. Singh M, Malhotra J (2019) Long-reach high-capacity hybrid MDM-OFDM-FSO transmission link under the effect of atmospheric turbulence. Wireless Pers Commun 107(4):1549–1571 8. Liu X, Qiao Y, Ji Y (2011) Electronic compensator for 100-Gb/s PDM-CO-OFDM long-haul transmission systems. Chin Opt Lett 9:030602 9. Du L, Schmidt B, Lowery A (2010) Efficient digital back propagation for PDM-CO-OFDM optical transmission systems. In: Proceedings of conference on Optical Fiber Communication (OFC/NFOEC), collocated National Fiber Optic Engineers Conference, IEEE, San Diego, CA, pp 1–3 10. Zhang S, Bai C, Luo Q, Huang L, Zhang X (2013) Study of 16 Tbit/s WDM transmission system derived from the CO-OFDM with PDM 16-QAM. Optoelectronic Lett 9(2):124–126 11. Singh M, Malhotra J (2020) Modeling and performance analysis of 400 Gbps CO-OFDM Based Inter-satellite Optical Wireless Communication (IsOWC) system incorporating polarization division multiplexing with enhanced detection. Wireless Pers Commun 1:1–17


12. Kakati D, Arya S (2019) A full-duplex pilot-assisted DP-16-QAM CO-OFDM system for highspeed long-haul communication. In: Proceedings of 2nd international conference on innovations in electronics, signal processing and communication (IESC), IEEE, Shillong, India, pp 183–187 13. Amphawan A, Chaudhary S, Chan V (2019) Optical millimeter wave mode division multiplexing of LG and HG modes for OFDM Ro-FSO system. Opt Commun 431:245–254 14. Dhasarathan V, Singh M, Malhotra J (2020) Development of high-speed FSO transmission link for the implementation of 5G and Internet of Things. Wireless Netw 26:2403–2412

Chapter 34

4 × 10 Gbps Hybrid WDM-MDM FSO Transmission Link Mehtab Singh and Jyoteesh Malhotra

1 Introduction The past few years have witnessed a tremendous rise in the demand for channel capacity due to growth in the use of social networking, video conferencing, HDTV, VoIP-based applications, etc., which has congested radio frequency based wireless networks [1]. Free space optics (FSO) technology is capable of meeting end-user demands and dealing with radio frequency spectrum scarcity. The use of optical beams to carry high-speed information without licensing requirements and electromagnetic interference has made FSO links highly desirable, especially for providing access to last-mile users [2]. Further, FSO links are quick and easy to deploy, have low mass and power requirements, provide secure data transmission links, and have ample license-free spectrum. However, the optical signal attenuation introduced by external climatic conditions degrades the performance of FSO links and limits the link reach [3]. The use of the wavelength division multiplexing (WDM) technique to realize high-speed FSO links, where independent signals are transported over distinct wavelength channels through the free space medium, is reported in previous works [4–7]. Mode division multiplexing (MDM) is another transmission technology, in which independent signals are transported over distinct spatial modes of a laser beam. Recent works have reported the use of few-mode fiber [8], multimode fiber [9], spatial light modulators [10], and optical signal processing techniques [11] in MDM transmission. The application of MDM in optical fiber links is reported in [12, 13] and in FSO links in [14, 15]. Here, we demonstrate the hybridization of WDM and MDM technologies in an FSO link and discuss the transportation of 4 × 10 Gbps information over a free space channel under different climatic conditions using numerical simulations. Further, in this article we discuss the proposed system design and results.

2 Simulation Setup Figure 1 illustrates the proposed 4 × 10 Gbps hybrid WDM-MDM FSO link architecture. Two distinct 10 Gbps NRZ-encoded signals are transmitted at the 850 nm wavelength over Laguerre-Gaussian modes (LG00 and LG01), while another two 10 Gbps NRZ-encoded signals are transmitted at the 850.8 nm wavelength over the same LG00 and LG01 modes, as illustrated in Fig. 1. The 850 and 850.8 nm channels are combined using a WDM MUX. The optical spectrum of the transmitted signal is illustrated in Fig. 2, and the excited LG mode profiles are illustrated in Fig. 3. At the receiver side, the independent channels (850 and 850.8 nm) are separated using a WDM DEMUX. Distinct modes at the receiver unit are filtered using a mode filter, and the individual data signals are intercepted using a spatial PIN photodetector. A bit error rate tester (BERT) analyzes the received signal. The system parameters and atmospheric attenuation coefficients for different climatic conditions are considered as per a practical FSO scenario [14, 15].

Fig. 1 Hybrid WDM-MDM-FSO link architecture


Fig. 2 Optical spectrum at the output of WDM MUX

3 Results and Discussion Figure 4a–d reports the simulative investigation of the proposed link performance for clear weather. Here, we discuss the performance analysis of the LG00 and LG01 modes at the 850 nm wavelength. The results show that the LG00 mode performs better than the LG01 mode. The Q factor is reported as 9.31 dB, 8.23 dB, and 7.12 dB for the LG00 mode and 8.15 dB, 6.85 dB, and 5.66 dB for the LG01 mode, whereas log(BER) is reported as −20.29, −16.08, and −12.30 for the LG00 mode and −15.80, −11.48, and −8.13 for the LG01 mode at 5 km, 6 km, and 7 km respectively. The SNR is computed as 17.09 dB, 13.91 dB, and 11.15 dB for the LG00 mode and 13.71 dB, 10.53 dB, and 7.76 dB for the LG01 mode at the same ranges. The received electrical power is computed as −59.16 dBm, −62.33 dBm, and −65.08 dBm for the LG00 mode and −62.53 dBm, −65.70 dBm, and −68.44 dBm for the LG01 mode at 5 km, 6 km, and 7 km respectively. The eye diagrams with clear and wide openings at 7 km, as reported in Fig. 5, demonstrate a faithful 4 × 10 Gbps transmission. Figure 6 reports the modal decomposition at the receiver unit of the LG00 and LG01 channels into linearly polarized (LP) modes in terms of power coupling coefficients (PCC). For the LG00 channel, the LP01 mode acquires 82.05% of the total power, with the remaining power coupled into adjacent modes; for the LG01 channel, the LP11 mode acquires 67.37% of the total power, with the remainder coupled into adjacent modes. The higher intermodal power coupling for the LG01 mode than for the LG00 mode is consistent with the results in Figs. 4 and 5. Figure 7 reports the impact of varying levels of fog on the proposed link performance. It is observed that the information signal quality is degraded the most in the case of heavy fog, followed by moderate fog and low fog conditions.
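The reported Q-factor and log(BER) pairs are mutually consistent with the standard Gaussian-noise relation BER = (1/2) erfc(Q/√2) when Q is treated as a linear ratio (as commonly exported by simulation tools even when labeled in dB). The short sketch below checks this for the LG00 values; the relation itself is standard, while the interpretation of the reported Q values as linear ratios is our assumption.

```python
import math

def ber_from_q(q_linear):
    """Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2))

# Reported Q values for the LG00 channel at 5, 6, and 7 km
for q in (9.31, 8.23, 7.12):
    print(q, "-> log10(BER) ~", round(math.log10(ber_from_q(q)), 2))
# Prints approximately -20.2, -16.0, and -12.3, close to the reported
# log(BER) values of -20.29, -16.08, and -12.30.
```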


Fig. 3 Spatial mode profiles of a LG00. b LG01

4 Conclusion In the proposed work, four independent 10 Gbps non-return-to-zero encoded information signals are transported over a free space channel under varying weather conditions using hybrid WDM-MDM techniques. The LG00 and LG01 modes are incorporated for information transmission at 850 and 850.8 nm. From the results of the numerical investigation, it can be concluded that all four signals are transported faithfully to realize a 4 × 10 Gbps cost-effective long-reach optical wireless link with acceptable performance.

Fig. 4 a Q Factor versus range. b Log(BER) versus range. c SNR versus range. d Received power versus range for clear weather


Fig. 5 Computed eye diagrams at 7 km for a LG00. b LG01



Fig. 6 Modal decomposition at the receiver for a LG00. b LG01



Fig. 7 a SNR versus range. b Received power versus range for fog weather

References 1. Singh J, Kumar N (2013) Performance analysis of different modulation format on free space optical communication system. Optik—Int J Light Electron Opt 124(20):4651–4654 2. Khalighi M, Uysal M (2014) Survey on free space optical communication: a communication theory perspective. IEEE Commun Surv Tutorials 16(4):2231–2258


3. Mahdy A, Deogun J (2004) Wireless optical communications: a survey. In: Proceedings of IEEE wireless communications and networking conference, IEEE, Atlanta, USA, pp 2399–2404 4. Badar N, Jha R, Towfeeq I (2018) Performance analysis of 80 (8 × 10) Gbps RZ-DPSK based WDM-FSO system under combined effects of various weather conditions and atmospheric turbulence induced fading employing Gamma-Gamma fading model. Opt Quant Electron 50:1– 11 5. Jeyaseelan J, Kumar S, Caroline B (2018) PolSK and ASK modulation techniques based BER analysis of WDM-FSO system for under turbulence conditions. Wireless Pers Commun 103(4):3221–3237 6. Jeyaseelan J, Kumar S, Caroline B (2018) Performance analysis of free space optical communication system employing WDM-PolSK under turbulent weather conditions. J Optoelectronic Adv Mater 20(9):506–514 7. Prabu K, Charanya S, Jain M, Guha D (2017) BER analysis of SS-WDM based FSO system for Vellore weather conditions. Opt Commun 403:73–80 8. Ryf R, Mestre M, Randel S, Palou X, Gnauck A, Delbue R, Pupalaikis P, Sureka A, Sun Y, Jiang X, Lingle R (2013) Combined SDM and WDM transmission over 700-km few-mode fiber. In: Proceedings of Optical Fiber Communication conference and exposition and the National Fiber Optic Engineers Conference (OFC/NFOEC), IEEE, Anaheim, CA, pp 1–3 9. Agrawal G, Mumtaz S, Essiambre R (2013) Nonlinear performance of SDM systems designed with multimode or multicore fibers. In: Proceedings of Optical Fiber Communication conference and exposition and the National Fiber Optic Engineers Conference (OFC/NFOEC), IEEE, Anaheim, CA, pp 1–3 10. Carpenter J, Leon-Saval S, Eggleton B, Schröder J (2014) Spatial light modulators for subsystems and characterization in SDM. In: Proceedings of optoelectronics and communication conference and australian conference on optical fibre technology, IEEE, Melbourne, VIC, pp 23–24 11. Takahashi H, Soma D, Beppu S, Tsuritani T (2019) Digital signal processing for Space-Division Multiplexing (SDM) transmission. In: Proceedings of International Photonics Conference (IPC), IEEE, San Antonio, TX, USA, pp 1–2 12. Chaudhary S, Amphawan A (2018) Solid core PCF-based mode selector for MDM-Ro-FSO transmission systems. Photon Netw Commun 36(2):263–271 13. Chaudhary S, Amphawan A (2018) Selective excitation of LG 00, LG 01, and LG 02 modes by a solid core PCF based mode selector in MDM-Ro-FSO transmission systems. Laser Phys 28(7):1–8 14. Singh M, Malhotra J (2019) Performance comparison of high-speed long-reach mode division multiplexing-based radio over free space optics transmission system using different modulation formats under the effect of atmospheric turbulence. Opt Eng 58(4):046112-1-9 15. Singh M, Malhotra J (2019) Long-reach high-capacity hybrid MDM-OFDM-FSO transmission link under the effect of atmospheric turbulence. Wireless Pers Commun 107(4):1549–1571

Chapter 35

Task Scheduling in Cloud Computing Using Hybrid Meta-Heuristic: A Review Sandeep Kumar Patel and Avtar Singh

1 Introduction Cloud computing is the on-demand availability of shared resources, i.e., storage, computation power, network, software, and other services, to fulfill client requests over the internet at low time and cost. The advantages include resource transparency, reliability, affordability, flexibility, location independence, and a high availability of services [1]. To achieve these functionalities, proper task scheduling is required so that good performance can be provided in a swift manner. Moreover, cloud computing aims to satisfy customer requirements in view of the Service Level Agreement (SLA) and the Quality of Service (QoS) [2]. There exist basically three service models, viz. 1. Platform as a Service (PaaS), 2. Infrastructure as a Service (IaaS), and 3. Software as a Service (SaaS), which can be deployed on various deployment models like private clouds, public clouds, and hybrid clouds [3]. Virtualization allows sharing a single instance of a resource, e.g., a server, network, desktop, or operating system, among multiple users. It is used to create the illusion of many isolated virtual machines rather than actual ones. Each VM runs its guest operating system to ensure the heterogeneity of applications. In this scenario, the hypervisor plays a major role, as it mediates the interaction between the guest OS and the physical hardware [4]. A key concept in cloud computing is resource management, which is implemented in two stages. The first stage, resource provisioning, provides means for the selection, deployment, and management of software from task submission to task execution as requested by an application. The second stage, task scheduling, is the process of mapping various incoming tasks to existing resources to achieve an optimal execution time and efficient resource utilization [5]. The total completion cost of any task is the sum of the communication cost and the execution cost of that task. The data transfer cost may also be considered for large data transfers. To minimize this cost, resources are equally distributed among the tasks. In this research area, numerous studies have been done over the years, with meta-heuristic techniques being the most prevalent in the literature. Singh et al. [6] present a summarized study of various meta-heuristic optimization techniques employed for task scheduling in the cloud computing environment. In contrast, our study focuses specifically on hybrid techniques. The primary objective of the study is to conduct a systematic comparative analysis of various hybrid variants based on metrics like makespan, cost, throughput, and energy consumption, aiming to infer intrinsic behavioral properties of these algorithms and assist in appropriate and efficient hybridization. Building a roadmap for future studies is the ultimate outcome of this research. The organization of the rest of the paper is as follows: Sect. 2 presents a brief description of task scheduling in the cloud environment. In Sect. 3, various optimization techniques are discussed. In Sect. 4, a literature review of hybrid meta-heuristic techniques for scheduling is presented. Sections 5 and 6 give a tabular summary of the related works and a comparison of performance metrics, respectively, and the conclusion and future work are presented in Sect. 7.

2 Task Scheduling in Cloud Task scheduling in the cloud environment is an NP-complete problem, so it is hard to find an optimal solution in polynomial time. Scheduling in the cloud improves resource utilization and reduces the overall completion time. There does not exist a standard task scheduling technique that could be extended to a large-scale environment. The main job of the task scheduler is to distribute customer requests across all the available resources for execution. Task scheduling is very important from the user's point of view, since users pay based on the time for which resources are used. There are different effective resource scheduling criteria which reduce execution cost, time, and energy and increase CPU utilization and productivity. A broad classification can be made into the following categories: static, dynamic, preemptive, non-preemptive, centralized, and decentralized scheduling [7]. The major performance metrics used in the literature are as follows [8, 9]:

• The Makespan is the maximum finishing time among all the received tasks.

Makespan = max{ FT_i | ∀ i ∈ I }  (1)


• The Throughput is the number of tasks completed with respect to the deadline of each job.

Throughput = Σ_{i ∈ I} X_i  (2)

• The Response Time is the time from the arrival of a task in the system until the task is scheduled for execution for the first time.

Response Time = T_first execution − T_arrival  (3)

• The Transmission Time is the time required to transfer a task from the queue to a specific VM.

• The Waiting Time is defined as the time spent in the waiting queue before the start of execution of a particular task.

• The Total Cost depends on file transfer and processing time.

Total Cost = P_i × P_c + ( Σ_{f ∈ FIN_i} Size(f) + Σ_{f ∈ FOUT_i} Size(f) ) × PTPB  (4)

where P_c is the processing cost, f is a file, FIN_i and FOUT_i are the input and output file sets of task i, and PTPB is the processing time per byte.
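To make these definitions concrete, the sketch below computes the makespan (Eq. 1), throughput (Eq. 2), and response times (Eq. 3) for a small toy schedule; the task data are invented for illustration and are not taken from the surveyed papers.

```python
# Toy schedule: each task has an arrival time, a start time, a finish time,
# and a deadline (all in arbitrary time units).
tasks = [
    {"id": 1, "arrival": 0.0, "start": 0.5, "finish": 4.0, "deadline": 5.0},
    {"id": 2, "arrival": 1.0, "start": 1.0, "finish": 3.0, "deadline": 2.5},
    {"id": 3, "arrival": 2.0, "start": 4.0, "finish": 6.0, "deadline": 8.0},
]

# Eq. (1): makespan is the latest finish time over all tasks.
makespan = max(t["finish"] for t in tasks)

# Eq. (2): throughput counts tasks that finish within their deadline.
throughput = sum(1 for t in tasks if t["finish"] <= t["deadline"])

# Eq. (3): response time is the first-execution time minus the arrival time.
response_times = [t["start"] - t["arrival"] for t in tasks]

print(makespan, throughput, response_times)   # 6.0, 2, [0.5, 0.0, 2.0]
```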

3 Optimization Techniques The performance of a system is directly influenced by the efficiency of the task execution schedule. To achieve this, a number of optimization algorithms for allocating and scheduling cloud resources proficiently have been proposed over the years. A comparative study of the different meta-heuristic techniques that perform efficient task scheduling is presented below:

3.1 Genetic Algorithm (GA) GA is inspired by the biological idea of creating a new generation of a population. As in Darwin's theory of natural selection, "survival of the fittest" is employed as the strategy for task scheduling: tasks are assigned to resources according to the value of a fitness function. The basic terminologies of the GA are: Initial Population (all solutions), Fitness Function (measures the fitness of each solution), Selection (chooses the fittest solutions for the next generation), Crossover (intermixes parts of two parents), and Mutation (produces genetic diversity) [10, 11].
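A minimal GA loop for mapping tasks to VMs might look as follows; the chromosome encoding (one VM index per task), the makespan objective, and the parameter values are illustrative assumptions rather than the settings of any surveyed work.

```python
import random

EXEC = [[3, 5, 4], [2, 6, 3], [4, 4, 5], [6, 2, 3]]  # EXEC[task][vm], assumed times

def makespan(chrom):
    """Chromosome = VM index per task; makespan = busiest VM's total load."""
    load = [0] * 3
    for task, vm in enumerate(chrom):
        load[vm] += EXEC[task][vm]
    return max(load)

def ga(pop_size=20, gens=50, pm=0.1):
    pop = [[random.randrange(3) for _ in EXEC] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)                       # selection: keep the fittest
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(EXEC))     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                 # mutation: reassign one task
                child[random.randrange(len(EXEC))] = random.randrange(3)
            children.append(child)
        pop = parents + children
    return min(pop, key=makespan)

best = ga()
print(best, makespan(best))
```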


3.2 Harmony Search Algorithm (HS) HS is a meta-heuristic search algorithm inspired by the process of musicians searching for a perfect harmony [12]. A musician can create a new harmony using three rules: playing exactly the same tune from memory; playing something almost similar to a known tune after pitch adjustment; or composing a totally new one. The major steps in HS are: Initialization (of parameters like the Harmony Memory Size (HMS), Pitch Adjusting Rate (PAR), and Harmony Memory Considering Rate (HMCR)), Creation of the Harmony Memory (a 2-D matrix containing a set of possible solutions), Improvisation of a new harmony (creation of a new solution), Randomization (diversity of solutions), and Updating (to obtain a better harmony).

3.3 Tabu Search (TS) TS is a meta-heuristic optimization search algorithm which, like HS, uses a memory. Tabu search was proposed by Glover [13]. It begins with a single random solution, which is repeatedly replaced by one of its neighboring solutions; this process continues until the most optimal solution is found. At each step it generates neighborhood solutions from the current solution and moves to the best candidate even when it does not improve on the previous solution. Such a method could form a cycle by regenerating a previously visited solution; to avoid this, TS discards previously visited solutions using a memory called the tabu list.
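A generic sketch of this loop is shown below; the function name tabu_search, the quadratic toy objective, and the ±1 integer neighborhood are illustrative assumptions.

```python
def tabu_search(cost, neighbors, start, iters=100, tenure=10):
    """Generic tabu search: always move to the best non-tabu neighbor,
    even when it does not improve, and remember visited solutions."""
    current, best = start, start
    tabu = []                                   # the tabu list (visited solutions)
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)          # may be worse than before
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                         # forget the oldest entry
        if cost(current) < cost(best):
            best = current
    return best

# Example: minimize a simple quadratic over integers; neighbors are +/-1 steps.
print(tabu_search(lambda x: (x - 7) ** 2,
                  lambda x: [x - 1, x + 1],
                  start=0))
```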

3.4 Particle Swarm Optimization (PSO) PSO has become an important heuristic approach in recent years and has been applied to various computationally hard and complex problems, such as the task scheduling problem, knowledge extraction in data mining, and electrical power systems. It draws inspiration from the social behavior of organisms such as a flock of birds or a school of fish. The main steps of PSO are [14]: Initial Population (all possible solutions, i.e., particles), Fitness Function, Selection (choosing the best of two parameters, the personal best, i.e., p-best, and the global best, i.e., g-best), and Updating (updates the velocity and position of each particle).
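The canonical updates are v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x) and x ← x + v. A compact sketch follows; the inertia and acceleration coefficients are typical textbook values, assumed for illustration.

```python
import random

def pso(fitness, dim, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update on a continuous search space."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                          # personal best positions
    gbest = min(pbest, key=fitness)                      # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest

print(pso(lambda x: sum(v * v for v in x), dim=3))
```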

3.5 Cuckoo Optimization Algorithm (COA) COA is inspired by the obligate brood parasitism of the cuckoo, which lays its eggs in host nests, combined with the Lévy flight behavior of birds [15]. The main terminologies of this algorithm are [12]: Initialization (a population of solutions), New Cuckoo Generation (new cuckoos, i.e., solutions, are generated using Lévy flights), Fitness Evaluation, Updating, and Selection/Rejection.

3.6 Artificial Bee Colony (ABC) ABC is a swarm-based meta-heuristic optimization technique inspired by the foraging behavior of honey bee colonies. The ABC algorithm classifies the bees into three types: employed bees, scout bees, and onlooker bees. The employed bees search for food around the food sources in their memory, and this information about food sources is passed on to the onlooker bees. The onlooker bees perform the selection procedure over the food sources found by the employed bees; the probability of a food source being selected by an onlooker bee is determined by its quality. The scout bees introduce diversity by abandoning their food sources and setting out in search of new ones. The total number of employed bees or onlooker bees equals the total number of solutions in the swarm [16]. The main phases of the ABC algorithm are: Initialization (all possible solutions), the Employed Bee Phase (determines the neighborhood food sources), the Onlooker Bee Phase (evaluates the effectiveness of all food sources), the Scout Bee Phase (new solutions are randomly discovered), and Fitness evaluation.

3.7 Ant Colony Optimization (ACO) ACO is a meta-heuristic method, introduced by Dorigo in 1992, inspired by the food searching behavior of ants. The ants share food source information through pheromone trails. An ant solves a problem by using a construction graph whose edges are the possible partial solutions that the ant can take according to a probabilistic state transition rule. After the selection of either a partial or a complete solution, pheromone updating begins. This rule provides a mechanism for speeding up convergence and also prevents premature stagnation of solutions [17, 18].
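The sketch below applies this idea to task-to-VM assignment: a pheromone value per (task, VM) pair biases a probabilistic choice, then evaporates and is reinforced along the best assignment found. The pheromone model, evaporation rate, and execution-time matrix are illustrative assumptions.

```python
import random

def aco_assign(exec_time, n_ants=10, iters=40, rho=0.1, q=1.0):
    """ACO sketch: tau[task][vm] biases choices, evaporates, and is reinforced."""
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    tau = [[1.0] * n_vms for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            # Each ant builds a full assignment using the transition rule.
            sol = [random.choices(range(n_vms), weights=tau[t])[0]
                   for t in range(n_tasks)]
            load = [0.0] * n_vms
            for t, vm in enumerate(sol):
                load[vm] += exec_time[t][vm]
            cost = max(load)                    # makespan of this assignment
            if cost < best_cost:
                best, best_cost = sol, cost
        # Evaporation, then reinforcement along the best assignment so far.
        tau = [[(1 - rho) * p for p in row] for row in tau]
        for t, vm in enumerate(best):
            tau[t][vm] += q / best_cost
    return best, best_cost

print(aco_assign([[3, 5], [2, 6], [4, 4]]))
```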

3.8 Simulated Annealing (SA) SA is an iterative meta-heuristic random search optimization technique for solving several nonlinear optimization problems. The name and motivation originate from annealing in metallurgy, which describes a process of heating and controlled cooling of a material to improve the size of its crystals and diminish their defects. It was first proposed as the Metropolis algorithm, and many variations were introduced later on. Simulated annealing is widely used in task scheduling in cloud environments, machine scheduling, vehicle routing, etc. [19].
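The core of SA is the Metropolis acceptance rule: a worse candidate is accepted with probability exp(−Δ/T), where the temperature T decreases over time. A self-contained sketch follows, with a toy 1-D objective and a geometric cooling schedule assumed for illustration.

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.95, steps=500):
    """Metropolis acceptance: worse moves are accepted with probability
    exp(-delta / T), so the search can escape local minima while T is high."""
    current, temp = start, t0
    best = current
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
        if cost(current) < cost(best):
            best = current
        temp *= cooling                      # geometric cooling schedule
    return best

# Example: minimize a bumpy 1-D function with random +/- steps.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
print(simulated_annealing(f, lambda x: x + random.uniform(-1, 1), start=0.0))
```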


3.9 Bacteria Foraging Optimization Algorithm (BFO) BFO includes three basic mechanisms: chemotaxis, reproduction, and elimination-dispersal. Chemotaxis drives the motion of an E. coli cell by swimming and tumbling with the help of flagella. In reproduction, only the healthier half of the population survives, and each surviving bacterium splits into two identical ones positioned at the same location, leaving the total bacteria population unaffected. Chemotaxis provides the local search and increases the rate of convergence; however, since bacteria can get stuck in local minima, elimination and dispersal are used to diversify BFO and reduce the chances of getting trapped there. The dispersion event occurs after a particular number of reproduction processes: some bacteria are chosen, with probability P, to be killed and shifted to a different location within the environment [20].

3.10 Gravitational Search Algorithm (GSA) GSA is an optimization method based on the law of gravitation [21]. It is basically a population-based multi-dimensional optimization algorithm in which the agents are called objects and their performance is measured by their masses. The masses are the means of communication, as the agents move toward heavier masses under gravitational force. Heavy masses correspond to good solutions and move more slowly than lighter ones. Each agent (mass) has four characteristics: position, active gravitational mass (AGM), inertial mass (IM), and passive gravitational mass (PGM). The solution of the problem is given by the agent's position, and its inertial and gravitational masses are calculated using a fitness function.

3.11 Lion Optimization Algorithm (LOA) LOA is a meta-heuristic algorithm that takes inspiration from the lifestyle of the lion. Lions have two types of social organization: resident and nomad. Residents live in groups, called prides, that include one or more adult males, around five females, and their cubs. The nomads move about sporadically, either singly or in pairs. A lion can switch lifestyles: nomads may become residents and vice versa [22]. Hybrid Meta-heuristic Approaches Every meta-heuristic algorithm comes with its share of pros and cons. Combining a selected set of them to harness the advantages of each one can improve efficiency. Several such hybrid approaches have been proposed in the literature and are discussed below.


3.12 The Harmony Tabu Search (THTS) In this method, TS and HS are combined to improve the results. TS is applied as the first step, followed by HS. At the beginning of the algorithm, TS is initialized with a tabu list that contains all the candidate solutions, and it generates initial solutions which are compared with the best candidate solution in the tabu list; better quality guarantees inclusion in the tabu list. After this, HS is applied, with the harmony memory (HM) initialized from the tabu list. A new solution is obtained from the HM by improvising each component of the solution using the harmony memory considering rate (HMCR) parameter and by mutating the solution using the pitch adjusting rate (PAR) [23].

3.13 Cuckoo Harmony Search Algorithm (CHSA) CS is very efficient at local search and has only a single parameter, but it has the limitation of taking a large amount of time to obtain an optimal solution. Similarly, HS has a limitation too: its search performance depends entirely on the adjustment of its parameter values. When hybridization is applied, it is seen to remove the limitations that affect the performance of CS and HS individually [12].

3.14 Harmony-Inspired Genetic Algorithm (HIGA) This hybrid algorithm combines HS and GA to detect both local and global optima during scheduling. In HIGA, GA is the primary optimization algorithm; when the best individual remains in the same state, either locally or globally optimal, after many generations, HS is invoked to update the current GA population and to search for a better global solution. If HS fails to find one over many iterations, the best solution is likely already in the global optimal state, and the process can halt. In this way, instead of halting prematurely, the HIGA algorithm reduces the number of iterations and senses the local or global optimal state every time [24].


3.15 Genetic Algorithm-Particle Swarm Optimization (GA-PSO) Here GA is applied first and a random population is generated. A fitness function is then applied to obtain elites, which are divided into two equal halves: the first half is processed by GA and the rest by PSO. In GA, the best elites are given to the crossover and mutation operators, while in PSO the p-best and g-best are calculated for each elite, and the position and velocity of the elites are updated in each iteration [25].
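A schematic sketch of this split-population scheme is given below; the toy objective, the operators, and the parameter values are illustrative assumptions rather than the published configuration.

```python
import random

def fitness(x):                      # toy objective standing in for schedule cost
    return sum(v * v for v in x)

def ga_step(group, pm=0.2):
    """Crossover + mutation applied to the GA half of the elites."""
    children = []
    for _ in group:
        a, b = random.sample(group, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        if random.random() < pm:
            child[random.randrange(len(child))] += random.uniform(-0.5, 0.5)
        children.append(child)
    return children

def pso_step(group, vels, gbest, w=0.7, c=1.5):
    """Velocity/position update applied to the PSO half of the elites."""
    for p, v in zip(group, vels):
        for d in range(len(p)):
            v[d] = w * v[d] + c * random.random() * (gbest[d] - p[d])
            p[d] += v[d]
    return group

pop = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(20)]
vels = [[0.0] * 4 for _ in range(10)]
best = min(pop, key=fitness)
for _ in range(50):
    pop.sort(key=fitness)                    # elites first
    best = min(best, pop[0], key=fitness)    # remember the best-so-far
    half = len(pop) // 2
    pop = ga_step(pop[:half]) + pso_step(pop[half:], vels, pop[0])
print(fitness(best))
```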

3.16 Multi-objective Hybrid Bacteria Foraging Algorithm (MHBFA) This algorithm produces a solution with better local and global search capability and faster convergence. Bacteria Foraging (BF) has a great local search capability but, unfortunately, a poor global search; GA overcomes this limitation. Hence, the MHBFA inherits swarming, elimination, and dispersal from BF, and these are the measures that are critical in the global search procedure [26].

3.17 Simulated Annealing Based Symbiotic Organisms Search (SASOS) This hybrid algorithm combines SA and the Symbiotic Organism Search (SOS) to achieve an improved convergence rate and improved solution quality. The SOS algorithm includes the mutualism, commensalism, and parasitism phases. SA systematically improves local search solutions within the commensalism and mutualism phases of SOS. The parasitism phase remains unaffected, because it deletes passive solutions and injects active ones into the solution space, which helps the search process escape the local optimal region [27].

3.18 The Technique for Order of Preference by Similarity to Ideal Solution-Particle Swarm Optimization (TOPSIS-PSO) In this hybridization technique, PSO is combined with the TOPSIS algorithm to find an optimal solution by taking into account criteria like transmission time, execution time, and cost, which is carried out in two phases. In the first phase, TOPSIS is applied in order to obtain the relative proximity of the jobs. In the second phase, PSO is applied to all tasks to compute the closeness on these three criteria across all virtual machines (VMs). The fitness function of PSO is formulated using TOPSIS, which gives an optimal solution in minimum time [28].

3.19 Artificial Bee Colony Simulated Annealing (ABC-SA) This hybrid algorithm combines ABC and simulated annealing (SA) for efficient task scheduling, taking into account factors such as task sizes and the priority of incoming requests [29].

3.20 Genetic Algorithm Artificial Bee Colony (GA-ABC) This hybrid algorithm combines the features of GA and ABC with the facility of dynamic voltage and frequency scaling (DVFS) to achieve efficient task scheduling. In this algorithm, GA is used as the first step to start the allocation of tasks to VMs and produces new individuals until the termination condition of GA occurs. The output of GA is fed as the input to ABC, which then provides the optimal distance between tasks and VMs [30].

3.21 Cuckoo Gravitational Search Algorithm (CGSA) This hybrid CGSA is composed of CS and GSA. The major demerit of the CS algorithm is that it takes maximum time to find the optimal solution, and the disadvantage of GSA is that it does not converge well to a local optimal solution. The CGSA uses the advantages of CS and GSA: it conquers their weaknesses and provides an efficient solution in a shorter computational time [31].

3.22 Oppositional Lion Optimization Algorithm (OLOA) This hybrid OLOA uses the benefits of Lion optimization algorithm (LOA) and oppositional based learning (OBL). In this hybrid approach, OBL is nested within the LOA [32].


3.23 Fuzzy System—Modified Particle Swarm Optimization (FMPSO) Here, PSO uses the Shortest Job to Fastest Processor (SJFP) technique to initialize the population, the particle location matrix, and the velocity matrix. Roulette wheel selection and the crossover and mutation operators are used to conquer the demerits of PSO, such as getting trapped in local optima. A hierarchical fuzzy system is used to evaluate the fitness value of each particle [33].

4 Literature Review The related studies in this research area are summarized in Table 1. Alazzam et al. [23] proposed a hybrid task scheduling algorithm, the Tabu-Harmony search algorithm (THTS). The algorithm performs better in terms of makespan and cost than TS, HS, and round-robin individually. Pradeep et al. [12] presented the hybrid Cuckoo Harmony Search Algorithm (CHSA) for task scheduling to improve energy consumption, memory usage, credit, cost, fitness function, and penalty, and it was observed that the performance of the proposed algorithm is comparatively better than the individual CS and HS algorithms and the hybrid CSGA. Sharma and Garg [24] focused on a Harmony-Inspired Genetic Algorithm (HIGA) for energy-efficient task scheduling; the results show that the presented algorithm improved both efficiency and performance. Senthil Kumar et al. [25] discussed a hybrid Genetic Algorithm-Particle Swarm Optimization (GA-PSO) to minimize the total execution cost; GA-PSO obtained results better than various existing algorithms like GA, Max-Min, and Min-Min. Srichandan et al. [26] discussed a Hybrid Bacteria Foraging Algorithm (HBFA) for task scheduling which inherits the desirable characteristics of GA and bacteria foraging (BF) in the cloud to minimize the makespan and reduce energy consumption both economically and ecologically; the results show that HBFA outperforms GA, PSO, and BF applied alone. Abdullahi and Ngadi [27] put forth a hybrid algorithm based on SA and SOS to optimize task scheduling by improving convergence speed, response time, degree of imbalance, and makespan; the results show that SASOS performs better than SOS. Panwar et al. [28] proposed a new hybrid algorithm based on TOPSIS and PSO to handle multiple objectives such as transmission time, resource utilization, execution time, and cost. The performance of TOPSIS-PSO has been compared with the ABC, PSO, dynamic PSO (DPSO), FUGE, and IABC algorithms in terms of transmission time, makespan, resource utilization, and total cost. Muthulakshmi and Somasundaram [29] proposed a hybrid algorithm which combines the advantages of ABC and SA to improve the makespan; the results obtained with this algorithm outperform MFCFS, Shortest Job First (SJF), LJF, hybrid ABC-LJF, and hybrid ABC-SJF.

Table 1 Literature survey summary

Alazzam et al. [23] (THTS): Tabu search is applied as the first step, after which harmony search is applied until an optimal solution is obtained. Metrics: makespan, throughput, total cost. Achievements: least makespan, least cost, nearly the same throughput. Tool: CloudSim.

Sharma and Garg [24] (HIGA): The current generation is evolved by GA, which yields a local optimum that is passed to HS, through which the global optimum is reached. Metrics: makespan, energy consumption, execution time. Achievements: makespan improved by 47%, energy saving of 33%, execution time lower by 39%. Tool: MATLAB 2013b.

Senthil Kumar et al. [25] (GA-PSO): The population is randomly generated; half of it is evaluated by GA and the rest by PSO, and both results are combined to get the optimal solution. Metric: response time. Achievements: response time lowered by 1678, 1393, and 1000 ms compared with Min-Min, Max-Min, and GA. Tool: CloudSim.

Al-Arasi and Saif [34] (HTSCC): GA is employed on a randomly generated solution with a tournament selection operator; if an optimal solution is found, it is given to PSO to find the best solution. Metrics: makespan, resource utilization. Achievements: makespan 31.32% and 22.36%, and resource utilization 23.17% and 19.6%, better than GA and PSO. Tool: CloudSim.

Jana and Poray [36] (Enhanced GA-PSO): Implemented with GA, and the obtained result is given to PSO to achieve the best result. Metrics: response time, waiting time. Achievements: minimum waiting time, minimum response time. Tool: CloudSim.

Pradeep and Jacob [31] (CHSA): A local optimal solution is obtained with cuckoo search and then given to harmony search. Metrics: memory usage, cost, energy consumption, fitness, penalty, credit. Achievements: memory usage 156, cost of 0.0098$, energy consumption 0.23, high fitness profit, penalty 0.276, credit 0.724. Tool: CloudSim.

Kousalya and Radhakrishnan [35] (Improved GA-PSO): A group of PSO particles is ordered, and the coefficient of every constraint of the optimal GA solution is evaluated; the best solution is obtained when any candidate meets the termination criteria. Metrics: cost, execution time. Achievements: least cost, least execution time. Tool: CloudSim.

Srichandan et al. [26] (MHBFA): Initialized with BF; hybrid chemotaxis and hybrid reproduction are employed with PSO to obtain the optimal solution. Metrics: makespan, energy consumption, convergence, stability, solution diversity. Achievements: decreased makespan, minimum energy consumption compared with GA, PSO, and BFA, scalable, high coverage ratio. Tool: MATLAB 2013b.

Abdullahi and Ngadi [27] (SASOS): SOS is initialized first, and SA techniques are applied during the mutualism and commensalism phases of SOS. Metrics: convergence speed, response time, makespan. Achievements: faster convergence for 500 tasks, improved response time, least makespan. Tool: CloudSim.

Panwar et al. [28] (TOPSIS-PSO): Initialized with PSO, in which the fitness is calculated using TOPSIS. Metrics: makespan, transmission time, cost, resource utilization. Achievements: least makespan, minimum transmission time, lower cost, improved resource utilization. Tool: CloudSim.

Kumar and Kalra [30] (GA-ABC): Started with GA; on termination, the result is given as input to ABC, followed by initialization of DVFS. Metrics: makespan, energy consumption. Achievements: makespan lower by 75.5% and energy consumption lower by 84.14% for 40 tasks compared with Modified GA. Tool: CloudSim.

Muthulakshmi et al. (2017) (ABC-SA): Initialized with ABC, using the random selection capability of SA to increase efficiency. Metrics: task size, priority of the request. Achievements: improved handling of task size and request priority. Tool: CloudSim.

Pradeep and Jacob [31] (OCSA): Initialized with OBL; the fittest solution is selected, and updating is done by CS. Metrics: makespan, cost. Achievements: makespan 141.5 and cost 110.3 for 500 tasks. Tool: CloudSim.

Gan et al. [37] (GSAA): The population is employed with GA; after the mutation phase, the result is given to SA. Metrics: bandwidth, completion time, cost, reliability. Achievements: converges in 743 generations, function value 0.91174, optimal scheduling. Tool: MapReduce.

Jiang et al. [38] (HSSA): SA is applied to a population generated using partial HS. Metric: completion time. Achievement: least completion time. Tool: C++.

Tawfeek and Elhady [39] (hybrid swarm intelligence techniques): ABC is initialized first, and the solution is handled by different suitable modules like Bees(), Ants(), Particles(), etc. Metric: makespan. Achievement: least makespan compared with ABC, PSO, and ACO. Tool: CloudSim.

Mansouri et al. [40] (FMPSO): Initialized with PSO, in which the fitness of a solution is calculated using a fuzzy inference system. Metrics: makespan, improvement ratio, efficiency, execution time (ET). Achievements: makespan better by 13% in comparison with FUGE, ET reduced by 8% and 16%, average efficiency 3.36. Tool: CloudSim.

Azad and Navimipour [41] (HCACO): Initialization is done with ACO, and the obtained local result is given to the cultural algorithm. Metrics: makespan, energy conservation. Achievements: completion time 106.48, energy consumption 0.204. Tool: C# in cloud Azure.

Li and Han [42] (ABC-HFS): Employed with ABC of two kinds: HFS with identical parallel machines and HFS with unrelated machines. Metric: completion time. Achievement: reduced completion time. Tool: VC++6.0.

Pradeep and Jacob [32] (OLOA): LOA is implemented with initialization of the population based on OBL. Metrics: makespan, cost. Achievements: makespan 95.2 s, cost 65.2. Tool: CloudSim.

Manikandan and Pravin [43] (LGSA): Implemented with LOA, and the fitness function is evaluated using GSA. Metrics: profit, cost, energy. Achievements: max profit 0.8, min cost 0.011$, min energy 0.039. Tool: CloudSim.

Alla et al. [33] (dynamic queue meta-heuristic algorithm): The FL-PSO and SA-PSO algorithms are applied to get the optimal solution. Metrics: waiting time, makespan, cost, resource utilization. Achievements: FL-PSO gives improved waiting time, makespan, cost, and resource utilization. Tool: CloudSim.

Gabi [44] (CSM-SA): CSM is used as the first step, and then SA is implemented as the second step. Metrics: time, cost. Achievements: improved time and cost. Tool: CloudSim.


Kumar and Kalra [30] presented a hybrid GA-ABC algorithm to improve makespan and energy consumption using DVFS; the DVFS model is used for the calculation of power consumption, and the results are better than those of the Modified Genetic Algorithm (MGA). Pradeep and Jacob [31] discussed a hybrid algorithm which inherits the benefits of both Cuckoo Search (CS) and Gravitational Search (GS) to execute tasks with low cost, low resource usage, and minimum energy consumption; the results show that CGSA performs better than CS, GSA, GA, and PSO. Krishnadoss and Jacob [32] presented a hybrid algorithm that uses LOA and oppositional based learning (OBL) to improve makespan and cost; the OLOA performs better than PSO and GA. Alla et al. [33] proposed two hybrid algorithms, Fuzzy Logic with PSO and SA with PSO, to optimize the makespan, waiting time, cost, resource utilization, degree of imbalance, and queue length of the tasks in the cloud environment; the hybrid algorithms outstrip individual SA and PSO in performance. Al-Arasi and Saif [34] presented a hybrid algorithm that inherits the advantages of GA with tournament selection and PSO; the GA-PSO provides better results by reducing makespan and increasing resource utilization. Kousalya and Radhakrishnan [35] implemented a hybrid algorithm that uses an improved GA, including divisible task scheduling into foreground and background processes, together with PSO; the GA-PSO performs better in terms of execution time and resource utilization. Jana and Poray [36] presented a hybrid GA-PSO algorithm that achieves a comparatively better response time and minimizes the waiting time. Gan et al. [37] discussed a hybrid algorithm using GA and SA which considers the Quality of Service (QoS) requirements for many types of tasks, corresponding to the users' task characteristics in the cloud computing environment. Jiang et al. [38] focused on a hybridization using the merits of HS and SA which provides global search, faster convergence speed, and escape from local minima to obtain better solutions. Tawfeek and Elhady [39] proposed a hybrid swarm intelligence technique which involves ABC, PSO, and ACO; the algorithm performs better than existing algorithms. Mansouri et al. [40] presented a hybrid algorithm, FMPSO, evaluated on execution time, makespan, imbalance degree, improvement ratio, and efficiency; the results show that it does better than other strategies like FUGE, SGA, and MGA. Azad and Navimipour [41] discussed a hybrid algorithm based on a cultural algorithm, with acceptance and influence as the major operators, and the ant colony optimization algorithm, which minimizes the makespan and energy consumption; the results show that it performs better than HEFT and ACO. Li and Han [42] focused on a hybrid task scheduling technique based on the ABC algorithm with flow shop scheduling to improve the convergence rate. Manikandan and Pravin [43] proposed a hybrid algorithm that uses the benefits of LOA and GSA for multi-objective task scheduling, with profit, cost, and energy as the performance metrics; the LGSA performs better than the others. Gabi [44] presented a hybrid multi-objective algorithm comprising Cat Swarm Optimization (CSO) and SA for task scheduling; the algorithm outperformed its constituents, resulting in minimum execution time, minimum cost, and greater scalability.


5 Comparison of Performance Metrics The selection of appropriate performance evaluation metrics is also important in determining the efficiency of a scheduling algorithm. Numerous metrics have been devised over the years to capture the overall efficiency of an algorithm. Achieving that with a single metric is not possible, making the use of multiple metrics for the evaluation of an algorithm a common trend in the literature. Figure 1 is the graphical depiction of Table 1, i.e., the number of metrics used by the authors in the literature. The most commonly used metric in the literature is the makespan, as can be seen in Fig. 2.

Fig. 1 Comparison on the basis of metrics used

Fig. 2 Comparison of use of different evaluation metrics


6 Conclusion The applications of the cloud computing environment have been spiking up over the past couple of decades. With more and more services and applications being shifted to the cloud, the requirement for developing more efficient and faster algorithms, viz. task scheduling and resource scheduling algorithms, is also growing. Finding an appropriate cost-effective, efficient, and competent scheduling algorithm is a tedious task, and the scheduling algorithms used in conventional computing systems fail to perform well in the more constrained cloud environment. Relatively new techniques like LOA and ACO in hybrid form have shown promising results by outperforming the others. The performance evaluation metrics in use do not capture the comprehensive efficiency of a scheduling algorithm. The most widely used metric is the makespan, but lately there has been a shift toward energy-efficient algorithms, increasing the use of the energy efficiency metric for performance evaluation. All the studies in the literature have used the basic versions of the individual algorithms in the process of hybridization. In the future, hybridization can be done with improved variants of these algorithms, like improved harmony search or modified PSO, to eliminate the implicit limitations of the basic variants. Though numerous standard data sets are available that replicate an active cloud scenario, the research needs to be extended to dynamic scheduling techniques, making this an open research field for the future. So far, meta-heuristics have performed quite efficiently altogether, but since they draw inspiration from many natural or man-made phenomena, they remain susceptible to diverging away from scientific consistency.

References 1. Porres I, Mikkonen T, Ashraf A (2013) Developing cloud software: algorithms, applications, and tools. Turku Centre for Computer Science, Finland 2. Buyya R (2009) Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Future Gener Comput Syst 25(6):599–616 3. Kamboj S, Ghumman NS (2016) A survey on cloud computing and its types. In: 3rd International conference on computing for sustainable global development (INDIACom), IEEE, pp 2971–2974 4. Kumar R, Sahoo G (2014) Cloud computing simulation using CloudSim. arXiv preprint arXiv: 1403.3253. https://doi.org/10.14445/22315381/IJETT-V8P216 5. Singh S, Chana I (2016) A survey on resource scheduling in cloud computing: issues and challenges. J Grid Comput 14(2):217–264 6. Singh P, Dutta M, Aggarwal N (2017) A review of task scheduling based on meta-heuristics approach in cloud computing. Knowl Inf Syst 52(1):1–51 7. Thomas A, Krishnalal G, Raj VPJ (2015) Credit based scheduling algorithm in cloud computing environment. Procedia Comput Sci 46:913–920 8. Mohialdeen IA (2013) Comparative study of scheduling algorithms in cloud computing environment. J Comput Sci 9(2):252–263


9. Raju R, Babukarthik RG, Chandramohan D, Dhavachelvan P, Vengattaraman T (2013) Minimizing the makespan using hybrid algorithm for cloud computing. In: 3rd IEEE International Advance Computing Conference (IACC), IEEE, pp 957–962 10. Ge Y, Wei G (2010) GA-based task scheduler for the cloud computing systems. In: International conference on web information systems and mining, vol 2, IEEE, pp 181–186 11. Moraga RJ, DePuy GW, Whitehouse GE (2006) Meta heuristics: a solution methodology for optimization problems. In: Handbook of industrial and systems engineering, CRC Press, Florida 12. Pradeep K, Jacob TP (2018) A hybrid approach for task scheduling using the cuckoo and harmony search in cloud computing environment. Wireless Pers Commun 101(4):2287–2311 13. Glover F (1997) Tabu search and adaptive memory programming—advances, applications and challenges. In: Interfaces in computer science and operations research. Springer, Boston, pp 1–75 14. Bilgaiyan S, Sagnika S, Das M (2014) An analysis of task scheduling in cloud computing using evolutionary and swarm-based algorithms. Int J Comput Appl 89(2):11–18 15. Yang X-S, Deb S (2009) Cuckoo search via Levy flights. In: Proceedings of the world congress on nature and biologically inspired computing, pp 210–214 16. TSai PW, Pan JS, Liao BY, Chu SC (2009) Enhanced artificial bee colony optimization. Int J Innovative Comput Inf Control 5:1–12 17. Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39 18. Chiang CW, Lee YC, Lee CN, Chou TY (2006) Ant colony optimization for task matching and scheduling. IEEE Proc—Comput Digital Tech 153(6):373–380 19. Liu X, Liu J (2016) A task scheduling based on simulated annealing algorithm in cloud computing. Int J Hybrid Inf Technol 9(6):403–412 20. Chen H, Zhu Y, Hu K (2011) Adaptive bacterial foraging optimization. Abstr Appl Anal 2011 (Hindawi) 21. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248 22. Yazdani M, Jolai F (2015) Lion optimization algorithm. J Comput Des Eng 23. Alazzam H, Alhenawi E, Al-Sayyed R (2019) A hybrid job scheduling algorithm based on Tabu and Harmony search algorithms. J Supercomputing 75(12):7994–8011 24. Sharma M, Garg R (2019) HIGA: harmony-inspired genetic algorithm for rack-aware energyefficient task scheduling in cloud data centers. Eng Sci Technol Int J 25. Senthil Kumar AM, Parthiban K, Siva Shankar S (2019) An efficient task scheduling in a cloud computing environment using hybrid Genetic Algorithm-Particle Swarm Optimization (GA-PSO) algorithm. In: International Conference on Intelligent Sustainable Systems (ICISS), IEEE 26. Srichandan S, Kumar TA, Bibhudatta S (2018) Task scheduling for cloud computing using multi-objective hybrid bacteria foraging algorithm. Future Comput Inform J 3(2):210–230 27. Abdullahi M, Ngadi MA (2016) Hybrid symbiotic organisms search optimization algorithm for scheduling of tasks on cloud computing environment. PloS ONE 11(6):e0158229 28. Panwar N, Negi S, Rauthan MMS, Vaisla KS (2019) TOPSIS–PSO inspired non-preemptive tasks scheduling algorithm in cloud environment. Cluster Comput 22(4):1379–1396 29. Muthulakshmi B, Somasundaram K (2019) A hybrid ABC-SA based optimized scheduling and resource allocation for cloud environment. Cluster Comput 22(5):10769–10777 30. Kumar S, Kalra M (2019) A hybrid approach for energy-efficient task scheduling in cloud. In: Proceedings of 2nd international conference on communication, computing and networking. 
Springer, Singapore 31. Pradeep K, Jacob TP (2018) CGSA scheduler: a multi-objective-based hybrid approach for task scheduling in cloud environment. Inf Secur J—A Global Perspect 27(2):77–91 32. Krishnadoss P, Jacob P (2019) OLOA: based task scheduling in heterogeneous clouds. Int J Intell Eng Syst 12.1 33. Alla HB, Alla SB, Touhafi A, Ezzati A (2018) A novel task scheduling approach based on dynamic queues and hybrid meta-heuristic algorithms for cloud computing environment. Cluster Comput 21(4):1797–1820


34. Al-Arasi RA, Saif A (2018) HTSCC a hybrid task scheduling algorithm in cloud computing environment. Int J Comput Technol 17(2) 35. Kousalya A, Radhakrishnan R (2017) Hybrid algorithm based on genetic algorithm and PSO for task scheduling in cloud computing environment. Int J Networking Virtual Organ 17(2– 3):149–157 36. Jana B, Poray J (2018) A hybrid task scheduling approach based on genetic algorithm and particle swarm optimization technique in cloud environment. In: Intelligent engineering informatics. Springer, Singapore, pp 607–614 37. Gan G, Huang T, Gao S (2010) Genetic simulated annealing algorithm for task scheduling based on cloud computing environment. In: International conference on intelligent computing and integrated systems. IEEE 38. Jiang H, Bao Y, Zheng L, Liu Y (2012) A hybrid algorithm of harmony search and simulated annealing for multiprocessor task scheduling. In: International Conference on Systems and Informatics (ICSAI), IEEE, pp 718–720 39. Tawfeek MA, Elhady GF (2016) Hybrid algorithm based on swarm intelligence techniques for dynamic tasks scheduling in cloud computing. Int J Intell Syst Appl 8(11):61 40. Mansouri N, Zade BMH, Javidi MM (2019) Hybrid task scheduling strategy for cloud computing by modified particle swarm optimization and fuzzy theory. Comput Ind Eng 130:597–633 41. Azad P, Navimipour NJ (2017) An energy-aware task scheduling in the cloud computing using a hybrid cultural and ant colony optimization algorithm. Int J Cloud Appl Comput 7(4):20–40 42. Li J-Q, Han Y-Q (2019) A hybrid multi-objective artificial bee colony algorithm for flexible task scheduling problems in cloud computing system. Cluster Comput 1–17 43. Manikandan N, Pravin A (2019) LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res 10(2):12 44. Gabi D (2020) Hybrid cat swarm optimization and simulated annealing for dynamic task scheduling on cloud computing environment. J Inf Commun Technol 17(3):435–467

Chapter 36

Modulation Techniques for Next-Generation Wireless Communication-5G Sanjeev Kumar, Preeti Singh, and Neha Gupta

1 Introduction Today is an era of wireless communication. In the last decade, significant amount of research has been done in the field of wireless communication. Wireless communication consists of radio frequency (RF) and optical wireless communication (OWC). OWC can be divided into visible light communication (VLC) and free space optics (FSO) [1]. In FSO, laser diodes are used and LED’s are used for data transmission in VLC. Photodetector is used for reception of data for both FSO and VLC [2]. In the recent years, information and communication technologies have shown numerous growth and advancement. As the demand for communication quality is increasing, the requirement of high speed, high spectrum efficiency, and low latency communication networks is increasing. This increase in the demand can be fulfilled with the next-generation wireless communication technologies, i.e., 5G. In this paper, CP-OFDM, F-OFDM, FBMC, and UFMC as next-generation wireless communication technologies (NGWCT) are analyzed. CP-OFDM is already existing technology which suffers from inconsistencies like out of band distortion, poor frequency localization, and length of CP. These drawbacks can be overcome using filtered OFDM,

S. Kumar (B) · P. Singh · N. Gupta UIET, Panjab University, Chandigarh, India e-mail: [email protected] P. Singh e-mail: [email protected] N. Gupta e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_36


Fig. 1 5G waveform candidates

These drawbacks can be overcome using the filtered OFDM, FBMC, and UFMC modulation technologies. The waveform design in 5G uses filter-based waveforms, which can be categorized into three types: subcarrier, sub-band, and full-band filtering [3]. The different 5G waveform candidates are shown in Fig. 1. The rest of the paper is organized as follows: Sect. 2 discusses the performance analysis of the 5G waveform candidates (CP-OFDM, F-OFDM, FBMC, UFMC), and the conclusion is provided in Sect. 3.

2 5G Waveform Candidates

There has been great interest in waveforms, whether for previous-generation or next-generation technologies. The candidate waveforms can be classified into single-carrier waveforms and multi-carrier waveforms. In this paper, only multi-carrier waveforms are analyzed.

(1) CP-OFDM

Cyclic prefix OFDM has been the most deployed and researched multi-carrier technique in wired and wireless communication over the last two decades. CP-OFDM offers advantages such as orthogonality of subcarriers, adaptive modulation techniques, and low inter-symbol interference (ISI) through the use of the CP. The block diagram of OFDM is shown in Fig. 2. The transmitted signal of CP-OFDM can be represented as

$s(t) = \sum_{n=0}^{N-1} d_n \, e^{j 2\pi k n / N}$  (1)

where $d_n$ is the complex data symbol and $N$ is the total number of subcarriers [4].
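For readers who want a concrete view of Eq. (1), the following Python sketch (our own illustrative code, not from the chapter) generates one CP-OFDM symbol: random M-QAM data symbols d_n are mapped onto N subcarriers with an IFFT, and a cyclic prefix is prepended. The values N = 64, CP length 16, and M = 16 are assumptions for illustration only.

```python
import numpy as np

def cp_ofdm_symbol(N=64, cp_len=16, M=16, rng=np.random.default_rng(0)):
    """Generate one CP-OFDM time-domain symbol per Eq. (1), plus a cyclic prefix.

    N: number of subcarriers; cp_len: cyclic prefix length;
    M: square QAM constellation size (assumed, e.g. 16-QAM).
    """
    m = int(np.sqrt(M))
    # Draw random complex data symbols d_n from a square M-QAM grid
    levels = 2 * rng.integers(0, m, size=(2, N)) - (m - 1)
    d = (levels[0] + 1j * levels[1]) / np.sqrt(2 * (M - 1) / 3)  # unit average power
    # s[k] = sum_n d_n * exp(j*2*pi*k*n/N) is exactly the unnormalized IFFT
    s = np.fft.ifft(d) * N
    # Cyclic prefix: the last cp_len samples repeated at the front
    return np.concatenate([s[-cp_len:], s])

symbol = cp_ofdm_symbol()
print(symbol.shape)  # (80,) = N + cp_len samples
```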


Fig. 2 Block diagram of OFDM

The bit error rate of the M-QAM technique is given by [5]

$\mathrm{BER} = \frac{2}{\log_2(M)} \left(1 - \frac{1}{\sqrt{M}}\right) Q\!\left(\sqrt{\frac{3}{M-1} \cdot \frac{\log_2(M)\, E_b}{N_o}}\right)$  (2)

where $E_b/N_o$ is the signal-to-noise ratio and $M$ is the constellation size. In spite of several advantages, CP-OFDM carries some drawbacks such as out-of-band distortion, the length of the cyclic prefix, and poor frequency localization [6, 7]. To overcome these drawbacks, filter-based multicarrier techniques are discussed below.

(2) F-OFDM

Filtered OFDM is based on the full-band filtering principle. In this technique, only one pair of transmit and receive filters is required. The advantage of F-OFDM over CP-OFDM is reduced out-of-band distortion. Based on service and power requirements, different modulation techniques can be employed in F-OFDM [8]. The asynchronous property of F-OFDM is one of its main advantages [9]. Filter length is the main weakness of F-OFDM: if the filter length is larger than half the OFDM symbol duration, the usable frequency spectrum is reduced, and if the filter length is smaller than normal, out-of-band distortion increases [10]. The block diagram of F-OFDM is shown in Fig. 3. In Fig. 3, the different subcarriers are converted into the time domain using the inverse fast Fourier transform (IFFT). The cyclic prefix is added before full-band filtering of the time-domain signal, which is transmitted through the wireless channel, and a matched filter is used at the receiver. The cyclic prefix is removed before the FFT operation, and the data is received. The F-OFDM transmitted signal can be represented as

$s(t) = \sum_{l=0}^{L-1} s_l\!\left(n - l\left(N + N_g\right)\right)$  (3)

where $N_g$ is the CP length, $s_l(n)$ is the transmitted pulse, $l$ is the subcarrier position, and $L$ is the total number of symbols [11]. The BER of F-OFDM is the same as that of OFDM. The next candidate for 5G is FBMC.


Fig. 3 Block diagram of F-OFDM

(3) FBMC

The main principle of FBMC is subcarrier filtering. In this technique, the bandwidth is divided into subcarriers, and per-subcarrier filtering is performed. As a result, the out-of-band distortion is reduced and the spectral efficiency is increased [12]. The block diagram of FBMC is shown in Fig. 4. In Fig. 4, signal mapping is applied to the input data before offset quadrature amplitude modulation (OQAM) is performed. Then, the signal is up-sampled and filtered through a prototype filter. After conversion into the time domain, the signal passes through the wireless channel and is received at the receiver. At the receiver, the signal is filtered, and demodulation is performed before down-sampling. The main disadvantage of FBMC is that the tail of the filter impulse response is longer than in other filter-based 5G techniques, which makes it unsuitable for small-packet transmission applications. Also, subcarrier filtering increases its computational complexity compared to OFDM [13]. The transmitted signal of FBMC can be represented as

$s(t) = \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} g_{l,k}(t)\, x_{l,k}$  (4)

where $x_{l,k}$ is the transmitted signal, $l$ is the subcarrier position, $k$ is the time position, and $g_{l,k}(t)$ is the transmitted pulse [14]. The next candidate for 5G is UFMC.

Fig. 4 Block diagram of FBMC


(4) UFMC

UFMC works on the principle of sub-band filtering. In this technique, only a transmit filter is used, and a 2N-point FFT is used at the receiver. It uses suitably designed filters to overcome the drawbacks of F-OFDM and FBMC. The block diagram of UFMC is shown in Fig. 5. In Fig. 5, the transmitter filter performs sub-band filtering, the outputs of all the filters are combined, and the transmitted signal is generated. Zero padding is applied to the signal received from the wireless channel. A 2N-point fast Fourier transform (FFT) is applied before down-sampling the signal by a factor of 2. The main advantage of UFMC over FBMC is that it is suitable for short-burst transmission and reduces latency [15]. One limitation of UFMC is the inter-symbol interference introduced once the filter length increases beyond the symbol duration [16]. The transmitted UFMC signal can be represented as

$s(t) = \sum_{b=0}^{B-1} \sum_{l=0}^{L-1} \sum_{n=0}^{N-1} d_n^b\, g(l)\, e^{j 2\pi k (n-l)/N}$  (5)

where $B$ is the number of sub-bands (blocks), $L$ is the length of the filter, $d_n^b$ is the data on the $n$th subcarrier of the $b$th sub-band, and $g(l)$ represents the finite impulse response (FIR) filter windowing function [4]. The comparison of the 5G waveform candidates is given below. The power spectral density comparison of the 5G candidates is shown in Fig. 6. The analysis of Fig. 6 concludes that FBMC has the lowest OOB distortion among all the 5G techniques. UFMC and F-OFDM have almost similar PSDs, and OFDM has the largest OOB distortion among all the techniques.

Fig. 5 Block diagram of UFMC

478

S. Kumar et al.

Fig. 6 Comparison of power spectral density of 5G candidates [14]

Computational complexity is an important parameter for the implementation of a communication technology. The comparison of computational complexity is shown in Fig. 7. The parameters used for the calculation of computational complexity are L = 513 (F-OFDM), D = 12, M = 14, N = 1024, N_CP = 72, N = 664, and L = 72 (UFMC) [17]. The analysis of Fig. 7 shows that CP-OFDM has the lowest computational complexity and UFMC the highest, while FBMC and F-OFDM show almost similar performance. The spectral efficiency of a modulation technique shows how efficiently the technique utilizes the available bandwidth. Figure 8 shows the spectral efficiency comparison of the 5G techniques. The analysis of spectral efficiency shows that FBMC and F-OFDM offer improved spectral efficiency over all other modulation techniques, while CP-OFDM and UFMC show almost similar performance. The bit error rate (BER) performance of the 5G techniques is shown in Fig. 9.

Fig. 7 Comparison of computational complexity of 5G candidates


Fig. 8 Spectral efficiency comparison of 5G techniques

Fig. 9 Bit error rate comparison of 5G techniques


As shown in the analysis of the BER of the 5G modulation techniques, CP-OFDM and F-OFDM show superior performance over the UFMC and FBMC modulation techniques.

3 Conclusion

An extensive analysis of the 5G next-generation wireless technology candidates (CP-OFDM, F-OFDM, UFMC, FBMC) is provided in this paper. Computational complexity, power spectral density, BER, and spectral efficiency are used for the analysis of the 5G modulation techniques. The computational complexity analysis shows that F-OFDM and FBMC outperform the other filter-based modulation techniques. The analysis of power spectral density shows that FBMC has the lowest out-of-band distortion. FBMC has the highest spectral efficiency among all the filter-based modulation techniques, but its bit error rate performance is degraded. These next-generation wireless technologies will bring revolutionary changes to the communication sector.

Acknowledgements The author gratefully acknowledges the support of this research by the Council of Scientific and Industrial Research (CSIR), New Delhi, under the Senior Research Fellowship grant 09/135/(0798)/18-EMR-I.

References

1. Kumar S, Singh P (2020) A survey on wireless optical communication: potential and challenges. In: Intelligent communication, control and devices. Springer, Singapore, pp 47–52
2. Kumar S, Singh P (2019) A comprehensive survey of visible light communication: potential and challenges. Wireless Pers Commun 109(2):1357–1375
3. Liu Y, Chen X, Zhong Z, Ai B, Miao D, Zhao Z, Sun J, Teng Y, Guan H (2017) Waveform design for 5G networks: analysis and comparison. IEEE Access 5:19282–19292
4. Demir AF, Elkourdi M, Ibrahim M, Arslan H (2019) Waveform design for 5G and beyond. arXiv:1902.05999
5. Proakis JG, Salehi M (2001) Digital communications. McGraw-Hill, New York
6. Lin H (2015) Flexible configured OFDM for 5G air interface. IEEE Access 3:1861–1870
7. Gerzaguet R, Medjahdi Y, Demmer D, Zayani R, Doré JB, Shaiek H, Roviras D (2017) Comparison of promising candidate waveforms for 5G: WOLA-OFDM versus BF-OFDM. In: 2017 international symposium on wireless communication systems (ISWCS). IEEE, pp 355–359
8. Zhang X, Jia M, Chen L, Ma J, Qiu J (2015) Filtered-OFDM: enabler for flexible waveform in the 5th generation cellular networks. In: 2015 IEEE global communications conference (GLOBECOM). IEEE, pp 1–6
9. Zhang L, Ijaz A, Xiao P, Molu MM, Tafazolli R (2017) Filtered OFDM systems, algorithms, and performance analysis for 5G and beyond. IEEE Trans Commun 66(3):1205–1218
10. de Figueiredo FA, Aniceto NF, Seki J, Moerman I, Fraidenraich G (2019) Comparing f-OFDM and OFDM performance for MIMO systems considering a 5G scenario. In: 2019 IEEE 2nd 5G world forum (5GWF). IEEE, pp 532–535
11. Abdoli J, Jia M, Ma J (2015) Filtered OFDM: a new waveform for future wireless systems. In: 2015 IEEE 16th international workshop on signal processing advances in wireless communications (SPAWC). IEEE, pp 66–70


12. Farhang-Boroujeny B (2011) OFDM versus filter bank multicarrier. IEEE Signal Process Mag 28(3):92–112
13. Sidiq S, Mustafa F, Sheikh JA, Malik BA (2019) FBMC and UFMC: the modulation techniques for 5G. In: 2019 international conference on power electronics, control and automation (ICPECA). IEEE, pp 1–5
14. Nissel R, Schwarz S, Rupp M (2017) Filter bank multicarrier modulation schemes for future mobile communications. IEEE J Sel Areas Commun 35(8):1768–1782
15. Wei S, Li H, Zhang W, Cheng W (2019) A comprehensive performance evaluation of universal filtered multi-carrier technique. IEEE Access 7:81429–81440
16. Rani PN, Rani CS (2016) UFMC: the 5G modulation technique. In: 2016 IEEE international conference on computational intelligence and computing research (ICCIC). IEEE, pp 1–3
17. Hammoodi A, Audah L, Taher MA (2019) Green coexistence for 5G waveform candidates: a review. IEEE Access 7:10103–10126

Chapter 37

Muscle Artifact Detection in EEG Signal Using DTW Based Thresholding

Amandeep Bisht and Preeti Singh

1 Introduction

In encephalographic readings, the removal of artifacts plays a crucial role in analysis. These artifacts stem either from the subject itself (within the body) or from the external environment [1, 2]. In general, trained practitioners identify irregularities related to neural actions; however, visual analysis is a tedious task. That is why quantification of encephalographic records for efficient analysis is a pressing issue in the current scenario. One such enigma in processing these signals is the removal of muscle artifacts (MA), which is the concern of this paper. These artifacts emanate as an outcome of various muscle activities such as swallowing, chewing, talking, clenching, muscle contraction, sniffing, and head movements [3, 4]. Electromyogram (EMG) electrodes can be placed across the scalp to capture muscle contraction (especially of facial muscles). These artifacts are characterized as high-frequency activity, normally in the range above 20 Hz, and may exist in single or multiple channels. Unlike eye-related artifacts, these MAs cannot be cataloged. The primary purpose of preprocessing EEG is to extract neural information and discard residual artifacts. The majority of prevalent procedures for MA removal involve variants of the Blind Source Separation (BSS) method [5–8]. In most situations, these BSS techniques yield an effective segregation, yet the sources of the artifact still remain in question.

A. Bisht (B) · P. Singh UIET, Panjab University, Chandigarh, India e-mail: [email protected]
P. Singh e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_37


Apart from BSS, higher-order statistics (HOS) parameters, namely skewness- and kurtosis-based thresholding, are also preferred for nonlinear analysis of non-stationary biological signals [9]. At present, time–frequency-based techniques, namely wavelet analysis and empirical mode decomposition, are extensively deployed for processing non-stationary signals, taking into account their ability to provide both time and frequency information [10, 11]. For a highly non-stationary signal such as EEG, nonlinear techniques are favored over linear ones. To maintain the stationarity of the EEG signal, segmentation is performed so that EEG epochs can be characterized as semi-stationary. This paper presents one such nonlinear, distance-based method for detecting MA-contaminated segments (epochs). This work intends to compare the dynamic time warping (DTW) method with HOS-based thresholding for MA detection. The work is organized into four sections. Section 2 briefly discusses the methodology and addresses the nonlinear DTW technique used for MA detection. Section 3 presents the results and discussion. Lastly, the conclusion is discussed in Sect. 4.

2 Methodology

The workflow for the detection of muscle-artifact-contaminated segments is presented in Fig. 1. Initially, the dataset is prepared for analysis. Very few studies center on performance evaluation based on real-time data, due to the absence of ground truth data related to artifacts. In order to sidestep this issue, the majority of investigations are tested on either simulated or semi-simulated data. Once the data is prepared, data standardization is performed, followed by segmentation and artifact detection. For performance analysis, various qualitative parameters such as accuracy, sensitivity, specificity, and performance index (PI) are used.

2.1 Dynamic Time Warping (DTW)

DTW estimates the optimum distance between two time sequences of varying length in a nonlinear pattern. Usually, it employs the squared Euclidean distance to align the samples of one time series to the other via nonlinear matching. Its key utility is that it supports elastic transformation of time sequences for identifying shape irregularity, thereby reducing the distortion effect. A warping path is evaluated in a nonlinear manner by summing the local distance with the global distance, that is, the minimal distance of the adjoining elements. The ability to recognize minute contraction or extension makes it a suitable technique for similarity estimation [12–14]. Out of the possible feasible paths, an optimum (lowest cost) warping path is chosen. Let p and q be two segments of length N and M, respectively. The DTW distance between the two segments is given as:

$d_{ij} = a_{ij} + \min\left(d_{i-1,j-1},\; d_{i,j-1},\; d_{i-1,j}\right)$  (1)


Fig. 1 Flowchart for MA detection

$a_{ij} = \left(p_i - q_j\right)^2$  (2)

where i = 1, 2 … N; j = 1, 2 … M; and a_ij is the squared Euclidean distance. The warping path is subject to various constraints, such as:

• Boundary conditions: If E = e_1, e_2 … e_k, then e_1 = (1, 1) and e_k = (M, N). This confines the warping path to begin and end in the diagonally opposite corner cells of the matrix.
• Continuity: If e_k = (x, y) and e_{k−1} = (x′, y′), then x − x′ ≤ 1 and y − y′ ≤ 1. This constrains the permissible moves in the path to adjacent cells (including diagonally adjacent cells).

486

A. Bisht and P. Singh

• Monotonicity: If e_k = (x, y) and e_{k−1} = (x′, y′), then x − x′ ≥ 0 and y − y′ ≥ 0. This forces the points in E to be monotonically spaced along the time axis.

$\mathrm{Path\ cost}(p, q) = \sum_{i,j} d(i, j)$  (3)

Now, the path cost is used as a similarity parameter in the form of a distance. In this paper, a reference epoch of 1 s duration has been generated by averaging the segments of clean EEG. The warping path cost for dissimilar segments is higher, whereas it tends to zero for similar segments. Overall, the higher the path cost, the more uncorrelated the segments are and the higher the probability of contamination.
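A minimal Python sketch of the DTW recursion in Eqs. (1)-(3) is given below; it fills the accumulated-distance matrix with squared Euclidean local costs and returns the terminal path cost used here as the similarity measure. The function name and the NumPy-based implementation are our own illustrative choices.

```python
import numpy as np

def dtw_path_cost(p, q):
    """Accumulated DTW cost between 1-D segments p (length N) and q (length M).

    Local cost a_ij = (p_i - q_j)^2 (Eq. 2); the global recursion follows Eq. (1),
    and the returned value d[N-1, M-1] is the warping path cost of Eq. (3).
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    N, M = len(p), len(q)
    d = np.full((N, M), np.inf)
    d[0, 0] = (p[0] - q[0]) ** 2
    for i in range(N):
        for j in range(M):
            if i == 0 and j == 0:
                continue
            a = (p[i] - q[j]) ** 2
            # Predecessors allowed by the continuity/monotonicity constraints
            prev = min(d[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
                       d[i, j - 1] if j > 0 else np.inf,
                       d[i - 1, j] if i > 0 else np.inf)
            d[i, j] = a + prev
    return d[-1, -1]
```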

2.2 Performance Parameters

The present work visualizes the performance of DTW by calculating a 2 × 2 confusion matrix for the simulated EEG signals of our dataset [15]. The performance parameters implemented are discussed below.

2.2.1 Accuracy (ACC)

Accuracy shows how closely the predicted measurement is related to the actual one (true measurement). It is the ratio of true results to the number of cases examined. It is defined as:

$\mathrm{ACC} = \frac{TP + TN}{TP + FN + FP + TN}$  (4)

where
• TP: True positive. It represents artifact segments that were detected visually as well as by the algorithm.
• FN: False negative. It represents those artifact segments that are actually present but were not marked by the algorithm.
• FP: False positive. It represents segments that were incorrectly marked as artifact; in simple words, these are artifacts predicted by the algorithm that are not actually artifacts.
• TN: True negative. It represents segments that are neither identified by the algorithm nor visually distinguishable as artifact.

2.2.2 True Positive Rate (TPR)

TPR is also known as sensitivity. It is the ability of the technique to correctly identify outliers among the outliers actually present in the signal. The higher the sensitivity, the more efficient the technique is in detecting artifacts. It is obtained as:

$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$  (5)

2.2.3 True Negative Rate (TNR)

Specificity is also known as the true negative rate. It relates to the algorithm's ability to correctly mark segments that are not artifacts. The higher the specificity, the more efficient the technique is in identifying clean segments. It is defined as:

$\mathrm{Specificity} = \frac{TN}{TN + FP}$  (6)

2.2.4 Performance Index (PI)

It is a parameter that captures the performance of the diagnostic test performed and estimates the probability of an informed decision. The higher the value, the better the performance of the technique. It is given as:

$\mathrm{PI} = \frac{TP}{TP + FN} + \frac{TN}{TN + FP} - 1$  (7)

3 Results

In this work, the simulation has been confined to semi-simulated data. The contaminated data has been generated by superimposing simulated muscle artifact on pure EEG. The real-time EEG data has been downloaded from the publicly available CHB-MIT scalp EEG database [15, 16]. The muscle-artifact-contaminated segments are simulated using the 'randn' function in MATLAB. A total of 15 EEG channels with 75,000 samples each have been generated, and the sampling rate has been set to 256 Hz. Both the pure EEG and the simulated artifact have been standardized (mean = 0 and standard deviation = 1) before being summed. Once the semi-simulated dataset has been created, segments of 1 s duration have been segregated and stored.
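The dataset construction just described can be sketched as follows. The chapter uses MATLAB's 'randn'; this NumPy analogue is an illustrative stand-in, and the number of contaminated windows per channel is an assumption on our part.

```python
import numpy as np

fs = 256                                   # sampling rate (Hz), as in the chapter
n_channels, n_samples = 15, 75_000
rng = np.random.default_rng(1)

def standardize(x):
    """Zero mean, unit standard deviation along the last axis."""
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

# Stand-in for pure EEG from the CHB-MIT database (loading code omitted)
eeg = standardize(rng.normal(size=(n_channels, n_samples)))

# Simulated muscle artifact: randn-style noise injected into chosen 1-s windows
contaminated = eeg.copy()
ground_truth = np.zeros((n_channels, n_samples // fs), dtype=bool)
for ch in range(n_channels):
    # 20 contaminated windows per channel is an assumed value for illustration
    for w in rng.choice(n_samples // fs, size=20, replace=False):
        contaminated[ch, w * fs:(w + 1) * fs] += standardize(rng.normal(size=fs))
        ground_truth[ch, w] = True

epochs = contaminated[:, :(n_samples // fs) * fs].reshape(n_channels, -1, fs)
print(epochs.shape)  # (15, 292, 256): channels x 1-s segments x samples
```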


Fig. 2 Semi-simulated dataset: channel 1 (illustrating MA contaminated segment)

Figure 2 portrays one of the channels from the semi-simulated data, depicting an enlarged view of a window contaminated by muscle artifact. Next, the simulation results and a comparative analysis of the DTW technique with the existing HOS-based thresholding are discussed. After the initial preprocessing steps, the DTW distance has been computed between the EEG segments and the reference segment. The comparison between DTW, kurtosis, and skewness is organized in Table 1 (only for 30 segments). The average of these distances is calculated and taken as the threshold for MA segment detection; segments exceeding this threshold value are labeled as artifact. Since the ground truth regarding MA is known in this analysis, performance evaluation via visual inspection is convenient. Figure 3 illustrates the performance of the three techniques. An overall sensitivity of 100%, specificity of 73.84%, and accuracy of 77.33% is achieved by the DTW technique, as seen in Fig. 3, when the average distance is taken as the threshold. The lowest performance is achieved by skewness-based thresholding.

Table 1 DTW and HOS based similarity distance for channel 1

Segment no.  Kurtosis   Skewness   DTW
Seg1         3.131392    0.277262  25.32436
Seg2         2.373736   −0.06375   24.3364
Seg3         2.543057    0.103903  19.99433
Seg4         2.562842   −0.29052   28.93225
Seg5         3.675354    0.270233  28.71142
Seg6         3.066587   −0.45099   19.01
Seg7         2.639128    0.278799  19.6695
Seg8         2.263012    0.333144  34.43143
Seg9         2.685815   −0.04657   22.27757
Seg10        2.773319   −0.27777   19.42949
Seg11        3.136594   −0.00482   21.98409
Seg12        2.963369   −0.35912   19.89911
Seg13        2.473992    0.010682  16.64952
Seg14        3.46059    −0.10755   23.71646
Seg15        8.293332   −1.1909    32.01604
Seg16        5.119728    0.891532  33.77622
Seg17        3.966694    0.188105  47.39
Seg18        3.728156   −0.3932    32.86633
Seg19        2.543319    0.014612  19.67011
Seg20        2.590372   −0.24088   23.21819
Seg21        2.799977   −0.03567   25.85992
Seg22        2.682162   −0.23733   19.40363
Seg23        3.809176   −0.72434   18.93668
Seg24        3.769495   −0.3257    20.34713
Seg25        3.849149    0.306931  25.05023
Seg26        2.975982   −0.16002   23.43643
Seg27        8.42038     1.480459  27.03239
Seg28        2.540604    0.034881  14.04382
Seg29        3.20582     0.03372   24.5045
Seg30        2.16235    −0.1464    20.92048

However, if the threshold is varied over the range from the minimum to the maximum value, a maximum performance index (PI) of 80.769% is achieved at a threshold of 30.658, as observed from Fig. 4. Figure 4 presents the maximum PI achieved at the optimum threshold for each technique. It is evident from the figure that DTW surpasses HOS-based thresholding in all parameters. Overall, DTW proves to be a better solution for detecting muscle-artifact-contaminated segments.
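The threshold selection just described (average distance as the default, then a sweep from the minimum to the maximum distance to maximize PI) can be sketched as below; the function name and the number of sweep steps are illustrative assumptions.

```python
import numpy as np

def best_threshold(distances, labels, steps=200):
    """Sweep thresholds over the range of distances; return the one maximizing PI.

    distances: per-segment DTW path costs (NumPy array);
    labels: boolean ground truth (True = artifact-contaminated segment).
    """
    best = (None, -np.inf)
    for thr in np.linspace(distances.min(), distances.max(), steps):
        pred = distances > thr                     # segments above threshold = artifact
        tp = np.sum(pred & labels); fn = np.sum(~pred & labels)
        fp = np.sum(pred & ~labels); tn = np.sum(~pred & ~labels)
        if tp + fn == 0 or tn + fp == 0:
            continue
        pi = tp / (tp + fn) + tn / (tn + fp) - 1   # Eq. (7)
        if pi > best[1]:
            best = (thr, pi)
    return best  # (optimum threshold, maximum PI)
```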

Fig. 3 Average performance parameters for the semi-simulated dataset (percentages). Kurtosis: TPR 50, TNR 69.23, ACC 66.67. Skewness: TPR 70, TNR 50.77, ACC 53.33. DTW: TPR 100, TNR 73.85, ACC 77.33

Fig. 4 Performance index versus optimum threshold. Kurtosis: threshold 3.031, PI 32.31%. Skewness: threshold 0.1831, PI 25.38%. DTW: threshold 30.659, PI 80.77%

4 Conclusion

In this work, we investigate and compare nonlinear dynamic time warping (DTW)-based thresholding against the well-established HOS-based thresholding for MA detection. With the semi-simulated dataset, DTW-based thresholding performed better than kurtosis- and skewness-based thresholding.


In terms of average specificity (73.846%), sensitivity (100%), and accuracy (77.33%), DTW outperforms both HOS methods. To further enhance the performance, the threshold is varied over the range from minimum to maximum, and the PI is calculated for each threshold. In terms of overall performance, skewness showed the lowest performance, followed by kurtosis. In contrast, DTW achieves a PI of 80.769% at an optimum threshold of 30.658. Future work will focus on improving DTW and on increasing the number of channels in the dataset.

Acknowledgments This research work is supported by the Technical Education Quality Improvement Project III (TEQIP-III) of MHRD, Government of India, assisted by the World Bank under Grant Number P154523 and sanctioned by UIET, Panjab University, Chandigarh (India).

References

1. Jung CY, Saikiran SS (2016) A review on EEG artifacts and its different removal technique. Asia-Pacific J Converg Res Interchange 2(4):43–60
2. Bisht A, Kaur C, Singh P (2018) Recent advances in artifact removal techniques for EEG signal processing. Book Intell Commun Control Dev 989:385–392. https://doi.org/10.1007/978-981-13-8618-3
3. Frolich L, Dowding I (2018) Removal of muscular artifacts in EEG signals: a comparison of linear decomposition methods. Brain Inform 5:13–22
4. Uriguen JA, Garcia B (2018) EEG artifact removal—validation. J Med Imag Health Inform 6:30360–30652
5. Makeig S, Bell AJ, Jung T-P, Sejnowski TJ (1996) Independent component analysis of electroencephalographic data. In: Advances in neural information processing systems, vol 8. MIT Press, pp 145–151
6. Vigário RN (1997) Extraction of ocular artefacts from EEG using independent component analysis. Electroencephalogr Clin Neurophys 103(3):395–404
7. Jung T-P, Makeig S, Humphries C, Lee T-W, Mckeown MJ, Iragui V, Sejnowski TJ (2000) Removing electroencephalographic artifacts by blind source separation. Psychophysiology 37:163–178
8. Vigário R, Särelä J, Jousmiki V, Hämäläinen M, Oja E (2000) Independent component approach to the analysis of EEG and MEG recordings. IEEE Trans Biomed Eng 47(5):589–593
9. Chen X, Liu A, Chen Q, Liu Y, Zou L, McKeown MJ (2017) Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics. Comput Biol Med 88:1–10
10. Garg N, Ryait HS, Kumar A, Bisht A (2017) An effective method to identify various factors for denoising wrist pulse signal using wavelet denoising algorithm. Biomed Mater Eng 29(1):53–65
11. Bono V, Das S, Jamal W, Maharatna K (2016) Hybrid wavelet and EMD/ICA approach for artifact suppression in pervasive EEG. J Neurosci Methods 267:89–107
12. Bisht A, Garg N, Ryait HS, Kumar A (2016) Comparative analysis of DTW based outlier segregation algorithms for wrist pulse analysis. Indian J Sci Technol 9(47):1–5
13. Shaw L, Routray A, Sanchay S (2017) A robust motifs-based artifacts removal technique from EEG. Biomed Phys Eng Express 3(3):1–18
14. Garg N, Bisht A, Ryait HS, Kumar A (2018) Identification of motion outliers in wrist pulse signal. Comput Electr Eng 67:1–15
15. Shoeb A, Edwards H, Connolly J, Bourgeois B, Treves ST, Guttag J (2004) Patient-specific seizure onset detection. Epilepsy Behavior 5(4):483–498
16. Shoeb A, Guttag J (2010) Application of machine learning to epileptic seizure onset detection. In: 27th international conference on machine learning (ICML), pp 975–982

Chapter 38

Human Activity Recognition in Ambient Sensing Using Sequential Networks

Vinay Jain, Divyanshu Jhawar, Sandeep Saini, Thinagaran Perumal, and Abhishek Sharma

1 Introduction

Human activity recognition has become a necessity in ambient sensing homes, which are equipped with sensors all around. With the advancement of Internet of Things (IoT) technologies, more and more smart homes and living places are being created to enable a better life. However, merely equipping living places with smart sensors and appliances does not fully exploit their abilities. They need to be synchronized with an automated system so that they can work efficiently and best help the users. Smart homes have been very helpful for patients, the elderly, and childcare. However, smart homes are not self-sufficient in supplying the best care as per the users' requirements, and the most important need is that they should work automatically according to the needs of the users. To bring this feature into smart home technology, machine learning has played a vital role. Machine learning has been able to supply great results in predicting human activities in ambient environments quite effectively, providing great accuracy.

V. Jain · D. Jhawar Department of Computer Science and Engineering, The LNM Institute of Information Technology, Jaipur, India
S. Saini (B) · A. Sharma Department of Electronics and Communication Engineering, The LNM Institute of Information Technology, Jaipur, India e-mail: [email protected]
A. Sharma e-mail: [email protected]
T. Perumal Department of Computer Science, UPM, Malaysia, Seri Kembangan, Malaysia e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_38


But most of the research has been done on the single-resident environment, which is not the case in daily living. Humans are social animals; they work, live, and eat with others, and so is the case in smart homes as well. In the past decade, researchers have been working on multi-resident environments as well, but we have still not reached the same results as in the single-resident environment. In this paper, we take up the task of recognizing human activities in the multi-resident environment, working on the ARAS human activity datasets [1]. ARAS comprises two datasets, for House A and House B, each having two residents, whose activities were recorded for a period of thirty days. Many others have worked on the ARAS dataset using various machine learning algorithms; here we try to improve on this work using variations of some machine learning algorithms. Related work on activity recognition is presented in Sect. 2. Section 3 describes the methodology involved in the paper, and the corresponding results are explained in Sect. 4.

2 Related Works

Plenty of research has been done on activity recognition, mainly through wireless wearables that collect data through accelerometer and gyroscope sensors. This is even more complicated in multi-resident environments, where the activities are much more complex and involve multiple residents. The need for these types of systems has increased in the past decade due to the expanding demand for addressing health-related problems. The activities detected by wearables are mostly restricted to physical movements like walking, running, or jumping and are thus limited in scope; wearables are also not very comfortable, as the person has to wear them all the time. Video-based activity recognition is much more effective than wearables but is extremely costly and difficult to handle [2, 3]. Ambient sensors have a wide scope for the recognition of activities like cooking, watching television, or talking, and can be placed anywhere without disturbing the routine of the residents. They also have a wide range of capabilities, as they can record temperature, pressure, humidity, or light. Much research has been done by collecting data from artificial laboratories, created to imitate a real-world setting [4, 5]. A whole lot of work has been done on the ARAS multi-resident dataset for activity recognition using machine learning algorithms. Hidden Markov models have been worked on extensively in the multi-resident environment, where multiple activity labels can be combined into a single label to be used by HMMs [6–8]. Work by Sedky et al. [9] has shown the application of a dozen machine learning algorithms, using the F1 score to report accuracy on the ARAS dataset. Another work, by Tran et al. [10], has investigated deep learning models for the classification of activities in the multi-resident environment. Apart from these, conditional random fields [11, 12] and decision trees [13] have also been popular in research on activity recognition in the multi-resident environment.


Unsupervised machine learning techniques have also been used by researchers for multi-resident activity classification [14]; their point was that patients and elderly people might not always be able to label activities during data collection, so they performed the labeling after training on the ARAS dataset. In this paper, we try to improve the results using LSTM models (deep LSTM and mini-batch LSTM).

3 Methodology

There are many papers discussing the use of LSTMs for human activity recognition. Although their datasets were collected using wearable sensors, this led to continuous-valued data and better accuracy for an individual. Wearable sensors are widely used by the present generation, but senior citizens still hesitate to use such gadgets; also, for health reasons, some people are recommended to stay away from such sensors. The ARAS dataset uses sensors that sense changes in the surroundings rather than on the person. The problem is that the sensor output is in binary format, which leads to a very sparse dataset and very low accuracy in predicting activities. In this paper, we show our approach to solving this problem. There are extended patterns while engaging in several activities: information from past activities might affect the next activity to be done. LSTMs efficiently learn such long dependencies among activity events. They are also effective in recognizing the order of events; overall, collections of activities that are done together, and the sequence in which they are performed, form a perfect scenario where LSTMs can be exploited to great effect. In this paper, we explored two RNN-based approaches, mini-batch LSTM and deep LSTM, to model the activities of multiple residents in each house. But first, let us fix some notation for understanding the algorithms behind the approaches. Here x_t = {x(1, t), x(2, t), …, x(s, t)} and y_t = {y(1, t), y(2, t)}, where s is the size of the input vector. During training, we use mini-batches of time from t1 to t2, with sensor readings x(t1:t2) and activities y(t1:t2). Finally, during testing, we take sensor readings x(1:T) and predict activities y(1:T).

$a^{(t)} = b + W h^{(t-1)} + U x^{(t)}$  (1)

$h^{(t)} = \tanh\!\left(a^{(t)}\right)$  (2)

$o^{(t)} = c + V h^{(t)}$  (3)

$\hat{y}^{(t)} = \mathrm{softmax}\!\left(o^{(t)}\right)$  (4)


3.1 Mini-Batch LSTM Approach

LSTMs process the input sequentially over time, just like RNNs [15]. A vanilla RNN is a feedforward network unrolled over time, in which, at each timestamp, the input x^(t) and the hidden output of the earlier timestamp h^(t−1) are used to find the current timestamp output y^(t). The hidden layer outputs are used to predict the probability of occurrence of each output, as shown in the equations above. The activity with the maximum probability is then selected as the activity output for that timestamp. But as synced, sequenced many-to-many RNNs grow larger, the derivatives of the weights and biases become too small to cause substantial improvements in the model. LSTMs are used instead to help solve this problem of vanishing gradients and to take care of long-term dependencies. So, in addition to the components of a vanilla RNN, LSTMs also have a forget gate and a cell memory. The cell memory is used to store information that can be used in later stages; the forget gate checks whether there should be a change in the cell memory or not. The nonlinearity of the hidden output uses tanh, whereas the gates use the sigmoid, as shown in the equations below. During training, we used a mini-batch of size 128 to train our model for up to 15 epochs, and the parameters were updated after every mini-batch. During testing, a part of the dataset not used during training was used to predict the output. The hyper-parameters used were selected using a validation set.

$f^{(t)} = \sigma_g\!\left(W_f x_t + U_f h_{t-1} + b_f\right)$  (5)

$i^{(t)} = \sigma_g\!\left(W_i x_t + U_i h_{t-1} + b_i\right)$  (6)

$o^{(t)} = \sigma_g\!\left(W_o x_t + U_o h_{t-1} + b_o\right)$  (7)

$c^{(t)} = f^{(t)} \circ c^{(t-1)} + i^{(t)} \circ \sigma_c\!\left(W_c x_t + U_c h_{t-1} + b_c\right)$  (8)

$h_t = o^{(t)} \circ \sigma_h\!\left(c^{(t)}\right)$  (9)
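For clarity, Eqs. (5)-(9) translate into a single NumPy time step as sketched below; the dictionary-of-weights layout is an assumption made for illustration, and '∘' denotes element-wise multiplication.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM time step implementing Eqs. (5)-(9).

    P is a dict of weight matrices W*, U* and bias vectors b* for the
    forget (f), input (i), output (o) and cell (c) gates.
    """
    f = sigmoid(P["Wf"] @ x_t + P["Uf"] @ h_prev + P["bf"])   # Eq. (5)
    i = sigmoid(P["Wi"] @ x_t + P["Ui"] @ h_prev + P["bi"])   # Eq. (6)
    o = sigmoid(P["Wo"] @ x_t + P["Uo"] @ h_prev + P["bo"])   # Eq. (7)
    c = f * c_prev + i * np.tanh(P["Wc"] @ x_t + P["Uc"] @ h_prev + P["bc"])  # Eq. (8)
    h = o * np.tanh(c)                                         # Eq. (9)
    return h, c
```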

3.2 Deep LSTM Approach

Deep LSTMs were earlier used, with great success, to help solve the problem of speech recognition, which involves data similar to this type of dataset. Deep LSTMs are LSTMs stacked above each other such that the hidden states of one LSTM layer are the inputs of the next LSTM layer at the same timestamp [16], as shown in Fig. 1.

Fig. 1 Stacked LSTM architecture

This stacking of LSTMs helps to capture the complexity of the dataset more accurately. It also decreases the number of neurons each LSTM has to learn, which helps each LSTM train faster. Stacked LSTMs also allow hidden states to work on different timescales, thus optimizing the overall model. However, there can be a problem of over-fitting, for which we used a dropout layer between the LSTMs. Due to the dropout layer, some of the neurons, according to the chosen probability, are shut down, and the weights of those neurons do not change. For this, a bit-mask of the same size as the number of neurons is reserved, where the bits are randomly selected to be either 1 or 0; the final output is formed from the neuron outputs multiplied element-wise with the bit-mask. This avoids relying too much on a single neuron for the final result, and it also keeps the other neurons from becoming redundant. Overall, the model comprises an input layer, LSTM layers, dropout layers, a dense layer, and lastly an output layer. As above, training was done using a mini-batch of size 128 and for 15 epochs. Testing was done on a test set that was separated from the data and not used during the training stage. The hyper-parameters used were selected using a validation set. Parameters were updated using the Adam optimizer, and dropout with a probability of 0.95 was applied. We implemented the mini-batch LSTM and deep LSTM using basic LSTM cells in TensorFlow. We also implemented techniques like dropout, stacking, a multi-output LSTM classifier, and a dense softmax layer to obtain the activity values.
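A minimal TensorFlow/Keras sketch of the stacked-LSTM model described above is given below. The layer sizes, sequence window length, and the use of two softmax heads (one per resident) are our assumptions for illustration; the chapter specifies only basic LSTM cells on TensorFlow with dropout, stacking, a multi-output classifier, a dense softmax, mini-batches of 128, 15 epochs, and the Adam optimizer.

```python
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_ACTIVITIES = 60, 32, 27   # window length assumed; 32 features, 27 activities

inputs = tf.keras.Input(shape=(SEQ_LEN, N_FEATURES))
x = tf.keras.layers.LSTM(128, return_sequences=True)(inputs)   # first LSTM layer
# The chapter's dropout "probability of 0.95" likely denotes the keep
# probability; Keras expects the drop rate, hence 0.05 here (assumption).
x = tf.keras.layers.Dropout(0.05)(x)
x = tf.keras.layers.LSTM(128)(x)                               # second (top) LSTM layer
r1 = tf.keras.layers.Dense(N_ACTIVITIES, activation="softmax", name="resident1")(x)
r2 = tf.keras.layers.Dense(N_ACTIVITIES, activation="softmax", name="resident2")(x)

model = tf.keras.Model(inputs, [r1, r2])
model.compile(optimizer="adam",                                # Adam, as in the chapter
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, [y1_train, y2_train], batch_size=128, epochs=15)
```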


3.3 Dataset

We conducted our analysis on a publicly available dataset called the ARAS multi-resident dataset [1]. It is an ambient sensing activity recognition dataset of multiple residents, named Resident 1 and Resident 2, in two different houses, namely House A and House B. While working with a multi-resident dataset like ARAS, where the inputs are the sensor readings, the output we want is not a single label but multiple labels, based on the number of residents in the house; this makes the task a multi-label classification problem rather than ordinary single-label classification. The single-label problem is just ordinary classification, where we input the data and the output is 0 or 1. The specification of multi-label classification is a bit more elaborate: the input is still the input vector, but the output is 0 or 1 for each of the labels used for classification. Multi-output classification is about predicting labels that are not just 0 or 1 but have a dimension greater than 2. In the ARAS dataset, there are 27 activities to which we can classify, so we need to use multi-output classification with dimension 27. The features present in the ARAS dataset are binary sensor readings: 1 when a sensor is activated and 0 when it is deactivated. There are 27 possible activities, out of which each resident performs one activity, which can be different from the other resident's activity. Additionally, more features were added to improve the model's predictions. One of these is the day of the week since, as humans, we have habits, or rather a schedule, which we follow throughout the week. So, the dataset maintains the day of the week in the feature vector in the form of a categorical encoding from 1 to 7. Another feature regarding the time of occurrence of an activity was maintained: each day was divided into four parts of six hours each, with a categorical encoding from 1 to 4. So, earlier we had only 20 features, the 20 sensor readings. After the addition of the new features, the total grew to 32, of which four features encode the six-hour slot, seven features encode the day of the week, and one feature is the time (in seconds) of the occurrence of an activity, apart from the 20 sensor readings, as presented in Table 1 (a sketch of this feature encoding is given after the table).

Table 1 Statistics of ARAS dataset

Sub-dataset  No. of residents             Size of dataset  No. of features
House A      2 males, both aged 35        86,400 × 30      32
House B      Married couple, avg. age 34  86,400 × 30      32
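The feature construction described above can be sketched as follows; the array layout and function name are our own illustrative choices.

```python
import numpy as np

def build_features(sensors, t_sec, weekday):
    """Build the 32-dim feature vector: 20 sensors + 4 six-hour slots + 7 weekdays + time.

    sensors: length-20 binary array; t_sec: second of the day (0..86399);
    weekday: 1..7, as in the chapter's categorical encoding.
    """
    slot = np.zeros(4)
    slot[t_sec // 21_600] = 1            # four 6-hour slots of 21,600 s each
    day = np.zeros(7)
    day[weekday - 1] = 1                 # one-hot day of the week
    return np.concatenate([sensors, slot, day, [t_sec]])   # 20 + 4 + 7 + 1 = 32

x = build_features(np.zeros(20), t_sec=37_000, weekday=3)
print(x.shape)  # (32,)
```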


4 Results

We divided the ARAS dataset into two parts, 24 days for training and 6 days for testing, i.e., an 80:20 split. We calculated separate results for each case, that is, for House A, House B, and the combined data of both houses. Accuracy is used as the evaluation metric to compare the results with previous research studies and is the widely preferred metric. The performance of a model is evaluated by the accuracy of the residents' activities, comparing the ground truth values and the predicted values on the test set D_test.

$\mathrm{accuracy} = \frac{1}{|D_{\mathrm{test}}|} \sum_{y^{(1:T)} \in D_{\mathrm{test}}} \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}\!\left(y^t = \hat{y}^t\right)$  (10)

Here y^t denotes the actual ground truth values, ŷ^t the predicted values, |D_test| the size of the test dataset, and t a timestamp. The formula in Eq. 10 is the standard formula for calculating the accuracy of a trained model. It takes all the predicted values and their corresponding ground truth values and counts only true positives and true negatives, i.e., cases where the predicted value equals the actual value. This count is divided by the size of the test dataset, which equals the sum of true positives, true negatives, false negatives, and false positives. We used a stratified split for cross-validation. Three different models were trained, for House A, House B, and both combined. Each model was trained ten times, and the final result is the average of the ten runs. For optimizing the result, we used different values for the hyper-parameters, which included the length of the LSTM, the number of neurons per LSTM, the number of layers in the stacked LSTM, the learning rate of the model used to update parameters, and the dropout probability, which was used only during the training stage. The best accuracy results for each case are written in bold text. Training the mini-batch LSTM model gave a test accuracy of 67.309% for House A, 83.095% for House B, and 72.62% for the combined dataset. Training the deep LSTM model gave a test accuracy of 74.375% for House A, 82.20% for House B, and 77.735% for the combined dataset. The accuracy of the models described above is also presented in Fig. 2.
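Equation (10) amounts to averaging per-timestamp exact matches over all test sequences; a short NumPy version (illustrative) is:

```python
import numpy as np

def sequence_accuracy(y_true, y_pred):
    """Eq. (10): mean over test sequences of the per-timestamp match rate.

    y_true, y_pred: arrays of shape (num_sequences, T) with activity labels.
    """
    per_sequence = (np.asarray(y_true) == np.asarray(y_pred)).mean(axis=1)
    return per_sequence.mean()
```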

Fig. 2 Accuracy of separate and combined dataset

5 Conclusion and Future Work

From the above-presented experiments, we can observe that sequential networks based on gated recurrent neural networks are a very good fit for the activity recognition scenario. From the experiments, we can also infer that different techniques used in solving synced many-to-many input-output systems, such as speech recognition, can also help in this environment. Furthermore, in an ordinary house, it is common to find multiple residents, and it would be better to consider the impact of one resident's action on another resident's action. Although we have encoded a multiple-output LSTM, the dependencies among the labels can be further understood and exploited to improve the above model. This will be considered in future work.

References

1. Alemdar H, Ertan H, Incel OD, Ersoy C (2013) ARAS human activity datasets in multiple homes with multiple residents. In: 2013 7th international conference on pervasive computing technologies for healthcare and workshops. IEEE, pp 232–235
2. Poppe R (2010) A survey on vision-based human action recognition. Image Vis Comput 28(6):976–990
3. Ni B, Wang G, Moulin P (2011) RGBD-HuDaAct: a color-depth video database for human daily activity recognition. In: 2011 IEEE international conference on computer vision workshops (ICCV workshops). IEEE, pp 1147–1153
4. Intille SS, Larson K, Beaudin J, Nawyn J, Tapia EM, Kaushik P (2005) A living laboratory for the design and evaluation of ubiquitous computing technologies. In: CHI'05 extended abstracts on human factors in computing systems, pp 1941–1944
5. Kientz JA, Patel SN, Jones B, Price E, Mynatt ED, Abowd GD (2008) The Georgia Tech aware home. In: CHI'08 extended abstracts on human factors in computing systems, pp 3675–3680
6. Chen R, Tong Y (2014) A two-stage method for solving multi-resident activity recognition in smart environments. Entropy 16(4):2184–2203
7. Singla G, Cook DJ, Schmitter-Edgecombe M (2010) Recognizing independent and joint activities among multiple residents in smart environments. J Ambient Intell Human Comput 1(1):57–63


8. Cook DJ (2010) Learning setting-generalized activity models for smart spaces. IEEE Intell Syst 2010(99):1
9. Sedky M, Howard C, Alshammari T, Alshammari N (2018) Evaluating machine learning techniques for activity classification in smart home environments. Int J Inf Syst Comput Sci 12(2):48–54
10. Tran SN, Nguyen D, Ngo TS, Vu XS, Hoang L, Zhang Q, Karunanithi M (2019) On multi-resident activity recognition in ambient smart-homes. Artif Intell Rev:1–17
11. Crandall AS, Cook DJ (2008) Resident and caregiver: handling multiple people in a smart care facility. In: AAAI Fall symposium: AI in eldercare: new solutions to old problems, pp 39–47
12. Hsu KC, Chiang YT, Lin GY, Lu CH, Hsu JYJ, Fu LC (2010) Strategies for inference mechanism of conditional random fields for multiple-resident activity recognition in a smart home. In: International conference on industrial, engineering and other applications of applied intelligent systems. Springer, pp 417–426
13. Prossegger M, Bouchachia A (2014) Multi-resident activity recognition using incremental decision trees. In: International conference on adaptive and intelligent systems. Springer, pp 182–191
14. Emi IA, Stankovic JA (2015) SARRIMA: smart ADL recognizer and resident identifier in multi-resident accommodations. In: Proceedings of the conference on wireless health, pp 1–8
15. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
16. Dyer C, Ballesteros M, Ling W, Matthews A, Smith NA (2015) Transition-based dependency parsing with stack long short-term memory. arXiv:1505.08075

Chapter 39

Towards the Investigation of TCP Congestion Control Protocol Effects in Smart Home Environment

Pranjal Kumar and P. Arun Raj Kumar

1 Introduction

A smart home refers to a modern home that has appliances, lighting, and/or electronic devices that can be controlled remotely by the owner, often via a mobile application. It is ubiquitous computing that involves incorporating smartness into dwellings for comfort, health care, safety, security, and energy conservation [1]. A simple smart home AP has 256 MB of storage and 128 MB of flash memory installed, and supports 802.11g connections at speeds of up to 54 Mbps, with a Gigabit Ethernet LAN interface with Power over Ethernet (PoE) [2]. Smart home devices such as smartphones, desktops, etc., transmit large multimedia data streams over the network. The network buffer requirements for these devices are therefore fairly high, and the home AP bandwidth is not QoS compliant; this results in congestion. To avoid congestion, the trivial solution is to increase the buffer size. But, due to the rapid increase in the number of devices in a smart home, buffer capacity will not be sufficient to meet the demand. Therefore, congestion control protocols [3] in the transport layer mitigate congestion in the network. The Transmission Control Protocol (TCP) is the predominant protocol in digital network communications, and many TCP variants [4] like TCP Tahoe, TCP Vegas, TCP NewReno, TCP Westwood, TCP CUBIC, Compound TCP, etc., are available in the literature.

P. Kumar · P. Arun Raj Kumar (B) National Institute of Technology, Calicut, India e-mail: [email protected] P. Kumar e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_39


Table 1 Comparative study of existing congestion control protocols

Protocol       Advantage                                Disadvantage
NewReno        Improvement of the fast recovery phase   One RTT is required to detect each packet loss
TCP Vegas      Duplicate packets not required           Decreased performance for wrong RTT estimation
TCP Westwood   Uses bandwidth estimation                Unable to differentiate packet losses
TCP CUBIC      Uses cubic function                      Does not consider RTT
Compound TCP   Loss-delay-based TCP                     Delay-based component works only when in a steady state

Table 2 TCP variants and devices

Protocol       Platform                         Device
TCP Westwood   Android OS, Symbian OS, Web OS   Smartphones, devices controlled by smartphone applications
TCP CUBIC      Linux                            Desktops, laptops
TCP NewReno    Windows XP                       Desktops, laptops
TCP Compound   Windows 2008, Windows Vista      Desktops, laptops

Every TCP variant tries to maximize the congestion window in order to achieve higher throughput. From Table 1, it is evident that each protocol suffers from an individual drawback when the network is congested. The objective is to analyze the existing protocols used in different devices, shown in Table 2, for favorable results in congestion control. From the experiments, it is found that the throughput increases if the congestion control protocol is the same in all the devices. From the experiments conducted, Compound TCP achieved higher throughput than the other existing congestion control protocols due to the inclusion of a delay-based component along with the loss-based component. Figure 1 depicts a basic smart home setup, where the AP is connected directly to the Internet and several smart home devices like smartphones, personal computers (PCs), electronic appliances, etc., are connected to the Internet through the AP. It incorporates a wide range of sensors and actuators in collaboration with other wireless devices [5]. These devices connected to the AP (as shown in Fig. 1) exhaust the available buffer capacity, resulting in the loss of critical packets (small data packets required for triggering an application/device). When all the devices are connected inside a network (here, the Smart Home Network), the network serves as a whole unit that receives/transmits data from the Internet; it is no longer about a single device dealing with the problem of congestion. So, primarily, this setup serves as a heterogeneous mixture of various congestion control algorithms running in different devices to achieve maximum throughput.


Fig. 1 Smart home environment

Though different congestion avoidance protocols are already available in the literature, none of them is able to reduce the packet loss rate and packet drop rate and increase the throughput, because of the individual drawbacks of the existing congestion avoidance schemes (these algorithms help in curbing congestion but are still not efficient enough to prevent critical packet losses). Therefore, there is a need to increase the throughput and mitigate the congestion rate in the SHE for effective bandwidth utilization. Compound TCP serves as the best congestion avoidance scheme for the SHE. The contributions of this paper are as follows:

• To find the optimal existing congestion control protocol for the SHE.
• To improve the throughput by finding a suitable threshold value for Compound TCP in the Smart Home Environment.
• To prioritize critical packets using Weighted Fair Queuing (WFQ).

2 Related Work and Problem Formulation

A comprehensive analytical model has been developed for congestion control, combining various factors such as buffer loss, MAC contention, and channel errors, to study their impact on multiple TCP connections over MAC [6]. There exists unfairness among TCP connections in WiFi networks due to bottleneck buffer overflow at the Access Point. An ns-2-based simulation analysis of TCP Tahoe, TCP Reno, TCP NewReno, TCP SACK, TCP Vegas, and TCP New Jersey is done in [7], where the effect of the random packet loss rate and mobility on the TCP variants is studied. Through collaboration with multi-homed mobile hosts, the throughput performance in WiFi networks was also increased in [8]. In [9], the maximum TCP window size that provides fair access in IEEE 802.11 infrastructure is obtained using analytical modeling. The behavior of various TCP variants in the presence of random losses is investigated in [10], where it is shown that the deterioration in throughput depends not only on the bandwidth-delay product but also on the packet loss probability.


A comparative study of various TCP versions over a wireless link in the presence of random losses is done in [11]. A queuing model is developed for finding session delays in short-lived flows when shifting from activity (download) to idle periods in [12]. The performance of long-lived TCP connections is analyzed with only downloading clients in [11, 13]. In [13], particularly, the throughput of TCP flows over wireless LAN is analyzed when restricted by the advertised window. TCP's flow control with both uploading and downloading clients is discussed in [14], where the distribution of active stations in MAC is computed when TCP connections compete with UDP flows. The throughput unfairness is discussed and evaluated in [15]. The standard TCP congestion avoidance algorithm employs an additive increase, multiplicative decrease (AIMD) scheme, which leads to a conservative linear growth function for increasing the congestion window and a multiplicative decrease function when a loss is encountered. According to [RFC2581, PADHYE], in a steady-state environment with a packet loss rate of p, standard TCP's average congestion window is proportional to the inverse of the square root of the packet loss rate. Therefore, a large window is only sustainable when the packet loss rate is low. More aggressive variants increase the congestion window more quickly in the absence of packet loss and decrease it more gently upon a packet loss. In a mixed network environment [16], the aggressive behavior of such approaches may severely degrade the performance of regular TCP flows whenever the network path is highly utilized, causing self-induced packet losses on bottleneck links and pushing back the throughput of the regular TCP flows. Various devices in the SHN are equipped with the different variants of the TCP congestion avoidance algorithm discussed above. When all the devices are connected in a network (here, the SHN), the network serves as a whole unit that receives and transmits data from the Internet; it is no longer about a single device dealing with the problem of congestion. So, primarily, this setup serves as a heterogeneous mixture of various congestion avoidance algorithms running together to achieve maximum throughput. The problem, then, is to increase the throughput and mitigate the congestion rate in the SHE. Compound TCP's only requirement is that it needs a steady state for the delay component to work (the addition of the delay component makes it a very useful TCP congestion avoidance protocol) [17]. A smart home network is not a very high-speed network, and hence it reaches steady state early (on average, the number of devices connected in the network ranges from 4 to 10, among which only a few transmit/receive high-speed data), which makes Compound TCP a very strong congestion control protocol for this setting.

3 Proposed Approach

In CTCP, a scalable delay-based component is added to the standard TCP congestion avoidance algorithm; the aggressiveness of this component is controlled by a rapid-increase rule when the network is sensed to be underutilized. A new state variable, dwnd (delay window), is introduced into conventional TCP to control the delay-based component of CTCP.



The conventional congestion window, which controls the loss-based component in CTCP, remains untouched. The congestion window and dwnd together now control the sending window [8]:

wnd = min(cwnd + dwnd, awnd),    (1)

where awnd is the advertised window from the receiver. The combined window for CTCP from (1) allows up to (cwnd + dwnd) packets in one RTT. Therefore, the increment of cwnd on the arrival of an ACK is modified accordingly:

cwnd = cwnd + 1/(cwnd + dwnd)    (2)
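As a minimal illustration (not the authors' implementation), the delay-based update of CTCP can be sketched as follows; the constants α, β, k and ζ are typical values reported in the CTCP literature and should be treated as assumptions, while the diff/gamma test used below is described in the next paragraph.

```python
# Typical CTCP constants from the literature; treated as assumptions here.
ALPHA, BETA, K, ZETA = 0.125, 0.5, 0.75, 1.0

def update_dwnd(dwnd, cwnd, diff, gamma, loss):
    """One congestion-avoidance step of the delay-based component.
    diff  : estimated backlogged packets at the bottleneck queue
    gamma : congestion threshold (30 conventionally; 18 proposed for SHN)"""
    win = cwnd + dwnd
    if loss:                    # multiplicative decrease on a packet loss
        dwnd = win * (1 - BETA) - cwnd / 2
    elif diff < gamma:          # path underutilized: rapid (binomial) increase
        dwnd += ALPHA * win ** K - 1
    else:                       # early congestion sensed: back off gracefully
        dwnd -= ZETA * diff
    return max(dwnd, 0.0)       # the delay window is never negative
```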

During slow start, CTCP maintains the same behavior as conventional TCP: dwnd is initialized to zero while the connection is in the slow-start phase. When the connection enters congestion avoidance, the delay-based component becomes effective, and the estimated number of backlogged packets on the bottleneck queue is referred to as 'diff'. Congestion is detected by comparing diff against a threshold gamma: if diff < gamma, the network path is assumed to be underutilized; otherwise, the path is assumed to be congested. The delay window evolves according to a binomial function [18]. All the packets travel to the Internet via the Access Point in the SHN, which uses First Come First Serve (FCFS) as its underlying queuing discipline. If Weighted Fair Queuing (WFQ) is incorporated instead of FCFS as the underlying queuing discipline [19], critical packet loss can be reduced further [20]. The data packets corresponding to basic home devices such as lighting systems, fans, washing machines, etc., which are controlled through a smartphone-like application, are small in size but need to be processed promptly: these are primarily ON/OFF control packets and, if left unprocessed, can create issues inside the smart home. Usually, critical packets are not served because the FCFS queue is occupied with multimedia packets; hence, heavy rejection of packets is observed. If, on the other hand, we use Weighted Fair Queuing instead of FCFS, we can prioritize the flows. A packet stream within a single session of a single request is known as a flow or conversation. WFQ is a flow-based method that sends packets across the network and ensures the efficiency of packet transmission that is vital to interactive traffic; it helps to stabilize network congestion between the individual flows of packets. Arriving packets are classified into different flows, and a FIFO (First In First Out) queue is assigned to each flow. During access point configuration, the total number of flows is created and bandwidth for each flow is assigned, with the lowest priority given to high-bandwidth flows (data packets in high-priority queues are processed first). Fields from the IP and TCP/UDP headers, such as the source IP address, destination IP address, protocol number, Type of Service (ToS), source TCP/UDP port number and destination TCP/UDP port number, are used to identify flows [21]. Based on these fields, a hash value is generated; packets belonging to the same traffic flow end up with the same hash value. A packet is assigned a sequence number for scheduling purposes, and the priority of a packet or flow affects its scheduling sequence number.



Fig. 2 Proposed methodology

The sequence number for an arriving packet is calculated by adding the modified size of the arriving packet to the sequence number of the last packet in the flow queue. The arriving packet's size is modified by multiplying the packet size by the flow weight; the packet priority (in the ToS field) is inversely proportional to its weight. WFQ follows these main criteria:
• Traffic is sorted into conversations (flows), with a dedicated queue for each stream, which reduces starvation, delay and jitter in the queue.
• Bandwidth is allocated fairly and accurately across all flows, reducing scheduling time and providing service guarantees.
• In assigning bandwidth, IP Precedence is used as the weight.
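A minimal sketch of this flow classification and sequence numbering is given below; the field names and the MD5-based hash are illustrative assumptions, not the exact Access Point implementation.

```python
import hashlib

last_seq = {}  # per-flow sequence number of the last enqueued packet

def flow_hash(pkt):
    """Classify a packet into a flow from its header fields, as described above."""
    key = (f"{pkt['src_ip']}:{pkt['dst_ip']}:{pkt['proto']}:"
           f"{pkt['src_port']}:{pkt['dst_port']}:{pkt['tos']}")
    return hashlib.md5(key.encode()).hexdigest()

def sequence_number(pkt, weight):
    """seq = last seq of the flow queue + packet size scaled by the flow
    weight; the weight is inversely related to the ToS priority, so
    high-priority packets receive smaller (earlier) sequence numbers."""
    fid = flow_hash(pkt)
    seq = last_seq.get(fid, 0) + pkt['size'] * weight
    last_seq[fid] = seq
    return seq  # the scheduler transmits the packet with the smallest seq first
```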



4 Experimental Analysis

In this section, the performance metrics used in validation, the simulation tools, and the experimental results are presented. Simulation parameters are shown in Table 3. All the devices in the Smart Home Environment (SHE), be they mobile hosts, desktops, etc., are deployed with Compound TCP as their congestion avoidance protocol rather than conventional protocols. The SHE is simulated with the proposed setting, and the packet loss rate, drop rate, and throughput are measured and compared with the existing congestion avoidance schemes. Then, the smart home setup is tested with a modified CTCP having a smaller threshold value than the original algorithm. For further reduction of critical packet loss, the Weighted Fair Queuing approach is discussed. The following experiments are conducted:
• Finding the optimal congestion control protocol.
• Finding a suitable threshold value for CTCP in the Smart Home Environment.

4.1 Simulation Tools

The proposed methodology has been implemented and evaluated over the NS-2 simulator [22], a discrete event simulator.

Experiment 1: Finding the Optimal Congestion Control Protocol

Here, four sending nodes behave as different devices inside the Smart Home Environment. Every device incorporates an already existing congestion control scheme (e.g., the Linux platform uses TCP Cubic). The four nodes here represent devices with TCP Vegas, TCP Westwood, TCP NewReno and TCP Cubic. The central node (Access Point) behaves as a waypoint, collecting the data and sending it over to the TCP sink agents at the bottom.

Table 3 Experiment 1 results

Parameter                       Scenario 1    Scenario 2
Number of generated packets     815           780
Number of sent packets          801           761
Number of forwarded packets     67            51
Number of dropped packets       53            48
Number of lost packets          75            52
Minimal packet size             28            28
Maximal packet size             7112          7112
Average packet size             571.7795      542.0112
Number of bytes sent            500,280       478,220
Number of forwarded bytes       213,988       192,028
Number of dropped bytes         185,304       156,720



Fig. 3 Packet drop rate

Four sink agents are connected to the node, receiving data from each of the four sending devices. Traffic is generated using FTP agents. Similarly, in test scenario 2, every parameter is kept the same as in scenario 1, but now each device is deployed with Compound TCP as its congestion control protocol. The results of the simulation are shown in Table 3. The packet drop rates for test scenarios 1 and 2 are 6.616% and 6.307%, respectively, as shown in Fig. 3. The packet loss rates are 9.363% and 6.833%, respectively. Throughput came out to be 84.01% and 86.85%, respectively. The packet size is kept at 1024 bytes for each node, as shown in Table 3. In scenario 1, the number of packets sent is greater than in the Compound TCP setup (scenario 2), while the total number of packet drops is lower in the latter case; i.e., the network is more congested when the existing schemes are used, supporting the hypothesis that Compound TCP is a better choice for congestion control in an SHE. The packet generation rate in scenario 1 is relatively greater than in scenario 2 because the primary target of loss-based schemes is only to maximize cwnd without taking care of packet loss; hence, packet drop rates are higher. Compound TCP takes into account both loss- and delay-based parameters, and thus packet drops are minimized. The throughput achieved when TCP Vegas, Reno or Westwood is used uniformly instead of Compound TCP in the same setting is as follows: with TCP Vegas, Westwood and Reno deployed uniformly on all the nodes, throughput came out to be 83.125%, 84.06% and 84.4%, respectively, as shown in Fig. 4. If the mobile devices are instead deployed with NewReno or Reno rather than TCP Westwood, the performance of the network is affected significantly: with a packet size of 8192 bytes, throughput came out to be 75.84%. TCP Westwood also considers loss- and delay-based parameters, but not simultaneously: either module works at a time, depending on whether the network segment is wired (delay-based) or wireless (loss-based). From the mobile host to the TCP proxy the network is wireless, while fixed hosts are connected via a wired network. Therefore, TCP Westwood enhances the throughput, but not as significantly as Compound TCP, in which both modules work simultaneously to control cwnd [7].

Experiment 2: Proposed Compound TCP

In Compound TCP, in order to detect congestion, as indicated by the number of backlogged packets, and to quantify the tradeoff between throughput and buffer requirement, a threshold value is predefined, which is set at 30 as in [7].



Fig. 4 Throughput analysis

This value states whether the network is congested or not, as mentioned in Sect. 2. The value is set at 30 because the Internet deals with high-speed networks, unlike a smart home network. In a smart home setup, we are not dealing with very high-speed networks; thus, the bound put on the number of packets in the buffer (backlogged packets) may be reduced, and a convenient threshold value for the SHN can be found. In this experiment, the threshold value is set at 18 (proposed CTCP for the SHN) and the total number of nodes is increased to 9. Throughput after deploying the proposed CTCP to only 5 nodes (Environment 1) came out to be nearly 88%. A throughput of 80.9% is observed when the already existing protocols are applied to the setup with 9 nodes (Environment 2), as shown in Table 4, clearly a decrease when the total number of devices in the SHN increases. On the other hand, the throughput recorded when the proposed CTCP is used in the same setup is 87.34% and 87.88% with packet sizes of 4096 bytes and 1024 bytes, respectively, as shown in Table 5. The calculated packet loss is 15.38% in the heterogeneous setup and 2.05% in the proposed CTCP setup, as shown in Fig. 5.

Table 4 Environment 2 with heterogeneous setup

Parameter                       Value
Number of generated packets     2443
Number of sent packets          2386
Number of forwarded packets     265
Number of dropped packets       88
Number of lost packets          367
Minimal packet size             28
Maximal packet size             1612
Average packet size             198.8537
Number of bytes sent            479,772
Number of forwarded bytes       199,280
Number of dropped bytes         51,764

Table 5 Environment 2 with proposed CTCP

Parameter                       Value
Number of generated packets     30,856
Number of sent packets          28,376
Number of forwarded packets     84
Number of dropped packets       2856
Number of lost packets          583
Minimal packet size             28
Maximal packet size             1612
Average packet size             302.815
Number of bytes sent            9,064,896
Number of forwarded bytes       86,040
Number of dropped bytes         196,652

Fig. 5 Analysis

It is to be noted that a mobility factor was considered for every node in the simulated environment. Generally, this is not the case in a smart home; hence, if we remove the mobility factor, the recorded throughput is 90%.

5 Conclusion and Future Works

In this paper, we have analyzed the Smart Home network performance of an 802.11-based WLAN in the presence of FTP traffic. We developed the hypothesis that Compound TCP is able to control congestion in an SHN much better than the previously existing algorithms. Though Compound TCP is designed for high-speed conventional networks, it works well in medium-bandwidth networks such as the Smart Home Network (SHN).



We also proposed a modified CTCP with the threshold value tweaked from 30 to 18, which performs even better for small home networks. We found excellent agreement between the model predictions and the simulation results from ns-2. Finally, we argued for the incorporation of WFQ as the underlying queuing discipline for Access Points to further reduce critical packet loss. Along with Compound TCP, a model can be developed using Weighted Fair Queuing as the packet scheduler in the Access Point. The threshold value of CTCP that optimizes critical packet loss for this setup (SHN) can be analyzed both analytically and using simulation results.

References

1. Soliman M, Abiodun T, Hamouda T, Zhou J, Lung C (2013) Smart home: integrating internet of things with web services and cloud computing. In: 2013 IEEE 5th international conference on cloud computing technology and science, vol 2, pp 317–320, Dec 2013
2. Nuaymi L, El-Sayah J (2004) Access point association in IEEE 802.11 WLAN. In: Proceedings. 2004 international conference on information and communication technologies: from theory to applications, pp 211–212, Apr 2004
3. Sergiou C, Antoniou P, Vassiliou V (2014) A comprehensive survey of congestion control protocols in wireless sensor networks. IEEE Commun Surv Tutor 16(4):1839–1859
4. Chaudhary P, Kumar S (2017) Comparative study of TCP variants for congestion control in wireless network. In: 2017 International conference on computing, communication and automation (ICCCA), pp 641–646
5. Song Z, Zhou X (2013) Research and simulation of wireless sensor and actuator networked control system. In: 2013 25th Chinese control and decision conference (CCDC), pp 3995–3998, May 2013
6. Pokhrel S, Panda M, Vu H (2017) Analytical modeling of multipath TCP over last-mile wireless. IEEE/ACM Trans Netw
7. Waghmare S, Nikose P, Parab A, Bhosale SJ (2011) Comparative analysis of different TCP variants in a wireless environment. In: 2011 3rd international conference on electronics computer technology, vol 4, pp 158–162, Apr 2011
8. Elmannai W, Razaque A, Elleithy K (2014) Simulation based study of TCP variants in hybrid network. arXiv:1410.5127
9. Harigovindan VP, Babu AV, Jacob L (2012) Ensuring fair access in IEEE 802.11p-based vehicle-to-infrastructure networks. EURASIP J Wireless Commun Netw 2012(1):168
10. Chan A, Tsang D, Gupta S (1998) Performance analysis of TCP in the presence of random losses/errors 1:513–518
11. Anjum F, Tassiulas L (2003) Comparative study of various TCP versions over a wireless link with correlated losses. IEEE/ACM Trans Netw 11:370–383
12. Ebrahimi-Taghizadeh S, Helmy A, Gupta S (2005) TCP versus TCP: a systematic study of adverse impact of short-lived TCP flows on long-lived TCP flows. In: Proceedings IEEE 24th annual joint conference of the IEEE computer and communications societies, vol 2. IEEE, pp 926–937
13. Bruno R, Conti M, Gregori E (2005) Throughput analysis of UDP and TCP flows in IEEE 802.11b WLANs: a simple model and its validation. In: 2005 workshop on techniques, methodologies and tools for performance evaluation of complex systems (FIRB-PERF'05). IEEE, pp 54–63
14. Pokhrel SR, Panda M, Vu HL, Mandjes M (2015) TCP performance over Wi-Fi: joint impact of buffer and channel losses. IEEE Trans Mobile Comput 15(5):1279–1291



15. Gorbenko A, Tarasyuk O, Kharchenko V, Abdul-Hadi AM (2013) Estimating throughput unfairness in a mixed data rate Wi-Fi environment. Int Conf Dig Technol 2013:181–184
16. Gopalasamy S, Auslander DM (1993) Performance analysis of a mixed traffic network in a manufacturing environment. In: 1993 American control conference, pp 1513–1517, June 1993
17. Pokhrel SR, Williamson C (2018) Modeling compound TCP over WiFi for IoT. IEEE/ACM Trans Netw 26(2):864–878
18. Sjani M, Andriani E et al (2016) Reducing multimedia transmission delay by shortening TCP acknowledgement route. In: 2016 international seminar on application for technology of information and communication (ISemantic). IEEE, pp 114–117
19. Melander B, Bjorkman M, Gunningberg P (2002) First-come-first-served packet dispersion and implications for TCP. In: Global telecommunications conference, GLOBECOM'02, vol 3. IEEE, pp 2170–2174, Nov 2002
20. Yin H, Wang Z, Sun Y (2004) A weighted fair queuing scheduling algorithm with (m, k)-firm guarantee. In: 30th annual conference of IEEE industrial electronics society, IECON 2004, vol 3, pp 2034–2039, Nov 2004
21. Kim H, Hou JC (2004) Improving protocol capacity for UDP/TCP traffic with model-based frame scheduling in IEEE 802.11-operated WLANs. IEEE J Sel Areas Commun 22(10):1987–2003
22. Kong R (2008) The simulation for network mobility based on NS2. In: 2008 international conference on computer science and software engineering, vol 4, pp 1070–1074, Dec 2008

Chapter 40

Efficient Information Flow Based on Graphical Network Characteristics Rahul Saxena , Mahipal Jadeja , and Atul Kumar Verma

1 Introduction

Information flow in a networked environment has been an area of great interest and a challenging area to work on. A lot of research is focused on structuring the information flow in an optimized manner, where the term optimization is contextual to the application or problem. Spreading news on a social media network, broadcasting an emergency alert on a road traffic network, sending a message (unicast or broadcast) in a vehicular ad hoc network (VANET) environment, etc., all demand quick information transmission from one station to another, reaching network consensus (coverage) as soon as possible. On the contrary, there may be situations, such as virus spread in a social media network or inhibiting false information flow in a network, where the need is to contain the information flow or redirect it over other routes so that damage control can be done. Such situation-driven decision frameworks require mathematical modeling of the various scenarios discussed, over which relations can be established to find viable methods and approaches to achieve the best possible result for a particular instance.

R. Saxena (B) · M. Jadeja · A. K. Verma Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, India e-mail: [email protected]; [email protected] M. Jadeja e-mail: [email protected] A. K. Verma e-mail: [email protected] R. Saxena Department of Information Technology, Manipal University, Jaipur, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_40




One of the powerful modeling tools to simulate a real-world environment is the graph [1]. Graph and network theory offer various interesting metrics and results which are handy in finding solutions to realistic problems. This paper mainly focuses on depicting the applicability of basic graph metrics over two scenarios [2]. The first scenario deals with how information can be spread quickly in a large network, preferably a social network, by minimizing the number of nodes visited in the network and having shorter path lengths. The basic idea is to identify the highly important nodes of the network and establish communication among such nodes so that information quickly travels to the different parts of the network and large network coverage is achieved with a minimum exchange of information within the network. In the second scenario, the situational aspect changes: the motive is to control the false flow of information in the network. Considering the case of electrical circuits, the power generating stations may inject high voltage into hubs (power distribution stations), which may distribute unevenly throughout the network. If such a glitch happens, the voltage in the network can be redirected to nodes having low PageRank scores and a degree very low in comparison with the average out-degree of the network, so that the damage can be controlled. Following the ideology discussed above, the flow of the paper is as follows: Sect. 1 deals with the basic introduction and the focus areas of the manuscript. Section 2 deals with the state-of-the-art methods providing computational solutions to the problems of interest using various theories and ideologies. Section 3 covers the discussion of the scenarios, mapping the problems to a graphical model and evaluating the graph measures. Section 4 defines the algorithmic approach and the modifications needed to the already existing PageRank framework for the various cases considered. Finally, based on the methods proposed and the justifications made with respect to the situations, conclusions are drawn and future directions are discussed.

2 Related Work

There have been various investigations of information flow analysis using graph-theoretic ideas, and interesting views have been presented using lemmas and results. Frey et al. [3] proposed a probabilistic framework with constraints on edge selection to optimize information propagation. The authors used various combinations of Dijkstra, FP-tree growth and Naïve Bayes algorithms to predict the expected information flow in a subgraph. The crux was how the maximum information content of a network can be achieved with a minimal set of 'k' edges; however, the running time of the algorithm was found to be of a high order. Chen et al. [4] suggested pruning of an influential-node-based approach to prevent the overlap of information using the community structure. Saito et al. [5] also proposed a similar approach where the idea was to use link removals to identify the amount of information loss happening in the network. Based on this information, important links of the network are identified.



However, the process again has a high computational overhead, as link removals and combinations are tested over and over again. Temel et al. [6] came up with a graph-theoretic approach where a weighted information-knowledge digraph is constructed to identify critical components of the graph. Bagrow et al. [7] gave a theoretical framework with 'clocked' nodes to predict limits in online social activity; the paper predicts that the information content about an individual node can be obtained with 95% accuracy only by accessing its social ties. All the approaches discussed so far have targeted how influence maximization helps in efficient information spread. However, there have also been attempts to minimize or block the information flow inside the network so as to control the flow of false information. Liu et al. [8] proposed a compartment approach for refuting rumors in social networks: a control framework becomes operative based on the classification of nodes into four categories, rumor-neutral, rumor-received, rumor-believed and rumor-denied. Indu et al. [9] proposed a nature-inspired model that simulates rumor spread in social networks with a forest fire spread model. The results have been evaluated over Twitter data, taking into consideration factors like the type of user account, likes, hashtags, etc. to model the node behavior. This information has then been used to distinguish between rumor and genuine nodes; however, the paper does not focus on how to curb the information flow from the rumor nodes. Li et al. [10] discussed a game-theoretic approach to model the pattern of information flow in the network. The belief propagation between neighboring nodes is identified and, based on the difference between the information flows of neighboring nodes, irrational nodes are identified; a controlling factor is then added to make the flow of false information die out soon in the network. Hartmann et al. [11] came up with a case study analysis of the disinformation flow around the MH17 plane crash of Malaysian Airlines. The study focuses on how text tagging can help achieve better classification of nodes to identify false-information influencers and repeaters. Further, Sucharitha et al. [12] present an extensive survey to find emerging patterns in social media based on graph-theoretic and network analysis approaches to look for suspicious groups. Based on the discussion so far, the current state of the art suggests that, though a lot of work has been done especially in the area of information spread and influence maximization in network theory, very few works have come up with a generalized framework.

3 Network Characteristics-Based Measures and Solutions

3.1 Graphical Modeling

The graph is a popular tool to imitate any natural scenario in the form of a network. From the mathematical definition, a graph G(V, E) is a set of vertices and edges, represented as 'V' and 'E'. Extending the definition, a connected graph is one in which there is a path between every pair of vertices.



Fig. 1 Schematic diagram of a directed graph

Further, if the edges are directed, the graph is referred to as a directed graph; if the edges are undirected or bidirectional, the graph is referred to as an undirected graph (Fig. 1). In the diagrammatic representation of a graph, the information is stored as relationships between the various nodes. Every node is linked to others based on some similarity measure, and the information about the link is represented by the edge joining the two entities. These relationships are stored in computer memory as an adjacency matrix, adjacency list, edge list, etc. [1]. The connecting edge carries information regarding the closeness of the two nodes, common features, the cause of the linkage between the two, and various other vital details.
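For illustration, the snippet below (with hypothetical node ids) shows how an edge list can be turned into an adjacency list in Python:

```python
# Edge list of a small directed graph (node ids are hypothetical).
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]

# Build the adjacency list: each node maps to the nodes it points to.
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, [])  # ensure sink nodes also appear as keys

print(adj)  # {1: [2, 3], 2: [3], 3: [4], 4: []}
```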

3.2 Mapping Real-World Scenarios to Graphs

In general, the entities of any network form the nodes, and the relationships between the nodes are mimicked through edges. For example, mapping a road traffic network will have cities as the nodes of the graph, and the various routes connecting these cities form the edges of the network; the labels on these edges carry information such as the cost of traversing that route. Similarly, a computer network or VANET can be simulated using a graph where the connected devices form the nodes and the connectivity range is represented by an edge. To map a biological gene network, the interacting genes form the nodes and the interconnections between the genes are shown using the edges connecting them; the edge labels contain information such as the type of connection, bond strength, etc. These are a few examples showing how efficiently a real-world problem can be converted to a graphical model. The graphical model has various interesting mathematical implications which allow one to derive useful insight about the network structure, connectivity, etc., which can be utilized to build the solution to the objective problem being looked at (Fig. 2).

Fig. 2 Simulating a city connectivity network through graph



3.3 Graphical Analysis Using Stanford Network Analysis Platform (SNAP)

In this section, general graph metrics are analyzed using the SNAP network analysis tool. Understanding these graph measures and their significance will help in proposing the model for the various network scenarios. A Wikipedia directed graph data set [13], given as an edge list between the various nodes, is considered as the reference network; it has 7115 nodes and 103,689 edges. The distribution of the out-degree (the number of edges directed from a source node to any other node in the network) comes out as shown below. As evident from Fig. 3, the out-degree distribution is not well spread (it is concentrated near the axes), so a log-log scale distribution curve has been considered to understand the distribution properly (Fig. 4). These results have been obtained over the Wikipedia edge-list network using SNAP library functions. The curve seems to follow a power-law [14], heavy-tailed distribution. The average degree comes out to be approximately four, i.e., four outgoing connections for the majority of nodes in the graph.

Fig. 3 Out-degree distribution curve

Fig. 4 Log-log out-degree distribution curve



Similarly, another metric to be taken into consideration is the path length (the distance, or number of hop counts, to reach from one node to another). The expression for the average path length is given as:

h̄ = (1 / (2 E_max)) Σ_{i, j≠i} h_ij    (1)

where h_ij refers to the distance from node i to node j and E_max refers to the maximum number of edges [n(n − 1)/2], n being the total number of nodes. This measure gives an idea of the spread of the graph, while the average degree distribution details how many neighboring nodes can be reached. Considering these two measures, our focus will now be to identify the nodes of high importance based on the PageRank scores of the nodes in the network. As per the PageRank flow model, a page is considered important if it is referred to by other important pages in the network. Here, 'pages' in the definition refer to entities of the social network graph, which are synonymous with 'nodes' in the network. So, the rank of a node j is defined as:

r_j = Σ_{i→j} (r_i / d_i)    (2)

where r_i refers to the importance factor of an in-link neighbor i of node j, and d_i refers to the number of outgoing links of node i. More insight into PageRank algorithms can be found in [15]. This measure forms the basis of our model since, for information flow to happen efficiently, information must pass through such nodes. However, our model further takes out-degree, path length and weakly connected components (WCC) into consideration. The first two parameters have been discussed earlier. WCCs refer to parts of the network in which not every node has a path to every other node, as opposed to strongly connected components (SCCs). Figure 5 shows an SCC of a graph, where each node is reachable from every other node in the graph. However, the presence of a node in this SCC may cause the information to flood, but the flooding may not be effective; the reason is quite simple, as there will be multiple overlaps and the information will stay within this small group. To account for this, the WCC plays an important role, as a larger chain will be covered if more nodes are part of the WCC rather than the SCC.

Fig. 5 Strongly connected component of graph



The largest WCC component, evaluated on our Wikipedia graph data set using SNAP API functions, was found to have 7066 nodes and 103,663 edges, which is approximately equal to the total number of nodes and edges in the whole graph. In this section, an understanding of the different graph measures has been developed. In the next section, combining these metrics, algorithmic models are proposed: (i) to enhance the information flow in the network and (ii) to control the information flow in the network.
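The measurements reported above can be reproduced with the SNAP Python bindings; the following is a minimal sketch in which the edge-list file name is an assumption:

```python
import snap  # Stanford Network Analysis Platform, Python bindings

# Load the Wikipedia directed graph from its edge list (file name assumed).
G = snap.LoadEdgeList(snap.PNGraph, "wiki-Vote.txt", 0, 1)

# Out-degree distribution, as plotted in Figs. 3 and 4.
deg_cnt = snap.TIntPrV()
snap.GetOutDegCnt(G, deg_cnt)
avg_out = sum(p.GetVal1() * p.GetVal2() for p in deg_cnt) / G.GetNodes()
print("average out-degree:", avg_out)

# Largest weakly connected component (7066 nodes / 103,663 edges reported above).
wcc = snap.GetMxWcc(G)
print("largest WCC:", wcc.GetNodes(), "nodes,", wcc.GetEdges(), "edges")
```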

4 Proposing Algorithmic Approaches for Information Flow in Network

4.1 Scenario 1: Rapid Information Propagation Through High Influential Nodes

Considering a social network or a networked environment of sensor nodes where rapid information flow is of prime importance, the information flow must start from a node referred to as a 'Hub' of the network. For this, we need to rank the nodes as per their PageRank scores. The algorithmic procedure is as follows [15]:

Algorithm to identify Influential Nodes

PageRank_scores(Graph G)   // accepts the edge list of Graph G as input
Begin
    for each node j in Graph G
        Set importance factor r_j = 1/N   // N being the total number of nodes
        if in-degree of node j is zero
            set r_j = 0
        else
            calculate the r value of node j over each in-neighbor i as:
                r_j = β · Σ_{i→j} (r_i / d_i) + (1 − β)/N    (3)
        Store each node score in a list: Score[j] = r_j
    end for
    Store node ids, ordered by sorted Score[j], in list node_order
End
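A compact Python rendering of this procedure (a power-iteration sketch over a dict-based adjacency list, with dangling nodes ignored for brevity) could look as follows:

```python
def pagerank_scores(adj, beta=0.85, iters=50):
    """Damped PageRank per Eq. (3): r_j = beta * sum_{i->j} r_i/d_i + (1-beta)/N.
    adj maps each node to the list of nodes it links to (its out-links)."""
    nodes = list(adj)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}  # initial importance factor r = 1/N
    for _ in range(iters):
        nxt = {v: (1 - beta) / n for v in nodes}
        for i, outs in adj.items():
            for j in outs:  # node i spreads r_i evenly over its d_i out-links
                nxt[j] += beta * r[i] / len(outs)
        r = nxt
    # node_order: node ids sorted by descending PageRank score
    return sorted(r, key=r.get, reverse=True)
```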



Fig. 6 Graph scenario with PageRank scores indication

Here, β corresponds to the probability of visiting a node and (1 − β) corresponds to the probability of jumping out of a loop (the spider trap problem). For the sake of convenience, the importance factor of each link is considered to be one; however, it will play an important role when considering the weightage of each link while evaluating the PageRank scores. On completion of this algorithm, the node_order array will have the nodes sorted as per their PageRank scores. However, this alone may not prove beneficial. Figure 6 shows a scenario where a node has a high PageRank score because it is referred to by other nodes of high importance, but has very few out-degrees as well as in-degrees, which may not help in the rapid spread of information. As per the algorithm, the node with a PageRank score of 20 will be of highest importance, followed by 12.1 and 11.5. However, instead of 11.5, the node with a PageRank score of 10 can be considered, as it has more outreach. To tackle such situations, we take the out-degree distribution into consideration, which involves a tradeoff for nodes having high PageRank scores but low out-degree. Further, a limit on the average path length is also considered so that the number of traversals (visits) needed to find nodes having the desired characteristics can be reduced. The proposed algorithm is shown below:



Modified Algorithm to identify PageRank scores

Combined_Scores(Graph G, node_order)   // accepts the edge list of Graph G and the node_order list obtained from PageRank_scores as input
Begin
    for each node i in Graph G
        Store out-degree of node i in out_degree[i]
    end for
    m = 1, j = 1                         // start from the node of highest PageRank score
    influencer[m] = node_order[j]        // influencer list
    sum = outdegree(node_order[j])       // function to evaluate the out-degree of a node
    j = 2
    Repeat until (total edges in G − sum) is less than some convergence criterion
           or (node_order list is finished)
        i = j + 1
        while Dist(node_order[i], node_order[j]) < β
            // Dist(i, j) evaluates the path length from node i to node j,
            // β being the average path length of G
            if outdegree(node_order[j]) < outdegree(node_order[i])
                m = m + 1
                influencer[m] = node_order[i]
                sum = sum + outdegree(node_order[i])
            i = i + 1
        end while
        m = m + 1
        influencer[m] = node_order[j]
        sum = sum + outdegree(node_order[j])
        j = j + 1
        for k = 0 to length(influencer) − 1   // skip nodes already chosen as influencers
            if node_order[j] is equal to influencer[k]
                j = j + 1
                k = 0
        end for
    end Repeat
End

Here, the algorithm convergence is reached by checking, after every new addition of an influencer node, that the number of covered nodes in the network is almost equal to the total number of nodes, or that the difference between the number of edges covered and the total number of edges converges to some minimum criterion.

Fig. 7 Schematic diagram of an electrical network

4.2 Scenario 2: Controlling Information Propagation Through Nodes of Low Influence or Having Less Out-Degree

In contrast to the scenario discussed above, there are networked environments where the information flow needs to be curbed in order to control the false flow of information. To understand this, consider the example of an electrical distribution network where a source power-generating station assigns power flow to a distribution station connected to many consumers. As per Fig. 7, the black root node serves as the power-generating station, which transfers to the next node the power to be distributed in the network, the latter serving as the distribution station. Now, if a high-voltage glitch passes through this station, the power-generating station has the choice to divert the load to a location of low abundance in the network. In this case, it can be directed to the node having in-degree one and out-degree zero. This can be achieved by identifying the nodes with a low degree in the nearby neighborhood and diverting the complete load to them until we reach a node of zero out-degree, reaching the consensus. The algorithm for the same is as follows:



Algorithm to localize information flow in the network

Find_consensus_node(Graph G)   // accepts the edge list of Graph G as input
Begin
    Let the start node be i
    min_deg = some large number
    while (min_deg != 0)
        for each neighbor j of i
            Outdeg[j] = out-degree of j
        end for
        i = Findmin(Outdeg)     // returns the neighbor with minimum out-degree
        min_deg = Outdeg[i]
    end while
End

Here, the information flow is redirected to the neighbor whose out-degree is minimum among all neighbors of the current node, and this process is repeated until we reach a situation from which the false information cannot flow further, i.e., a dead end.
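Under the assumption that such a dead end is reachable (as in the tree-like distribution network of Fig. 7), the walk can be sketched in Python as:

```python
def find_consensus_node(adj, start):
    """Greedy walk: repeatedly move to the neighbor with the smallest
    out-degree until a node with out-degree zero (a dead end) is reached.
    Assumes a dead end is reachable, as in a tree-like distribution network."""
    node = start
    while adj[node]:  # stop once the current node has out-degree zero
        node = min(adj[node], key=lambda j: len(adj[j]))
    return node
```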

5 Conclusions and Future Scope

In this paper, an attempt has been made to propose a generalized framework built upon the idea of ranking the nodes on the basis of their PageRank scores. The idea for identifying highly important nodes of the network, along with PageRank scores, takes into consideration (i) the out-degree of the node and (ii) the path length between the current and the next influencer node. The idea is to identify high-degree important nodes in close vicinity. The algorithm proposed is a refined version of the PageRank algorithm for identifying highly important nodes of the network. The second situation, controlling false information in the network, tries to localize the flow of information to a dead region of the network by identifying the nodes which have low out-degree. In the future, the proposed algorithms need to be simulated on network analysis tools like SNAP to analyze the computational aspects, challenges and the worth of the proposed frameworks.

References

1. Segaran T, Evans C, Taylor J (2009) Programming the semantic web: build flexible applications with graph data. O'Reilly Media Inc., MA, USA



2. Bigdeli A, Tizghadam A, Leon-Garcia A (2009) Comparison of network criticality, algebraic connectivity, and other graph metrics. In: Proceedings of the 1st annual workshop on simplifying complex network for practitioners. ACM, Venice, Italy, pp 1–6
3. Frey C, Züfle A, Emrich T, Renz M (2018) Efficient information flow maximization in probabilistic graphs. IEEE Trans Knowl Data Eng 30(5):880–894
4. Chen YC, Peng WC, Lee SY (2012) Efficient algorithms for influence maximization in social networks. Knowl Inf Syst 33(3):577–601. Springer
5. Saito K, Kimura M, Ohara K, Motoda H (2016) Detecting critical links in complex network to maintain information flow/reachability. In: Richard B, Zhang ML (eds) Proceedings of the 14th Pacific Rim international conference on trends in artificial intelligence. Springer, Cham, pp 419–432
6. Temel T, Karimov F (2019) Information systems model for targeting policies: a graph-theoretic analysis of expert knowledge. Expert Syst Appl 119:400–414
7. Bagrow J et al (2019) Publisher correction: information flow reveals prediction limits in online social activity. Nat Hum Behav 3(2):195
8. Liu W, Wu X, Yang W, Zhu X, Zhong S (2019) Modeling cyber rumor spreading over mobile social networks: a compartment approach. Appl Math Comput 343:214–229
9. Indu V, Thampi SM (2019) A nature-inspired approach based on forest fire model for modeling rumor propagation in social networks. J Netw Comput Appl 125:28–41
10. Li Y, Qiu B, Chen Y, Zhao HV (2019) Analysis of information diffusion with irrational users: a graphical evolutionary game approach. In: 2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, Brighton, United Kingdom, pp 2527–2531
11. Hartmann M, Golovchenko Y, Augenstein I (2019) Mapping (dis-)information flow about the MH17 plane crash. In: Proceedings of the second workshop on natural language processing for internet freedom: censorship, disinformation, and propaganda. Association for Computational Linguistics, Hong Kong, China, pp 45–55
12. Sucharitha Y, Vijayawada Y, Prasad VK (2019) Analysis of early detection of emerging patterns from social media networks: a data mining techniques perspective. In: Soft computing and signal processing. Springer, Singapore, pp 15–25
13. CS224W Analysis of Networks Homepage. http://snap.stanford.edu/class/cs224w-2018/data.html. Last accessed 19 Jan 2020
14. Statistics How To Homepage. https://www.statisticshowto.com/power-law/. Last accessed 19 Jan 2020
15. Park S, Lee W, Choe B, Lee SG (2019) A survey on personalized PageRank computation algorithms. IEEE Access 7:163049–163062

Chapter 41

Tunable Optical Delay for OTDM P. Prakash, K. Keerthi Yazhini, and M. Ganesh Madhan

1 Introduction

Optical communication systems have applications in nationwide fiber-optic backbone networks, transcontinental submarine links and (terrestrial as well as space-borne) free-space point-to-point connections. Optical communication systems are also massively entering the access market, with fiber-to-the-home deployments currently underway in several countries worldwide. In addition, high-bandwidth short-reach optical links are increasingly being used as rack-to-rack interconnects within routers and supercomputers. Multiplexing is a method in which multiple analog or digital signals are combined and transmitted on a single channel. Among the various available multiplexing methods, Time Division Multiplexing is one in which multiple digital or analog signals are delayed and transmitted on a single channel, one signal at a time. Optical Time Division Multiplexing (OTDM) is Time Division Multiplexing employed in optical systems. In the delay line, the input sequence undergoes a time delay for a certain duration: for an input arriving at time 't', the output is obtained at 't + n' for a delay line of delay 'n'. In this paper, we demonstrate a method to delay a signal by time 'n' using an All-Pass Filter to perform Optical Time Division Multiplexing.

P. Prakash (B) · K. Keerthi Yazhini · M. Ganesh Madhan Madras Institute of Technology, Anna University, Chennai 600044, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_41




Fig. 1 Ring resonator based APF

2 All Pass Filter Delay Line

An All-Pass Filter (APF) changes the phase relation of the frequencies of the input signal equally and has no impact on the amplitude of the input signal. Hence, APFs are useful for phase-shifting applications and not for filtering frequencies. For multistage APFs, filter complexity and delay ripple are important dimensions to be considered. Large delays may be produced for a periodic input. In silica ring-resonator-based APFs, amplitude distortion is introduced when the propagation loss in the APF is finite. A way to compensate for this loss is to introduce gain into the device, i.e., an active APF; in fact, fiber loops with gain (using erbium-doped fiber sections) have been used as optical buffers to produce long time delays. A ring resonator coupled to a bus waveguide is one of the common implementations of an APF. Here, the delay is produced by the coupling of the light signal from the bus waveguide to the ring waveguide, exploiting the properties of ring resonators: total internal reflection, coupling and constructive interference. The APF can be made tunable by changing the refractive index of the ring waveguide through the electro-optic effect using an anisotropic crystal. The number of coupled ring resonators determines the stage of the APF; that is, for n resonators, the APF is an n-stage APF. In Fig. 1, R represents the radius of the ring waveguide and k represents the coupling ratio [1]. One of the important applications of a delay line in communication systems is based on the concept of group delay. The transit time for an optical signal traveling at the mode velocity over a particular distance is defined as the group delay. It can be defined mathematically as the product of the propagation distance and the first frequency derivative of the propagation constant [2]. The dispersive properties of the filter define the group delay response.
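As an illustration of the group delay of a single all-pass stage, the sketch below works in normalized discrete-time units; the pole radius a stands in for the effect of the coupling ratio k, and converting the result to picoseconds would require the ring round-trip time, which is not modeled here.

```python
import numpy as np

a = 0.8  # pole radius of the all-pass stage (set by the ring coupling ratio k)
w = np.linspace(-np.pi, np.pi, 4001)     # normalized frequency (rad/sample)
z = np.exp(1j * w)

H = (z ** -1 - a) / (1 - a * z ** -1)    # single-stage all-pass response, |H| = 1
phase = np.unwrap(np.angle(H))
tau = -np.gradient(phase, w)             # group delay = -dphi/dw, in samples

# Peak delay at resonance is (1 + a) / (1 - a) = 9 samples for a = 0.8.
print("peak group delay: %.1f samples at resonance" % tau.max())
```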

3 Literature Survey

Multiplexing has long been used to increase the bandwidth of a system. The main approach in optical multiplexing is wavelength division multiplexing, mainly because of the simpler design of the system. Various design techniques have been proposed over the years to perform optical time division multiplexing.



Tucker, Eisenstein and Korotky proposed the idea of using optical delays to design a single optical pulse generator. Delay-line filters have long been used to generate optical delays in an optical system, and various optical delay lines have been designed so far. Lenz and Madsen proposed the idea of using an All-Pass Filter as an optical delay line, and various research articles have since experimented with different materials for designing an optical resonator to be used as a delay line.

4 APF Tunability

A tunable optical delay line is a fundamental building block for signal processing in an optical communication network. Various techniques have been demonstrated to achieve a tunable delay; examples are switching among a discrete set of optical paths, slow light in an optical fiber, and wavelength conversion with group velocity dispersion. One of the potential applications of an optical delay line is to perform all-optical channel selection on an OTDM signal. The conventional channel selection methods provide mainly discrete time delays, and thus the data rates and the number of channels need to be pre-determined. The tunability is produced using the EO effect [3]. Some materials change their optical properties when subjected to an electric field, owing to forces that distort the positions, orientations, or shapes of their molecules. This property of a material to change its optical characteristics is known as the EO effect. Electrically controllable optical devices can be made from materials whose refractive index can be modified by means of an applied electric field. Photorefractive materials are those in which the absorption of light creates an internal electric field which, in turn, initiates an electro-optic effect that alters the optical properties of the medium; the incident light thus indirectly controls the optical properties of the medium, and photorefractive devices permit light to control light. In the EO effect, crystal lattice displacements are caused by an electric field, which in turn changes the refractive index of the crystal [7]. The dependence of the refractive index n on the electric field E is expressed by Eq. (1):

1/n² = 1/n₀² + cE + BE² + ···    (1)

where n₀ is the refractive index without the electric field, c is the electro-optic coefficient, and B is a higher-order electro-optic coefficient [4]. Because of the ferroelectric nature of the LiNbO3 crystal, it exhibits electric dipole properties without the application of an electric field. However, the crystal ceases to exhibit this property above its Curie temperature.



The refractive index of the crystal has a linear relation with the applied electric field. There are numerous methods to fabricate low-loss LiNbO3 structures. The core region of the waveguide has a larger refractive index than the other regions, which confines the light to the core region of the waveguide. Problems like optical axis alignment can be avoided by guiding light into waveguides, which enables mass production of relatively complex PLCs [5].

5 Three Stage All Pass Filter

The input signal in each path from the power splitter passes through a three-stage All-Pass Filter, depending on the requirement. The three-stage All-Pass Filter is designed using an OptiBPM layout, as shown in Fig. 2, and exported into OptiSystem. It consists of linear waveguides and S-bend sine waveguides in which the rings are arranged in a zigzag manner. The light signal couples between the waveguides and traverses the rings, which causes the delay. The key elements of photonics are optical waveguides, which perform coupling, splitting, combining, switching, multiplexing, de-multiplexing and guiding. One powerful technique to analyze the optical signal through waveguides and optical fibers is the finite-difference Beam Propagation Method [6]. In the OptiBPM designer, the waveguide is created on a z-cut wafer of Lithium Niobate, and the surrounding cladding material is air oriented along the y-axis of the Lithium Niobate. The waveguide width is 8 µm. The substrate material is Lithium Niobate.

Fig. 2 Opti BPM layout of APF



The substrate and cladding thicknesses are set as 10 µm and 2 µm, respectively. A diffused material named Lithium Niobate is defined with the crystal cut in the z direction and propagation in the y direction. Two dielectric materials are defined: one with a refractive index of 1, named air, and the other with a refractive index of 1.47, named buffer. Next, a diffused profile named Ti:LiNbO3 is defined with a lateral diffusion length of 3.5 and a diffusion length in depth of 4.2. The length and width of the wafer are set as 1000 µm and 100 µm, respectively.

6 OTDM System

A bit sequence 1101 at a bit rate of 10 Gbps is modulated onto a carrier optical wave of wavelength 1550 nm and power 1 mW. This modulated signal is split into four channels, through which tunable delays are produced. The delayed signals are combined and then transmitted through a 10-km optical fiber at a wavelength of 1550 nm. Figure 3 gives the block diagram representation of the system, while Fig. 4 gives the simulation layout. The bit sequence from the user-defined bit sequence generator is converted into electrical pulses by an NRZ pulse generator, which is further modulated with the continuous wave at a wavelength of 1550 nm and a power of about 1 mW from a continuous-wave laser. The optical signal is split using a 1:4 power splitter, which divides the input power equally into 4 individual output ports.

Fig. 3 Block diagram representation of OTDM system

Fig. 4 Simulation layout of OTDM system



The optical signal is then passed through the three-stage APFs, each of which produces an optical delay of 25 ps. The first arm produces a delay of 25 ps, while the second and third arms produce delays of 50 and 75 ps, respectively.
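The choice of 25-ps steps follows directly from the bit period: at 10 Gbps the bit slot is 100 ps, so interleaving four tributaries requires offsets of T/4. A small worked check:

```python
bit_rate = 10e9            # 10 Gbps per channel
T = 1 / bit_rate           # bit period = 100 ps
channels = 4

# Offset required on the i-th arm so the tributaries interleave without overlap.
for i in range(channels):
    print(f"arm {i + 1}: delay = {i * T / channels * 1e12:.0f} ps")
# -> 0, 25, 50 and 75 ps, matching the delays produced by the cascaded APF stages
```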

7 Simulation Results

Figure 4 shows the OptiSystem layout of the OTDM system. The amplitude-modulated signal from the Mach-Zehnder modulator is given to the 1:4 power splitter, which splits the signal at the input port with equal amplitude among the four output ports, a zero-degree phase relationship between the output ports, and high isolation between the output ports. The signals from the different paths through the delay lines are given to the 4:1 power combiner, which forms the vector sum of all the signals at its input ports into a single output port. Figures 5, 6, and 7 show the time-domain signal output from the 1st, 2nd and 3rd arms, respectively. From Figs. 5 and 6, it can be inferred that a single three-stage All-Pass Filter produces an output delay of 25 ps, and when the three-stage All-Pass Filter is cascaded in series with another three-stage All-Pass Filter, an output delay of 50 ps is produced. Each three-stage All-Pass Filter produces a time-delay response of 25 ps, and the system can be designed to produce the necessary delay by tuning the All-Pass Filters and by cascading them in series.

Fig. 5 Delay of 25 ps



Fig. 6 Delay of 50 ps

Fig. 7 Delay of 75 ps

8 Conclusion

Thus, a tunable optical delay line for an Optical Time Division Multiplexer has been demonstrated using a three-stage All-Pass Filter designed with coupled ring resonators exploiting the electro-optic effect. The delay is produced by the coupling properties of the resonators, and tunability is achieved by changing the effective refractive index of the ring waveguide. Here, the APF is made tunable through the electro-optic effect in an anisotropic crystal, in which the application of an electric field increases the carrier concentration, which in turn changes the refractive index of the material. However, larger delays are achieved using multi-stage APFs at the cost of filter complexity and amplitude distortion, since cascading a larger



number of APFs causes amplitude distortion. This performance degradation can be overcome by using active APFs and improvements in fabrication techniques.

References

1. Lenz G (1999) General optical all-pass filter structures for dispersion control in WDM systems. J Lightwave Technol 17(5):1248
2. Bogaerts W (2012) Silicon microring resonators. J Laser Photon Rev 6(1):47–73
3. Yu Z (2017) Tunable optical delay line for optical time-division multiplexer. J Opt Commun 395:217–220
4. Abdelsalam A (2013) Optical time division multiplexer on silicon chip. J Opt Express 18(13):13529–13535
5. Weis RS (1985) Lithium niobate: summary of physical properties and crystal structure. J Appl Phys A 37(4):191–203
6. Xia F (2006) Ultracompact optical buffers on a silicon chip. J Nat Photon 1:65–71
7. Kogelnik H (1988) Theory of optical waveguides. In: Guided-wave optoelectronics, 1st edn. Springer, Berlin

Chapter 42

Game Theory Based Cluster Formation Protocol for Localized Sensor Nodes in Wireless Sensor Network (GCPL) Raj Vikram, Sonal Kumar, Ditipriya Sinha, and Ayan Kumar Das

1 Introduction

A Wireless Sensor Network (WSN) consists of a large number of sensors, which are deployed over large areas to provide a variety of applications such as remote health monitoring, disaster management (forest fire), etc. In a WSN, saving the energy of the nodes is highly demanding due to the huge amount of energy drainage. A hierarchical cluster-based approach is used to enhance the network performance and reduce the battery energy consumption of the sensor nodes. The nodes that sense an event are called active, and the rest are called inactive in the network. All active nodes constitute the cluster. The demands of energy-efficient cluster formation and overhead reduction have motivated the authors to design an event-based cluster formation technique. The objective of the proposed work, GCPL (Game theory based Cluster Formation Protocol for Localized sensor nodes in Wireless Sensor Network), is that every non-cooperative node should participate in a cluster as a cluster head after a certain number of rounds. This motivates the authors of this paper to apply a non-cooperative game theory approach for electing the cluster head. In this way, the energy consumption of the sensor nodes is handled effectively and the lifespan of the network is increased.




On the other hand, if all the sensor nodes are linked to GPS, the deployment cost is large. This paper proposes an updated variant of the DV-Hop [1] technique in order to determine the positions of sensor nodes not linked to GPS, thereby reducing the implementation cost of sensors in remote areas. Our main contributions are listed as follows:
1. Localization of sensor nodes without GPS.
2. Event-based cluster formation.
3. Initiator node and cluster head selection based on game theory.
The rest of the article is structured as follows: Sect. 2 describes the related work on localization of sensor nodes and the clustering problem, Sect. 3 identifies the scope of the work, and Sect. 4 points out the details of the simulation. Finally, in Sect. 5, we conclude the article.

2 Related Work

The goal of the GCPL technique is to identify the positions of the sensor nodes without GPS and to select cluster heads based on a game-theoretic technique, minimizing the implementation cost and maximizing the network lifetime. This review section is divided into two subsections: different localization techniques in WSN and existing clustering protocols in WSN.

2.1 Localization of Sensor Node

Localization in wireless sensor networks is a major concern of researchers. Previously, location in a wireless sensor network was obtained through GPS, which is expensive. Nowadays, the DV-Hop [1] approach is applied to find the locations of sensors without GPS. Nodes connected to GPS are known as anchor nodes, and nodes without GPS are known as unknown nodes. In the DV-Hop technique, GPS connectivity is provided to a few sensor nodes, and the remaining nodes compute their positions with the help of these anchor nodes. The distance is estimated by applying the Least Square Method. The DV-Hop technique uses the line-of-sight concept to measure distance, so the estimated distance and the measured distance are not the same. Improved DV-Hop [2] has been introduced to reduce this error. In this algorithm, the average hop size is calculated first and the distance is then computed using this average hop size. After distance computation, the location coordinate is estimated by applying the 2-D hyperbolic location algorithm. Improved DV-Hop reduces the location error to some extent but is not able to remove the error completely. Advanced DV-Hop [3] was introduced to enhance the location accuracy. In the advanced DV-Hop technique, the position coordinates are updated after estimation. This technique


gives a more accurate location coordinate with a smaller error. However, it is still not able to remove the location error completely.

2.2 Clustering Technique

This section presents a brief description of some of the existing clustering protocols in WSN. Cluster-based routing protocols have a number of benefits relative to flat routing protocols in WSNs, such as greater scalability, lower load, lower energy usage, and more robustness. LEACH (Low Energy Adaptive Clustering Hierarchy) [4–6] is an example of a homogeneous protocol. PEGASIS (Power-Efficient GAthering in Sensor Information Systems) [7] is a chain-based routing protocol and an enhancement of the LEACH [4, 5] protocol. EEHC (Energy Efficient Heterogeneous Clustered) [8] and DEEC (Distributed Energy-Efficient Clustering) [9] are two examples of heterogeneous schemes based on probabilistic cluster-head selection. HEED [10] is another clustering protocol; it is fully distributed and makes no assumptions about location knowledge. The clustered routing of selfish sensors (CROSS) scheme [11] is based on game theory: each sensor node plays a clustering game with the other player nodes to select a cluster head. LGCA [12] identified a problem of CROSS and implemented a new algorithm to overcome it, and HGTD [13] was proposed to overcome the problem of LGCA. Because of its low computational complexity, the fuzzy-logic-based solution FLCFP [15] has been suggested for WSN. A distributed and energy-efficient approach for clustering in wireless sensor networks by means of the cellular learning automata (CLA) technique was suggested in the EEMCCLA [16] protocol; in this protocol, some nodes are chosen as CHs using irregular cellular learning automata (ICLA) [17]. CCOF [18] designs a routing protocol for the topology variations in MWSNs to handle node mobility, improve energy efficiency, and decrease network overhead. In SCCWSN [14], a problem arises if the CHs crash unexpectedly: cluster members keep relaying their sensed data to the failed CHs without realizing that the CHs cannot process the data properly, so a huge number of data packets is lost. To solve this problem, a Gaussian distribution is applied to each datum, with its mean and covariance matrix representing the expected value and its uncertainty. With the aid of a genetic algorithm (GA), GAEEC [19] suggests a new protocol to partition the whole network into an optimal number of clusters. EASRWSN [20] is an energy-aware secure routing protocol that enhances network lifetime for WANETs and provides secure routes for data delivery. The optimal back-off sleep period protocol [21] is capable of delivering optimal coverage and extending the lifetime of the wireless network. The main challenges for researchers in WSN are energy conservation and low equipment cost for constructing the network. The state of the art shows that very few protocols in WSN are designed to mitigate both problems. Localization of sensors without GPS reduces the equipment cost, while an energy-efficient clustering approach allows data aggregation in order to save


energy of the sensors. Thus, to enhance the network lifetime and reduce the deployment cost in high-node-density networks, this paper proposes a protocol that combines a localization technique with a game-theory approach for energy-efficient clustering.

3 Proposed Work

Localization of sensor nodes without GPS connection is one of the objectives of this paper, to reduce the implementation cost of sensors in remote places. This paper also proposes a non-cooperative game-theory approach for cluster-head selection. The objective of the proposed work is that every non-cooperative node should act as a cluster head after a certain number of rounds; in this way, the lifetime of the sensor network is enhanced. Figure 1 depicts the step-by-step procedure of the proposed protocol.

Fig. 1 Flow diagram of GCPL


3.1 Definitions

The definitions of related terms are as follows:

Definition 1: Non-cooperative game: It models the behavior of nodes in the network, each optimizing its own utility in a given process, based on a complete description of the moves available to each node.

Definition 2: Nash equilibrium: It can be characterized as a stable state of a network structure involving the engagement of several participant nodes in a non-cooperative game, where each node is presumed to know the equilibrium strategies of the other nodes, and no node can gain from a unilateral change of strategy while the strategies of the other nodes remain unchanged.

Definition 3: Payoffs: In game theory, payoffs are numbers which represent the motivations of players.

3.2 Localization of Sensor Nodes Without GPS

In this proposal, GPS connectivity for all sensor nodes is avoided to reduce the deployment cost. The advanced DV-Hop technique has been modified for localization of the nodes. In this technique, a small number of nodes have GPS connectivity and are called anchor nodes; the remaining nodes are called unknown nodes. The estimation of a location coordinate involves 4 steps. (i) At first, every anchor node transmits a packet called a beacon, which contains attributes such as its ID, hop count, and location coordinate. Initially, the value of the hop count is zero. Each unknown node m maintains a table of the beacon-packet information. On receiving multiple beacon packets, a node keeps the packet with the minimum hop count and forwards it to its neighbors after increasing the hop-count value by 1. Thus, each unknown node obtains the minimum hop-count value from each anchor node. (ii) The hop size is determined from one anchor node to the others. The hop-size value is computed by applying the Cartesian distance equation as:

$$\mathrm{HopSize}_i = \frac{\sum_{i \neq j} \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}}{\sum_{i \neq j} h_{\min_{i,j}}} \quad (1)$$

where $(x_i, y_i)$ and $(x_j, y_j)$ are the coordinates of anchor nodes $i$ and $j$, respectively, and $h_{\min_{i,j}}$ is the minimum hop count between anchor nodes $i$ and $j$. Each anchor node broadcasts its own hop-size value in the network. On receiving the hop-size value from the nearest anchor node, an unknown node computes its distance from the anchor node by applying Eq. (2).


$$\mathrm{dist}_{u,a} = \mathrm{HopSize}_i \times hop_{u,a} \quad (2)$$

where $hop_{u,a}$ is the number of hops between the anchor node $a$ and the unknown node $u$, and $\mathrm{HopSize}_i$ is the hop size of the anchor node. (iii) The coordinate of the unknown node is assumed to be $(x, y)$. The system of equations can be written as:

$$\sqrt{(x - x_1)^2 + (y - y_1)^2} = d_1$$
$$\sqrt{(x - x_2)^2 + (y - y_2)^2} = d_2$$
$$\vdots$$
$$\sqrt{(x - x_n)^2 + (y - y_n)^2} = d_n$$

The last equation is combined with each of the first $(n - 1)$ equations; after squaring both sides, these equations take the form

$$-2(x_i + x_n)x - 2(y_i + y_n)y + 2P = d_i^2 + d_n^2 - (Q_i + Q_n), \quad i = 1, \ldots, n-1$$

where $P = x^2 + y^2$ and $Q_i = x_i^2 + y_i^2 \; \forall i = 1, 2, 3, \ldots, n$. The above system of equations can be expressed in matrix form as $AX = B$, where

$$A = \begin{bmatrix} -2(x_1 + x_n) & -2(y_1 + y_n) & 2 \\ -2(x_2 + x_n) & -2(y_2 + y_n) & 2 \\ \vdots & \vdots & \vdots \\ -2(x_{n-1} + x_n) & -2(y_{n-1} + y_n) & 2 \end{bmatrix}, \quad B = \begin{bmatrix} d_1^2 + d_n^2 - (Q_1 + Q_n) \\ d_2^2 + d_n^2 - (Q_2 + Q_n) \\ \vdots \\ d_{n-1}^2 + d_n^2 - (Q_{n-1} + Q_n) \end{bmatrix}, \quad X = \begin{bmatrix} x \\ y \\ P \end{bmatrix}$$

$X$ can be computed by applying Eq. (3):

$$X = (A^{T}A)^{-1} A^{T} B \quad (3)$$

where $A^{T}$ stands for the transpose of matrix $A$. (iv) In the final step, the approximate coordinate is adjusted to increase the precision of the position. Earlier it was considered that $P = x^2 + y^2$. Now let the refined position be $(x', y')$ and the approximate unknown-node coordinate obtained above be $(x'', y'')$. Then $x'$ and $y'$ can be written in terms of $x''$ and $y''$ as:


$$x' = t \times x'', \qquad y' = t \times y''$$

The value of $t$ can be evaluated from the equation

$$P = (x')^2 + (y')^2 \quad (4)$$

The updated coordinate of the unknown node is obtained by applying Eq. (5):

$$x = \frac{x' + x''}{2}, \qquad y = \frac{y' + y''}{2} \quad (5)$$
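The localization procedure above can be summarized in code. The following is a minimal Python sketch of steps (ii)–(iv), assuming the minimum hop counts from step (i) are already available; all function and variable names are illustrative, not from the paper, and the division structure of Eq. (1) follows the standard DV-Hop hop-size formula.

```python
import numpy as np

def localize_unknown(anchors, anchor_hops, node_hops):
    """Estimate an unknown node's position via the modified DV-Hop steps.

    anchors:     (n, 2) array of anchor coordinates (x_i, y_i)
    anchor_hops: (n, n) matrix of minimum hop counts h_min(i, j) between anchors
    node_hops:   length-n vector of hop counts from each anchor to the node
    """
    anchors = np.asarray(anchors, dtype=float)
    anchor_hops = np.asarray(anchor_hops, dtype=float)
    node_hops = np.asarray(node_hops, dtype=float)
    n = len(anchors)

    # Step (ii): per-anchor average hop size, Eq. (1)
    diff = anchors[:, None, :] - anchors[None, :, :]
    euclid = np.sqrt((diff ** 2).sum(axis=2))        # pairwise anchor distances
    off = ~np.eye(n, dtype=bool)                     # exclude i == j terms
    hop_size = np.array([euclid[i][off[i]].sum() / anchor_hops[i][off[i]].sum()
                         for i in range(n)])

    # Eq. (2): estimated distance to each anchor (the paper uses the hop size
    # broadcast by the nearest anchor; each anchor's own value is used here)
    d = hop_size * node_hops

    # Step (iii): build AX = B and solve X = (A^T A)^(-1) A^T B, Eq. (3)
    Q = (anchors ** 2).sum(axis=1)
    A = np.column_stack([-2 * (anchors[:-1, 0] + anchors[-1, 0]),
                         -2 * (anchors[:-1, 1] + anchors[-1, 1]),
                         2 * np.ones(n - 1)])
    B = d[:-1] ** 2 + d[-1] ** 2 - (Q[:-1] + Q[-1])
    x2, y2, P = np.linalg.lstsq(A, B, rcond=None)[0]   # (x'', y'', P)

    # Step (iv): rescale so that P = x'^2 + y'^2, then average, Eqs. (4)-(5)
    t = np.sqrt(abs(P) / (x2 ** 2 + y2 ** 2))
    return (t * x2 + x2) / 2, (t * y2 + y2) / 2
```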

3.3 Cluster Formation and Cluster Head Selection

Cluster formation is initiated by the node that first senses the occurrence of an event. Sensor nodes are evenly distributed in the effective area, and the nodes that detect the event become active. Only active sensor nodes construct a cluster and elect one of them as cluster head by applying the game-theory approach. All active sensors transmit their sensed data to the cluster head, which aggregates the gathered data and transmits it to the base station.

Initiator node selection

Section 3.2 evaluates the location of each sensor node by applying the proposed modified DV-Hop technique. All the nodes in the cluster act as player nodes, which are within the sensing range of each other. In the clustering game, the first requirement is the selection of the initiator of the game. This leads to the computation of the centroid of all the player nodes on the basis of their coordinates, using Eq. (6):

$$X_c = \frac{\sum_{i=1}^{n} x_i}{n}, \qquad Y_c = \frac{\sum_{i=1}^{n} y_i}{n} \quad (6)$$

The distance of each player node from this centroid is then computed using Eq. (7):

$$d_{i,c} = \sqrt{(X_c - x_i)^2 + (Y_c - y_i)^2} \quad \forall i \in n \quad (7)$$

The player node nearest to the centroid starts the clustering game and is called the initiator node. Let the $i$th node (the initiator node) choose its strategy D; then, according to the Nash equilibrium, the following strategy profile is selected:

$$\left( x_{iD}, \; (x_1, x_2, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)_{ND} \right)$$

where the $i$th node has its strategy D and all other nodes have strategy ND.

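As a small illustration of Eqs. (6) and (7), the following Python sketch picks the initiator node; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def select_initiator(positions):
    """Return the index of the active node closest to the cluster centroid.

    positions: (n, 2) array of active-node coordinates (x_i, y_i)
    """
    positions = np.asarray(positions, dtype=float)
    centroid = positions.mean(axis=0)                     # Eq. (6)
    dists = np.linalg.norm(positions - centroid, axis=1)  # Eq. (7)
    return int(np.argmin(dists))
```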

Cluster head selection based on game theory

Once the initiator node is selected, the cluster head must also be selected. Each node is modeled as a player node and is selected as cluster head (CH) at least once. The clustering game is defined with the following function:

$$f(CG) = \{N, S_i, U_i\}$$

where $N$ is the total number of nodes participating in the game, called player nodes; $\{S_i\}$ is the set of strategies declared by each player node $i$; and $\{U_i\}$ is the utility function of each player node $i$, which depends on the declared strategy. Here, a player node has only two choices of strategy: either it "declares itself as cluster head (D)" or it does "not declare itself as cluster head (ND)". Thus,

$$\{S_i\} = \{D, ND\}$$

In the cluster-formation phase, it is already assumed that all sensor nodes in the cluster are within the same range. The clustering game is initiated by the initiator player node, which is the first player node to choose strategy D (declaring itself as cluster head). Assume the $i$th node chooses strategy D and informs all its neighbors about its status. If any player node $i$ chooses strategy D, then at that time no other player node in its cluster can choose strategy D; all of its neighbors choose the strategy of not declaring themselves as cluster head, so that the Nash equilibrium is satisfied. In the symmetrical clustering game, $n$ Nash equilibria exist in the game, $N = \{N_1, N_2, \ldots, N_n\}$, and they can be represented as

$$\left\{ \begin{array}{l} (N_{1D}, \; [N_2, N_3, \ldots, N_n]_{ND}), \\ (N_{2D}, \; [N_1, N_3, \ldots, N_n]_{ND}), \\ \quad \vdots \\ (N_{nD}, \; [N_1, N_2, \ldots, N_{n-1}]_{ND}) \end{array} \right\}$$

Each player node now calculates its payoff value, defined in Eq. (8). The payoff value helps to elect as cluster head the node whose remaining energy is high and whose distance from the base station is low. Equation (8) depicts the payoff computation of the $i$th node:

$$\mathrm{Payoff}(v_i) = \frac{Z_i \times D_{i,BS}}{E_{PX,TX}} \quad (8)$$


where $Z_i$ is the size of the data sensed by the $i$th sensor, $D_{i,BS}$ is the distance from node $i$ to the base station BS, and $E_{PX,TX}$ is the energy required to process and transfer the packet to the CH. Three cases may occur for the payoff:

Case 1: If a node does not want to declare itself as cluster head (CH), i.e., it chooses strategy ND, and none of its neighbors chooses strategy D, its payoff will be zero and that node is not able to transmit data towards the sink.

Case 2: The payoff of a node will be $v$ provided that at least one of its neighbors declares itself as CH.

Case 3: If a player node chooses strategy D, its payoff $v$ is reduced by an amount equal to the cost $c$ of the efficient delivery of the results. Thus, if a node chooses strategy D, its payoff value will be $(v - c)$.

Therefore, the utility function $U_i$ for a particular player node $i$ has the following form:

$$U_i(S_i) = \begin{cases} 0, & \text{if } S_i = ND, \; \forall i \in N \\ v - c, & \text{if } S_i = D \\ v, & \text{if } S_i = ND \text{ and } \exists k \in N \text{ such that } S_k = D \end{cases}$$

In the symmetrical Nash equilibrium, every player chooses its strategy following a random probability distribution. If the probability of choosing strategy D for a player node is $p$, then the probability of choosing strategy ND for that player node is $(1 - p)$. Thus, the expected payoff is computed for each available choice. The expected payoff when a player node chooses strategy D is described in Eq. (9):

$$E[U_D] = v - c \quad (9)$$

The expected payoff for each of the other $(N - 1)$ nodes choosing strategy ND is

$$E[U_{ND}] = \Pr(\text{no one else declares}) \times 0 + \Pr(\text{at least someone else declares}) \times v$$
$$E[U_{ND}] = v\{1 - \Pr(\text{no one else declares})\}$$
$$E[U_{ND}] = v\left(1 - (1 - p)^{N-1}\right)$$

In the Nash equilibrium we will have $E[U_D] = E[U_{ND}]$ (the Nash equilibrium condition for $(D, ND)$ or $(ND, D)$), so

$$v - c = v\left(1 - (1 - p)^{N-1}\right)$$
$$c = v(1 - p)^{N-1}$$
$$\frac{c}{v} = (1 - p)^{N-1}$$
$$p = 1 - \left(\frac{c}{v}\right)^{\frac{1}{N-1}} \quad (10)$$


If $w = \frac{c}{v}$, then

$$p = 1 - w^{\frac{1}{N-1}} \quad (11)$$

Some nodes may be dead, i.e., not participating in the clustering game. Such nodes introduce an error when computing the equilibrium probability $p$ for the network. Thus, it is computed by Eq. (12):

$$p = 1 - w^{\frac{1}{N-1-N_d}} \quad (12)$$

where $N_d$ is the number of dead nodes and $w = \frac{c}{v}$ such that $0 < w < 1$. Now, the payoff of a player node $i$ can be computed by Eq. (13) [according to Case 2]:

$$v_i = \frac{Z_i \times D_{i,BS}}{E_{P_i}} \quad (13)$$

where $E_{P_i} = E_i - E_{cm_i}$; here $Z_i$ is the size of the data sensed by the $i$th sensor node, $E_{cm_i}$ is the energy consumed by node $i$ to send the packet to its corresponding CH, and $E_i$ is the energy preserved in node $i$, so $E_{P_i}$ is the residual energy of player node $i$. According to Case 3, the payoff of a node $i$ is computed by Eq. (14):

$$v_i - c_i = \frac{Z_i \times D_{i,BS} \times (Dr_i + 1)}{E_{Pch_i}} \quad (14)$$

where $E_{Pch_i} = E_i - E_{ch_i}$ and $c_i$ is the cost for node $i$ to serve as CH. We can compute $c_i$ as follows:

$$c_i = \frac{Z_i \times D_{i,BS}}{E_{P_i}} - \frac{Z_i \times D_{i,BS} \times (Dr_i + 1)}{E_{Pch_i}} \quad (15)$$

where $Dr_i$ is the number of neighbors of player node $i$ (excluding dead nodes), $E_{ch_i}$ is the energy consumed by node $i$ when it serves as a cluster head, and $E_{Pch_i}$ is the residual energy of node $i$ when it serves as cluster head. The communication energy is modeled as

$$E_{cm_i} = Z_i \times E_{elec} + Z_i \times \varepsilon_{fs} \times d_{i,CH}^2, \quad \text{where } d_{i,CH} = \frac{2}{3}R$$


$$E_{cm_i} = Z_i \times E_{elec} + \frac{4}{9} Z_i \times \varepsilon_{fs} \times R^2$$

$$E_{ch_i} = Dr_i \times Z_i \times E_{elec} + (Dr_i + 1) \times Z_i \times E_{aggr} + Z_i \times E_{aggr} \times d_{i,BS}^4$$

Now, in each round, the residual energy of the declared cluster head is also considered in computing the final equilibrium probability $p_i$, which is computed by applying Eq. (16):

$$p_i = \left(1 - w_i^{\frac{1}{N - N_d - 1}}\right) e^{-n_{cost}^i} \quad (16)$$

where $\left(1 - w_i^{\frac{1}{N - N_d - 1}}\right)$ is computed as in Eq. (12), and $n_{cost}^i$ is the residual-energy cost of the $i$th node in the $n$th round, defined as:

$$n_{cost}^i = \sum_{i=1}^{N} \left[ \alpha \times E_{cm_i} + (1 - \alpha) E_{ch_i} \right], \quad \text{where } \alpha = \begin{cases} 1, & \text{node is a MN} \\ 0, & \text{node is a CH} \end{cases}$$

After finding the equilibrium probability $p_i$, we need to compute $w_i$ for each player node to decide whether it is eligible to be cluster head. Suppose the cost $c$ is not constant: it depends on the number of active player nodes participating in the clustering game and the number of player nodes that select strategy D. Thus, the cost function is defined in Eq. (17):

$$c = c(N, N_D) = \frac{N}{N_D} c_1 + c_0 \quad (17)$$

The expected number of selected cluster heads is $E[CH] = Np$, where $p$ is the equilibrium probability of selecting a node as cluster head. Thus, the expected cost will be

$$c = c(p) = \frac{N c_1}{N p} + c_0 = \frac{c_1}{p} + c_0 \quad (18)$$

Now, the expected payoff when a player node wants to declare itself as cluster head can be calculated as

$$E[U_D] = v - c \quad (19)$$


And the expected payoff when a player node does not want to select itself as a cluster head:

$$E[U_{ND}] = \Pr(\text{no one else declares}) \times 0 + \Pr(\text{at least someone else declares}) \times v$$
$$E[U_{ND}] = v\{1 - \Pr(\text{no one else declares})\}$$
$$E[U_{ND}] = v\left(1 - (1 - p)^{N-1}\right) \quad (20)$$

Thus,

$$v - c = v\left(1 - (1 - p)^{N-1}\right)$$
$$c = v(1 - p)^{N-1}$$
$$c(p) = v(1 - p)^{N-1}$$
$$\frac{c_1}{p} + c_0 = v(1 - p)^{N-1} \quad (21)$$

Equation (21) does not have a trivial solution. When $c_0 = 0$ (initially, when no node wants to be selected as CH, the cost is zero):

$$\frac{c_1}{p} = v(1 - p)^{N-1}$$
$$\frac{c_1}{v} = p(1 - p)^{N-1}$$

Thus, in the first round:

$$w' = p(1 - p)^{N-1} \quad (22)$$

The maximum value of Eq. (22) occurs when $N = 2$ and $p = 0.5$:

$$w' = 0.5(1 - 0.5)^{2-1} = 0.25$$

The value of $w'$ is always less than 0.25 for any other value of $p$ and $N$:

$$w' \leq 0.25 \quad \forall p, N > 0$$

Therefore, after computing the equilibrium probability, the value of $w'$ for each player node is computed. All player nodes then transmit their $w'$ values to the initiator node of the cluster. The initiator node also determines its own $w'$ value, and it picks as cluster head the node whose $w'$ value is maximum among all existing nodes in the cluster. In the next round, the remaining nodes participate in the clustering game.
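The selection rule just described can be sketched in code. The following is a minimal Python illustration of Eqs. (12), (16), and (22), assuming the per-node cost ratios and energy terms are precomputed; the names are illustrative, not from the paper.

```python
import numpy as np

def select_cluster_head(w, n_cost, n_alive, n_dead):
    """Pick the cluster head for one round of the clustering game.

    w:       per-node cost/value ratios c/v, each in (0, 1)
    n_cost:  per-node residual-energy cost terms (the exponent in Eq. 16)
    n_alive: total number of player nodes N
    n_dead:  number of dead nodes N_d
    """
    w = np.asarray(w, dtype=float)
    n_cost = np.asarray(n_cost, dtype=float)
    # Equilibrium probability with dead nodes excluded, Eqs. (12) and (16)
    p = (1.0 - w ** (1.0 / (n_alive - n_dead - 1))) * np.exp(-n_cost)
    # Parameter w' = p(1 - p)^(N-1), Eq. (22); its maximum possible value is 0.25
    w_prime = p * (1.0 - p) ** (n_alive - 1)
    # The initiator picks the node with the largest w' as cluster head
    return int(np.argmax(w_prime)), w_prime
```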


In this way, after the completion of $n$ rounds, all nodes are compelled to perform the role of cluster head, and the lifetime of the network is increased.

4 Simulation Results

The GCPL protocol has been simulated using MATLAB (R2017). The efficiency of the proposed protocol was compared with game-theory-based clustering protocols such as CROSS [11], LGCA [12], and HGTD [13], and with localization-based protocols such as DV-HOP [1] and ADV-HOP [3]. In Fig. 2, it is observed that the localization error decreases as the number of anchor nodes rises; GCPL gives better performance compared to DV-HOP and ADV-HOP. Figure 3 shows the localization error when the positions of the sensor nodes are computed by applying the DV-HOP, ADV-HOP, and GCPL localization techniques. The localization error in all three protocols tends to increase as the number of nodes grows, but the rate of increase in GCPL is low compared to the other protocols. Figure 4 shows that GCPL gives better results than CROSS, LGCA, and HGTD in terms of network lifetime for all network sizes. All protocols show a decreasing tendency as the network size increases, since the distance to the BS rises with the network size. Figure 5 presents a comparative analysis of CROSS, LGCA, HGTD, and GCPL in terms of network size versus the number of rounds until 10% of the nodes die. It reveals that the number of rounds declines in all protocols as the network size grows, but GCPL still provides better results. Figure 6 shows that as the communication radius increases, the network lifetime decreases in all protocols; in this analysis, too, the GCPL protocol performs better than the others.

Fig. 2 Anchor node versus localization error


Fig. 3 Number of nodes versus localization error

Fig. 4 Network size versus network life time

5 Conclusions

Localization remains an open challenge for researchers in WSN. In this paper, a modified DV-Hop-based technique has been proposed for detecting the geographical locations of deployed sensors without using GPS. The paper also presents a modified game-theory-based cluster formation for localized sensor nodes, which minimizes energy dissipation by spreading the cluster-head role over all sensor nodes at least once during the network lifetime. The payoff is calculated for each individual player node under the different strategies of being a cluster head or not, considering both the distance to the sink node and the node degree. On the basis of the payoff, it calculates the equilibrium probability and the parameter $w'$ twice, based on the modified game theory.


Fig. 5 Network size versus round until 10% nodes die

Fig. 6 Communication radius versus network life time

Cluster-head selection is based on the value of the parameter $w'$, which is close to 0.25. In this work, each player node serves in the role of cluster head at least once. Thus, the proposed protocol implements a localization-based clustering method to decrease the deployment cost of the network in remote areas and to increase the lifetime of the network by applying the modified game-theory approach for cluster-head selection. The simulation findings reveal that the GCPL protocol outperforms the existing protocols CROSS, LGCA, and HGTD in terms of energy conservation, and reduces localization error compared with DV-HOP and ADV-HOP.

Acknowledgements This work is partially funded by grants from DST-SERB project ECR/2017/000983. For this support, the authors would like to thank DST-SERB.


References 1. Du X (2016) DV-Hop localization algorithms in wireless sensor networks. http://hdl.handle. net/1828/7094. Last accessed 2016 2. Zhang X-l, Xie H-Y, Zhao X-J (2007) Improved DV-Hop localization algorithm for wireless sensor networks. Jisuanji Yingyong J Comput Appl 27:2672–2674 3. Kumar S, Lobiyal DK (2013) An advanced DV-Hop localization algorithm for wireless sensor networks. Wireless personal communications, pp. 1–21 4. Handy MJ, Haase M, Timmermann D (2002) Low energy adaptive clustering hierarchy with deterministic cluster-head selection. 4th international workshop on mobile and wireless communications network 5. Eshaftri M et al (2015) A new energy efficient cluster based protocol for wireless sensor networks. In: Federated conference on computer science and information systems (FedCSIS) 6. Babbitt TA et al (2008) Self-selecting reliable paths for wireless sensor network routing. Comput Commun 31:3799–3809 7. Yadav SG, Shiva Prasad, Chitra A (2012) Wireless sensor networks-architectures, protocols, simulators and applications: a survey. Int J Electr Comput Sci Eng (IJECSE, ISSN: 2277-1956) 1:1941–1953 8. Kumar D, Trilok C, Aseri R, Patel B (2009) EEHC: energy efficient heterogeneous clustered scheme for wireless sensor networks. Comput Commun 32:662–667 9. Tyagi P, Gupta RP, Gill RK (2011) Comparative analysis of cluster based routing protocols used in heterogeneous wireless sensor network. Int J Soft Comput Eng (IJSCE) 1:35–39 10. Kour H, Sharma AK (2010) Hybrid energy efficient distributed protocol for heterogeneous wireless sensor network. Int J Comput Appl 4:1–5 11. Cheng Y et al (2016) An energy efficient algorithm based on clustering formulation and scheduling for proportional fairness in wireless sensor networks. KSII Trans Internet Inf Syst 10 12. Mo H et al (2015) Game theoretic approach towards optimal multi-tasking and data-distribution in IoT. In: IEEE 2nd world forum on internet of things (WF-IoT), IEEE 13. Wu X, Zeng X, Fang B (2017) An efficient energy-aware and game-theory-based clustering protocol for wireless sensor networks, IEICE Transactions on Communications 14. Izadi D, Abawajy J, Ghanavati S (2015) An alternative clustering scheme in WSN. IEEE Sens J 15(7):4148–4155. https://doi.org/10.1109/jsen.2015.2411598 15. Mhemed R, Aslam N, Phillips W, Comeau F (2012) An energy efficient fuzzy logic cluster formation protocol in wireless sensor networks. Proc Comput Sci 10:255–262 16. Ahmadinia M, Meybodi MR, Esnaashari M, Alinejad Rokny H (2014) Energy-efficient and multi-stage clustering algorithm in wireless sensor networks using cellular learning automata. IETE J Res 59:774–782 17. Dong S, Zhou D, Ding W, Gong J (2013) Flow cluster algorithm based on improved K-means method. IETE J Res 59:326–333 18. Karyakarte MS, Tavildar AS, Khanna R (2015) Connectivity based cross-layer opportunistic forwarding for MWSNs, IETE J Res 61:457–465 19. Singh SP, Sharma SC (2018) Genetic-algorithm-based energy-efficient clustering (GAEEC) for homogenous wireless sensor networks. IETE J Res 64(5):648–659


20. Alnumay WS, Chatterjee P, Ghosh U (2014) Energy aware secure routing for wireless ad hoc networks. IETE J Res 60:50–59 21. Reddy KTV, Mahamuni C, Patnaik N (2015) Optimal backoff sleep time based protocol for prolonged network life with blacklisting of failure prone nodes in wireless sensor networks. IETE J Res. http://ieeexplore.ieee.org/abstract/document/7916768/

Chapter 43

SG_BIoT: Integration of Blockchain in IoT Assisted Smart Grid for P2P Energy Trading

J. Chandra Priya, V. Ramanujan, P. Rajeshwaran, and Ponsy R. K. Sathia Bhama

1 Introduction

The smart grid is a technology that allows full-duplex communication between the electric utility and the customers, sensing along the electrical lines to respond quickly to changing electricity demands. It moves the energy industry into a new era of reliability, availability, flexibility, accessibility, efficiency, and economic and environmental awareness. It is part of the IoT framework and is monitored and managed remotely through a network of transmission lines, smart meters, transformers, sensors, and software. Smart grids are provided with sensors to gather and transmit data that enable automatic adjustment of the electricity flow. An electricity disruption can lead to a series of failures and catastrophic events in banking, communication, and security. With the introduction of the smart grid, electricity utilities are prepared to address emergency situations, allowing automated re-routing when a utility fails; this minimizes outages and their effects. The main advantage of the smart grid is that it gives the consumer control. However, smart grids are vulnerable to multiple attacks on their communication and networking, which can have detrimental effects on their operation. Current IoT systems depend on centralized communication models such as the client-server architecture: IoT devices are recognized, validated, and communicated with through cloud servers. This centralized approach is prone to failures and can be compromised by hackers. There is a need to perform the functions of traditional IoT solutions without a centralized


control. A decentralized IoT architecture must support peer-to-peer communication and anonymity in device coordination. The ever-rising number of IoT devices has substantially increased the need for stability and security in storing and processing their data. Blockchain, a distributed, decentralized public ledger, is an ideal choice to solve the security issues of IoT: it is an effective platform that efficiently transfers and stores huge amounts of IoT data. Digital information is stored in a public blockchain that consists of blocks containing transaction information. A smart city is an efficient, sustainable, and technically advanced community that can work as a tool for controlling rapid urbanization and electricity-related problems. Blockchain attracts intense attention from the academic community because of its auditability, decentralization, anonymity, and persistency [1]. Blockchain-as-a-Service (BaaS) platforms are developed over cloud-computing environments; using these services, studies have investigated the application of blockchain technology in the IoT environment. Transparency is needed to reduce damage to BaaS platforms, and smart contracts are deployed for automatic repair and optimization of blockchain performance. Blockchain can be integrated into smart grids to decentralize and share electricity among houses: residents who own solar panels can produce and sell energy to other houses in their neighborhood. The blockchain acts as an automated transaction-accounting system, recording the available energy, its price, the location of the source, and the location of the destination where the energy is supplied or sold. Such a system could disrupt the traditional energy industries and create a market-driven shared economy. The blockchain uses cryptocurrency to secure the energy exchanges. This paper focuses on building a blockchain platform that enables the transmission of electric power from IoT-enabled transformers (IoT-T) to industries or apartments optimally and securely, region-wise, in a smart city.

2 Related Work

IoT poses a great threat to the privacy and security of sensitive information because of its need for a centralized party [1]. The paper highlights the opportunity to design an optimized and scalable blockchain architecture [2] for IoT using a blockchain network, wireless sensor networks, and a specific blockchain node responsible for deploying smart contracts. A lighter alternative to RSA [3], Elliptic Curve Cryptography, is used and provides better performance on resource-constrained devices. However, the performance of the system is lowered by the powerful hash function, which demands a huge amount of computation, resources, time, and energy. The paper [4] proposes optimization mechanisms for a multi-layered IoT network model combined with a blockchain network: the system optimizes the network by allocating additional computation to a secondary node when a primary node in the layer is about to exhaust its resources. Blockchain-as-a-Service over cloud computing is used to develop NutBaaS, which is used to attain a business code


by more efficient blockchain in business scenarios [5]. IoT also faces the security issues of real-time transfer and consistency [6]; that paper proposed a secure and low-latency clock-synchronization scheme for blockchain-enabled IoT, in which the blockchain nodes record and broadcast time to reduce attacks from external sources and to ensure trust. The security issues are categorized based on the layered IoT architecture in [7]. Monitoring of blockchain transactions is important for accountability and authentication [8]; a blockchain explorer is created and maintained for this purpose. A decentralized security architecture based on Software Defined Networking (SDN) coupled with blockchain technology has been proposed for IoT networks [9]. Blockchain faces many challenges in its integration with IoT [10–13]; the research explores the opportunities and challenges of blockchain-enabled IoT, analyses the challenges of blockchain-IoT applications, and improves IoT's potential using blockchain. Some important challenges include storage capacity, scalability, anonymity, data privacy, reliability, and decentralization. Blockchain provides good Quality of Service (QoS) in business platforms for clients as well as service providers [14, 15]. SimBlock, which is publicly available, is used to verify the behavior of a blockchain network [16]; the influence of a node in the blockchain can easily be investigated using it. Video surveillance is an indispensable management tool that faces many challenges, such as vulnerability to tampering and massive data volumes [17]; these problems are solved by a permissioned blockchain and edge computing. Named Data Networking (NDN) is used in security, and a key-management model has been proposed for it: the challenges faced by NDN, such as single points of failure and the overhead of certificate-chain traversal, are solved by blockchain-based key management [18]. Widely used Proof-of-Work (PoW)-based blockchains face challenges such as a low transaction rate and high power consumption [19]; a Proof-of-Stake blockchain called Bazo, used for IoT data streams, enhanced performance compared to PoW-based blockchains, and sharding and transaction-aggregation methods were used to improve Bazo's performance. A blockchain-based architecture for smart grids proposed a system that can completely record and track the energy that customers have consumed and generated [20, 21]. The paper reasoned out the importance of blockchain for energy sharing and procedurally explained the architectural construction of a blockchain-based smart contract for energy demand management.

3 Proposed Model

3.1 Basic Components

Blockchain

Blockchain is a distributed ledger that contains an ordered collection of records chained together in blocks. Each block in the blockchain contains information about the transactions of that block. There are nodes that are specially designed


for performing transactions and broadcasting blocks into the network. To broadcast a block in the network, the consensus nodes generate a certificate with respect to an event, and it is deployed and broadcast across the blockchain network. The blockchain network can be accessed only by the consensus nodes.

Public and Private Keys

To ensure data integrity and confidentiality, cryptographic keys are employed to securely perform transactions between consumers and electric utilities. There are two types of keys in any blockchain network: the private key and the public key. The private key is kept secret, and the public key is broadcast into the network. Consumers generate a private key, which is used as a digital signature for accessing data. A public key is generated by the consumer and transferred to the certificate authority, which acts as an authentication source for verifying the consumer's identity for data access. The public key is used to encrypt the data that is to be transferred to the consumer from the certificate authority.
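The sign-and-verify flow described above can be illustrated with a short Python sketch. This is a minimal example, assuming the third-party `ecdsa` package; the key-distribution and certificate-authority machinery of the paper is reduced here to plain signature verification.

```python
from ecdsa import SigningKey, NIST256p

# Consumer side: generate a key pair and sign a request message
private_key = SigningKey.generate(curve=NIST256p)   # kept secret by the consumer
public_key = private_key.get_verifying_key()        # sent to the certificate authority

request = b"transfer 5 kWh to meter 42"             # illustrative request payload
signature = private_key.sign(request)

# Certificate-authority side: verify the request really came from the holder
# of the private key matching the registered public key
assert public_key.verify(signature, request)  # raises BadSignatureError if forged
```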

3.2 Overview

The proposed model consists of IoT-enabled transformers (IoT-T), each of which acts as a central power distributor to meet the energy requirements of its area. A smart meter is installed at every house; it stores energy, senses the demand for electricity, and adjusts the amount of electricity supplied. The IoT-enabled transformers receive electric power, region-wise, from solar power plants, wind mills, thermal power plants, and nuclear power plants. Although a transformer is the centralized supplier for its region, the transformers are decentralized amongst themselves. Each house needs a varying amount of power in different time periods, and energy consumption varies from house to house. The IoT-T sends electric power to consumers in apartments or industries and is capable of identifying which apartment requires more or less energy: the basic idea behind the IoT-T is to regulate the amount of power transferred depending upon the usage. The smart city has a smart meter and a solar panel installed at each house. The IoT-T communicates with the smart meter to continuously supply power to the respective house to meet its electricity demands, and the smart meter sends data regarding the energy consumed and the energy produced by the locally installed solar panel. Whenever there is no power utilization in a house, the supplied electric power from the IoT-T and the energy stored from the locally installed solar panels are transferred back to the IoT-T, which uses this collected energy to meet the electricity demands of other houses. Thus, these transformers distribute power to a particular area while maintaining the details of the power supplied to, and received from, every house. These transactions are stored in the blockchain. The electricity bill for every house is calculated based on the transactions recording the amount of energy received from the IoT-T and the amount of energy transferred from the solar panel installed in the respective house.


3.3 Architecture

User Layer

The consumer layer is the tier-1 layer (Fig. 1); it consists of customers who use electricity from the electric utilities for their demands and requirements. Customers should have smart meters installed in their houses, and the electricity supply meets the energy demands of their appliances. The customers may or may not have solar panels installed. If a consumer produces energy through the solar panels installed in their house, they are called a prosumer, because they produce as well as consume energy. Prosumers can supply energy to those in need of electricity through peer-to-peer energy sharing. The consumer layer is interfaced with the mining layer in the smart grid network.

Mining Layer

The mining layer allows customers to register themselves as part of the smart grid network by stating the necessary information about themselves; it performs the task of registering a new customer. The certificate authority guarantees the authentication of users using a unique identification number generated for every customer during registration. Every transaction initiated by a customer must pass the verification of the certificate authority; if the certificate authority finds any transaction unauthorized, the address of the customer who initiated it is flagged. This layer acts as the first line of defence in the blockchain network.

Consensus Layer

The consensus layer has complete responsibility for the authentication and verification of every request in the smart grid network. The blocks in the consensus layer run a consensus mechanism to achieve agreement on a state. The most commonly used algorithm is proof of work (PoW), wherein a node that wants to join the blockchain network must prove, through the work it performs and deploys, that it is an authenticated user allowed to add new transactions to the blockchain. This mining process is time-consuming and demands high resource utilization. Whenever a user without permission tries to access data in the network, algorithms implemented on this layer deny such illegitimate requests and accesses; smart contracts are used for this purpose.

Blockchain Layer

The blockchain layer securely stores the transaction details, such as the customer ID, location, energy usage, energy produced, and ethers gained by the customer. Every node in the blockchain consists of a customer transaction chained together with the cryptographic hash of the previous block, a timestamp, and a nonce value; the nonce value determines the difficulty level for generating a valid block. The energy consumed and produced by the customers can be identified, and based on the timestamps, the electricity bill can be generated automatically, which ensures the back-traceability of the blockchain for accounting and auditing.

Fig. 1 Blockchain-smart grid architecture
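To make the block structure and proof-of-work idea concrete, the following is a minimal Python sketch of mining a block whose SHA-256 hash must start with a given number of zero hex digits. The block fields and the difficulty rule are simplified illustrations, not the exact implementation used in the paper.

```python
import hashlib
import json
import time

def mine_block(prev_hash, transactions, difficulty):
    """Search for a nonce so that SHA-256(block) starts with `difficulty` zeros."""
    timestamp = time.time()
    nonce = 0
    while True:
        header = json.dumps(
            {"prev_hash": prev_hash,          # chains this block to the previous one
             "timestamp": timestamp,
             "transactions": transactions,    # e.g., energy sent/received per meter
             "nonce": nonce},
            sort_keys=True).encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# Example: one energy-trading transaction mined at difficulty 4
nonce, digest = mine_block("0" * 64,
                           [{"from": "meter-17", "to": "meter-42", "kWh": 5}], 4)
print(nonce, digest)
```

With this rule, raising the difficulty by one multiplies the expected number of hash attempts by 16, which matches the steep growth of mining time with difficulty level reported later in Table 2.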


Electric Utilities

The electric utilities, which are the sources of energy supply, interface with the consensus layer through the consensus nodes to distribute energy to the consumers on the smart grid network. Both renewable and non-renewable energy sources contribute to electricity distribution and send power to the IoT-enabled transformers (IoT-T) that act as the centralized energy supplier in each area.

3.4 Working Flow

A customer who wishes to join or request resources from the network uses public-key cryptography, generating a public and a private key. The public key is sent to the certificate authority for verification, and the customer digitally signs the request message using his or her private key; this proves that he or she is the creator of the request message. The certificate authority can verify this using the received request message and the customer's public key, making the communication and interaction between the nodes in the blockchain network secure and authentic. Digital signatures are analogous to actual signatures on documents: they help to ensure that the transaction is initiated and signed by the owner. The transaction is sent to the miners in the mining layer, which verify its authenticity using the sender's public key. If the certificate authority finds that the transaction is legitimate, the transaction is encapsulated in a block and sent to the consensus layer. This is the layer where all the consensus blocks approve whether the block is allowed to join the network. The process, termed the consensus mechanism, runs a proof-of-work algorithm: the consensus blocks repeatedly compute the SHA-256 hash function for many inputs until the nonce value for the given block is identified, before adding it to the blockchain. This makes it computationally difficult for anyone to add illegitimate transactions or edit those already recorded in the blockchain. Thus, an authenticated user can join the network securely after successfully completing the PoW algorithm in the consensus layer. The customers not only act as consumers but also produce their own electricity using solar panels, which allows them to utilize the energy stored in the solar panel when required. The excess solar energy saved by an individual is recorded in the blockchain using the mining and consensus process. These energy details are stored in the public ledger, which allows transferring the energy to a customer who is in need of electricity. This process is performed automatically whenever a user produces solar energy beyond his or her requirements. At the same time, the process can be controlled in such a way that the user supplies electricity to a specific individual using the smart grid network integrated with blockchain. The


consumers save on electricity through the peer-to-peer energy-trading system. This energy is represented in the form of ethers in the blockchain; ethers also provide the incentive for the nodes to validate blocks. Whenever any customer is in need of electricity, he or she can request neighboring users to transfer energy, and the user transfers the requested energy in the form of ethers, which the customer can then use to meet his or her energy requirements. In this way, the smart grid network successfully realizes a micro-grid network, which helps the customers during energy crises and disasters. The customers act as both consumers and producers, and are hence known as prosumers. When consumers want to access the transaction details of their electricity usage, the consumer generates a key pair containing a public and a private key: the private key is stored, and the public key is shared with the smart grid. The consumer issues a request, known as a certificate, and digitally signs it using the private key. This is sent to the certificate authority in the smart grid, which confirms the request by signature verification with the help of the consumer's public key. The requested data is processed in the smart grid network, encrypted with the certificate authority's key, and sent to the requesting consumer, who decrypts the data received and reads the information. The electricity bill is calculated based on the amount of energy utilized and the amount of energy produced.

3.5 Algorithm Explanation

The smart contract is given as Algorithm 1; it allows customers to join the blockchain network and transfer ethers to a specified receiver address. prosumers is a list that stores the details of the customers. When a customer wants to join the network, the addCustomer function is called with the customer ID and location as arguments. owner is an address-type variable that stores the address of the customer initiating the transaction, which is implicitly given by msg.sender. The function addCustomer includes a modifier onlySender that denies unauthorized users from performing any transaction: the address of the user initiating the transaction must match the owner address variable initialized at the beginning. The customer is incentivized with ethers when selling their stored energy. When the customer wants to transfer energy to another user who is in need of electricity, the transferEnergy function sends the specified amount of ethers to the receiver address. The receiver can sell these ethers in the blockchain network to obtain electricity from the electric utilities (the supplier IoT-Ts).


Algorithm 1 Smart Contract—Pseudocode

Initialization:
    i = prosumerCount
    prosumers = list()
    owner = msg.sender
    customer = customer_details (id, location, etc.)

Functions:
    modifier onlySender {
        require(msg.sender == owner)
        checkDigitalSignature(msg.sender);
    }

    function addCustomer(string memory _id, string memory _location) public onlySender {
        i = i + 1
        prosumers[i] = customer(i, _id, _location);
        updateSmartGridDatabase(prosumers[i]);
    }

    function transferEnergy(uint amount, address receiver) {
        receiver.transfer(amount);
        updateBalance();
    }
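As a usage illustration, such a contract, once deployed on a local Ethereum node (e.g., Ganache, as in Table 3), could be invoked from Python with web3.py roughly as follows. The contract address and ABI are placeholders, the call names simply mirror Algorithm 1, and method names follow recent web3.py versions.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))   # local Ganache node
# CONTRACT_ADDRESS and CONTRACT_ABI are placeholders from the deployment step
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Register a prosumer, then transfer energy credits (ethers) to a receiver
tx1 = contract.functions.addCustomer("C-101", "Chennai").transact(
    {"from": w3.eth.accounts[0]})
tx2 = contract.functions.transferEnergy(5, w3.eth.accounts[1]).transact(
    {"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx2)
```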

4 Experimental Setup

The simulation environment has been set up on the Ethereum blockchain platform running on a private network, on an Intel Core i5 processor under the Windows 10 operating system. Table 1 shows the comparative study of the proposed system with other literature studies, Table 2 shows the simulation results obtained, and Table 3 depicts the environment and tools setup.

Table 1 Comparison of proposed work with other literatures

Features                           [20]  [21]  [22]  [23]  [24]  [25]  Proposed model
Data synchronization                Y     N     Y     Y     Y     Y     Y
Data modification                   Y     Y     Y     Y     N     Y     Y
Data conceptualization              N     Y     N     N     Y     Y     Y
Information audit log               N     N     N     Y     N     Y     Y
User friendly access                Y     N     N     Y     Y     N     Y
Optimal data storage                N     Y     N     Y     N     N     Y
Storage of reserved information     Y     N     Y     Y     Y     Y     Y

Table 2 Simulation results

Difficulty level    Mining time (s)    Hash
3                   0.00745            000a56………d56y
4                   1.2685             0000d7………c456
5                   9.0229             00000s………f67y
6                   20.0089            000000………e5ea

Table 3 Environment and tools setup

Environment      Tool
Language         Solidity, JavaScript
Library          Web3.js
Platform         Ethereum
Framework        Truffle
Dependencies     Ganache, Metamask extensions

The results prove that the proposed blockchain platform enables sharing of energy resources ideally.

5 Conclusion

This paper discussed building a blockchain-based platform for the smart grid network that allows users to consume and produce energy, and presented a layered approach to integrating blockchain into smart grid networks. Blockchain provides easy accountability and auditability, which allows dynamic pricing of electricity bills. Customers can easily join the blockchain with the help of the mining layer and the consensus layer; the details of customers requesting energy are obtained and stored in the blockchain, and access to the energy utilities is granted after successful verification of the users. Energy trading is made safe, secure, and completely decentralized, which not only proves advantageous for smart grids but also builds a path for the development of micro-grids. Further studies will focus on implementing the proposed system to achieve the desired results.

References 1. Thakore R, Vaghashiya R, Patel C, Doshi N (2019) Blockchain-based IoT: a survey. Procedia Comput Sci 155:704–709 2. Le-Dang Q, Le-Ngoc T (2019) Scalable blockchain-based architecture for massive IoT reconfiguration. In: 2019 IEEE Canadian conference of electrical and computer engineering (CCECE), pp 1–4. IEEE


3. Priya JC, Bhama PRS, Swarnalaxmi S, Safa AA, Elakkiya I (2018) Blockchain centered homomorphic encryption: a secure solution for e-balloting. In: International conference on Computer Networks, Big data and IoT, pp 811–819. Springer, Berlin 4. Chakraborty RB, Pandey M, Rautaray SS (2018) Managing computation load on a blockchain– based multi–layered Internet–of–Things network. Procedia Comput Sci 132:469–476 5. Chandra Priya J, Sathia Bhama PRK (2018) Disseminated and decentred blockchain secured balloting: apropos to India. In: 2018 tenth international conference on advanced computing (ICoAC), pp 323–327. IEEE 6. Fan K, Sun S, Yan Z, Pan Q, Li H, Yang Y (2019) A blockchain-based clock synchronization scheme in IoT. Future Gener Comput Syst 101:524–533 7. Khan MA, Salah K (2018) IoT security: review, blockchain solutions, and open challenges. Future Gener Comput Syst 82:395–411 8. Lee C, Kim H, Maharjan S, Ko K JWK (2019) Blockchain explorer based on RPC-based monitoring system. In: 2019 IEEE international conference on blockchain and cryptocurrency (ICBC), pp 117–119. IEEE 9. Rathore S, Kwon BW, Park JH (2019) BlockSecIoTNet: blockchain-based decentralized security architecture for IoT network. J Netw Comput Appl 143:167–177 10. Reyna A, Martín C, Chen J, Soler E, Díaz M (2018) On blockchain and its integration with IoT. Challenges and opportunities. Future Gener Comput Syst 88:173–190 11. Hossein KM, Esmaeili ME, Dargahi T (2019) Blockchain-based privacy-preserving healthcare architecture. In: 2019 IEEE Canadian conference of electrical and computer engineering (CCECE), pp 1–4. IEEE 12. Yu S, Lv K, Shao Z, Guo Y, Zou J, Zhang B (2018) A high performance blockchain platform for intelligent devices. In: 2018 1st IEEE international conference on hot information-centric networking (HotICN), pp 260–261. IEEE 13. Wang R, He J, Liu C, Li Q, Tsai WT, Deng E (2018) A privacy-aware PKI system based on permissioned blockchains. In: 2018 IEEE 9th international conference on software engineering and service science (ICSESS), pp 928–931. IEEE 14. Lee H, Sung K, Lee K, Lee J, Min S (2018) Economic analysis of blockchain technology on digital platform market. In: 2018 IEEE 23rd Pacific Rim international symposium on dependable computing (PRDC), pp 94–103. IEEE 15. Dittmann G, Jelitto J (2019) A blockchain proxy for lightweight IoT devices. In: 2019 Crypto valley conference on blockchain technology (CVCBT), pp 82–85. IEEE 16. Banno R, Shudo K (2019) Simulating a blockchain network with SimBlock. In: 2019 IEEE international conference on blockchain and cryptocurrency (ICBC), pp 3–4. IEEE 17. Wang R, Tsai WT, He J, Liu C, Li Q, Deng E (2019) A video surveillance system based on permissioned blockchains and edge computing. In: 2019 IEEE international conference on big data and smart computing (BigComp), pp 1–6. IEEE 18. Lou J, Zhang Q, Qi Z, Lei K (2018) A blockchain-based key management scheme for named data networking. In: 2018 1st IEEE international conference on hot information-centric networking (HotICN), pp 141–146. IEEE 19. Niya SR, Schiller E, Cepilov I, Maddaloni F, Aydinli K, Surbeck T, Stiller B (2019) Adaptation of proof-of-stake-based blockchains for IoT data streams. In: 2019 IEEE international conference on blockchain and cryptocurrency (ICBC), pp 15–16. IEEE 20. Li Y, Rahmani R, Fouassier N, Stenlund P, Ouyang K (2019) A blockchain-based architecture for stable and trustworthy smart grid. Procedia Comput Sci 155:410–416 21. 
Wang X, Yang W, Noor S, Chen C, Guo M, van Dam KH (2019) Blockchain-based smart contract for energy demand management. Energy Procedia 158:2719–2724 22. Metke AR, Ekl RL (2010) Security technology for smart grid networks. IEEE Trans Smart Grid 1(1):99–107 23. Mengelkamp E, Gärttner J, Rock K, Kessler S, Orsini L, Weinhardt C (2018) Designing microgrid energy markets: a case study: The Brooklyn Microgrid. Appl Energy 210:870–880


24. Ye F, Qian Y, Hu RQ (2015) An identity-based security scheme for a big data driven cloud computing framework in smart grid. In: 2015 IEEE global communications conference (GLOBECOM), pp 1–6. IEEE 25. Mylrea M, Gourisetti SNG (2017) Blockchain for smart grid resilience: exchanging distributed energy at speed, scale and security. In: 2017 Resilience Week (RWS), pp 18–23. IEEE

Chapter 44

Software Defined Network: A Clustering Approach Using Delay and Flow to the Controller Placement Problem

Anilkumar Goudar, Karan Verma, and Pranay Ranjan

1 Introduction

Modern telecommunication systems require wireless networks, as they are efficient, mobile, and highly responsive. Software Defined Networking (SDN) has emerged to enhance these systems, giving greater agility and flexibility. SDN provides open, user-controlled configuration of the forwarding hardware in a network. A most prominent open research issue on which SDN depends is the problem of edge controller placement [1]; the controller arrangement is one of the most significant parts of SDN [2]. The problem was first presented in [3] and is NP-hard [4]. Primarily, controller placement reduces to finding the number of nodes, and their locations, to be designated as controllers in the network. The locations of the controllers induce the following costs: the delay between the switches and their controller, the synchronization delay between the controllers, and the flow distribution among the clusters. The proposed algorithm views the problem in terms of a clustering approach and network partitioning: the entire SDN is divided into multiple sub-networks, each with a single controller and multiple switches. In the literature there are many clustering-based approaches, but they need initialization and are prone to local-minima problems. The proposed algorithm has been derived from the deterministic annealing (DA) algorithm, which is modified to avoid the initialization and



local minima problems. The proposed algorithm (1) is fast and scalable, as it has linear computational complexity, (2) does not need initialization, and (3) uses Shannon entropy to avoid the local-minima problem. The paper is organized as follows: Sect. 2 covers the related work; Sect. 3 formulates the problem and the proposed algorithm; Sect. 4 describes the solution approach for the stated problem; Sect. 5 presents the results and performance of the proposed solution; and Sect. 6 covers the conclusion and future work.

2 Related Work

The controller placement problem is one of the most prominent issues in multiple-controller architectures. With the introduction of multiple-controller architectures (such as Onix [6], Kandoo [8], and HyperFlow [7]), the controller placement problem has drawn much more attention. The problem aims to find the k nodes of the SDN to be designated as controllers. It was first reported in [3], and later [9] classified the work on it into four groups: latency-oriented, cost-based, reliability-oriented, and multi-objective. Killi et al. [10] proposed network partitioning using a controller-placement algorithm based on a combination of game-theoretic initialization and the k-means algorithm. Liao et al. [11] proposed a density-based controller placement that uses a clustering algorithm to partition the network into many sub-networks. Later, in 2014, [12] considered the load of the controller along with the latency. Zhong et al. [13] studied both latency and reliability and solved the problem by proposing an algorithm called MCC, which uses the minimum number of controllers to fulfil the requirements of reliability and delay. Wang et al. [14] proposed dividing the network into many sub-networks and presented an optimized k-means to lessen the latency between the switches and their controller in a sub-network. Das and Gurusamy [15] used a multi-objective optimization model, which derives a multi-period roll-out outline for controller placements. Soleymanifar et al. [5] proposed two main algorithms, namely ECP-LL and ECP-LB, which minimize the cost (delay) between the switches and the controller and also between two different controllers. This paper improves the existing algorithm by considering a new factor, load balancing, while finding the positions of the k controllers. The ECP-LL and ECP-LB algorithms escape from the local-optima problem by sensing local optima in linear time complexity. The deterministic annealing algorithm [16] is the stepping stone for ECP-LL, ECP-LB, and the currently proposed algorithm.


3 Problem Statement
Here, we first describe the wireless network and its partition, and then formulate the controller placement problem mathematically. Figure 1 shows a simple wireless network, where vertices represent the nodes and edges represent the communication links between the nodes. In large wireless networks, one or more nodes can be designated as controllers. In this paper, a network topology is represented as $G(V, E)$, where $V$ is the set of vertices and $E$ is the set of edges. The following notation is used in the paper: $V$ is the set of switches and $V_c$ the set of vertices that can be designated as controllers. Additionally, $P = \{p_i \in \mathbb{R}^d,\ i \in V\}$ gives the position of each node in the wireless network, and $C = \{c_i \in \{0,1\},\ i \in V_c\}$ indicates whether or not a node is assigned as a controller. Similarly, $R = \{r_{i,j} \in \{0,1\},\ i \in V,\ j \in V_c\}$ determines the controller assignment policy, where $r_{i,j} = 1$ if node $i$ is assigned to controller $j$ and $r_{i,j} = 0$ otherwise. The cost between switch $i$ and controller $j$ is the Manhattan distance $d_{i,j} = \|p_i - p_j\|$. The synchronization cost between two controllers $i$ and $j$ is $c_i c_j d_{ij}$. The algorithm divides the network into $k$ sub-networks, where

$$\bigcup_{i=1}^{k} V_i = V \tag{1}$$

The algorithm uniformly distributes the flow among the clusters; mathematically, the target load per cluster is $V/m$, where $V$ is the total number of nodes and $m$ is the total number of clusters formed so far. The main objective of the algorithm is to minimize

$$\min_{R,C}\ \sum_{i \in V} \sum_{j \in V_c} r_{ij}\, d_{ij} \;+\; \gamma \sum_{i,j \in V_c} c_i c_j\, d_{ij} \;+\; \gamma \Big( \sum_{k \in V} r_{kj} - \frac{V}{m} \Big) \tag{2}$$

$$\text{s.t.} \quad \sum_{j \in V_c} r_{ij} = 1, \quad \forall i \in V \tag{3}$$

Fig. 1 Wireless network


$$r_{ij} \le c_j, \quad \forall i \in V,\ j \in V_c \tag{4}$$

$$c_i \in \{0, 1\}, \quad i \in V_c \tag{5}$$

$$r_{ij} \in \{0, 1\}, \quad i \in V,\ j \in V_c \tag{6}$$

4 Solution Approach and Algorithm
The algorithm assumes that the Manhattan distance between the nodes is equivalent to the delay and synchronization costs, and that the flow of the network can be uniformly distributed among the clusters. It also assumes that the geospatial coordinates of the nodes are given as input instead of the mutual delays, and that the initial flow is in proportion to the number of nodes in the entire network. Following deterministic annealing [16], the distortion, defined as the average weighted distance between the nodes and the centroids, acts as the basic cost function:

$$D = \sum_{i=1}^{V} p(p_i) \sum_{j=1}^{m} p(c_j / p_i)\, D(p_i, c_j) \tag{7}$$

where $P = \bigcup_{i=1}^{V} p_i$ are the data points and $C = \bigcup_{j=1}^{m} c_j$ are the cluster centroids, i.e., the controllers to be determined. Here $p(c_j / p_i)$ is the association probability of a point $p_i$ with centroid $c_j$, and $D(p_i, c_j)$ is the distortion measure, chosen to be the Manhattan distance. In Eq. 7, $p(p_i)$ is assumed to be $1/V$, and $l(V)$ denotes the load of the network. To adapt DA clustering to load balancing, the distortion measure is defined as

$$D(p_i, c_j) = d(p_i, c_j) + \gamma \sum_{i,j \in V_c} d(c_i, c_j) + \gamma\, l(V) \tag{8}$$

The system also has a free energy [5], $F = D - TH$, where $T$ is the temperature and $D$ is the distortion measure. The aim is to minimize the distortion by lowering the temperature, so the objective function is given as

$$D = \sum_{i=1}^{V} \sum_{j=1}^{m} p(c_j / p_i)\, d(p_i, c_j) \;+\; \gamma \sum_{i=1}^{V} \sum_{j=1}^{m} p(c_j / p_i)\, \frac{V}{m} \;+\; \gamma \sum_{k=1}^{m} \sum_{j=1}^{m} d(c_j, c_k) \tag{9}$$


Setting the partial derivatives of the free-energy term with respect to the association probabilities to zero yields

$$p(c_j / p_i) = \frac{\exp\!\big(-D(p_i, c_j)/T\big)}{Z_i}, \qquad Z_i = \sum_{j=1}^{m} \exp\!\big(-D(p_i, c_j)/T\big) \tag{10}$$

The annealing schedule (cooling temperature) obeys $T = 1/\log n$ [16]; hence, our algorithm converges fast compared with other algorithms. The algorithm is summarized as follows.
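For illustration, a minimal Python sketch of the DA-style iteration is given below, combining the association probabilities of Eq. (10), the load-augmented distortion of Eq. (8), and the $T = 1/\log n$ cooling schedule; the value of γ, the iteration count, and the weighted-mean centroid update are our own simplifications, not the authors' exact algorithm.

```python
import numpy as np

def da_controller_placement(points, m, gamma=0.1, iters=50):
    """Sketch of DA-style clustering for controller placement.

    points: (V, d) array of node coordinates; m: number of controllers.
    """
    V, dim = points.shape
    # Start all centroids near the data mean; DA needs no careful initialization.
    centroids = points.mean(axis=0) + 1e-3 * np.random.randn(m, dim)
    for n in range(2, iters + 2):
        T = 1.0 / np.log(n)                                   # cooling schedule T = 1/log n
        # Manhattan distances between every node and every centroid: (V, m).
        d = np.abs(points[:, None, :] - centroids[None, :, :]).sum(axis=2)
        # Controller-to-controller synchronization penalty, same for all nodes.
        sync = np.abs(centroids[:, None, :] - centroids[None, :, :]).sum(axis=2).sum(axis=1)
        D = d + gamma * sync[None, :] + gamma * V / m         # distortion as in Eq. (8)
        # Association probabilities of Eq. (10): softmax of -D/T (numerically stabilized).
        p = np.exp(-(D - D.min(axis=1, keepdims=True)) / T)
        p /= p.sum(axis=1, keepdims=True)
        # Weighted-mean centroid update (a weighted median would be exact for
        # the Manhattan distance; the mean keeps the sketch short).
        centroids = (p.T @ points) / p.sum(axis=0)[:, None]
    return centroids, p.argmax(axis=1)  # controller positions, node assignments
```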

5 Results
The performance of the algorithm is compared with the existing ECP-LL algorithm. The datasets used are the Internet2 OS3E topology and the geospatial coordinates of Indian railway stations. All experiments are done on a system equipped with a 3.60 GHz Intel Core i7-7700 CPU, 16 GB RAM, and 64-bit Windows 10 as the operating system. The results are explained for the Indian railway stations data and


Fig. 2 Controller placement for Indian railway stations

the Internet2 OS3E topology in turn. Figure 2 shows the controller placement identified by ECP-LL and by the proposed algorithm, Load Balancing with ECP-LL, for Indian railway stations with the number of stations set to 2000, 4000, 6000, and 8000, respectively. The location of each controller identified by the Load Balancing with ECP-LL algorithm matches the controller location identified by the ECP-LL algorithm for the Indian railway stations data, and Load Balancing with ECP-LL converges even earlier than ECP-LL while identifying the same controller locations. Table 1 and Fig. 3 show the number of iterations taken by ECP-LL and by the proposed Load Balancing with ECP-LL algorithm. It is also found that the expected distortion always matches the calculated value in the case of the Load Balancing algorithm, in contrast to the ECP-LL algorithm; hence, the Load Balancing algorithm converges faster than ECP-LL. Figure 4 shows the comparison of the distortion measure for 2000 nodes.

Table 1 Number of iterations for Indian railway stations data

| Number of nodes | ECP-LL | Load balancing with ECP-LL |
|---|---|---|
| 2000 | 45 | 30 |
| 4000 | 55 | 32 |
| 6000 | 58 | 33 |
| 8000 | 59 | 33 |
| 10,000 | 63 | 35 |

Fig. 3 Iteration comparison between ECP-LL and load balancing with ECP-LL algorithm

Fig. 4 Comparison of calculated distortion measure with projected distortion at each iteration for V = 2000


Fig. 5 Controller placement for Internet2 OS3E topology

Fig. 6 Comparison of calculated distortion measure with projected distortion measure at each iteration for Internet2 OS3E topology

The performance of the proposed algorithm is also compared on the Internet2 OS3E topology. Figure 5 shows the controller placement for the Internet2 OS3E topology, and Fig. 6 shows the comparison of the distortion measure for it. Here too, the proposed Load Balancing with ECP-LL algorithm runs faster than ECP-LL.

6 Conclusion and Future Work
Uniform flow distribution, or load balancing, is an important factor in the cluster of sub-networks, as it ensures that no sub-network gets exhausted. The paper solves the controller placement problem using a clustering approach; the objective of the study is to encapsulate load balancing in ECP-LL and verify its accuracy. The proposed ECP Load Balancing algorithm considers uniform flow distribution along with the synchronization and delay costs. It is shown that the proposed algorithm converges faster than ECP-LL for various numbers of nodes in a network topology. As an extension of this work, the ECP leader-based algorithms [ECP-LL and ECP-LB] can also be extended to adapt the load balancing factor and find the appropriate controller locations.

References

1. Alshamrani A, Guha S, Pisharody S, Chowdhary A, Huang D (2018) Fault tolerant controller placement in distributed SDN environments. In: 2018 IEEE international conference on communications (ICC). IEEE
2. Kuang H, Qiu Y, Li R, Liu X (2018) A hierarchical K-means algorithm for controller placement in SDN-based WAN architecture. In: 2018 10th international conference on measuring technology and mechatronics automation (ICMTMA). IEEE
3. Heller B, Sherwood R, McKeown N (2012) The controller placement problem. ACM SIGCOMM Comput Commun Rev 42(4):473–478
4. Singh AK, Srivastava S (2018) A survey and classification of controller placement problem in SDN. Int J Netw Manage 28(3):e2018
5. Soleymanifar R, Srivastava A, Beck C, Salapaka S (2019) A clustering approach to edge controller placement in software defined networks with cost balancing. Preprint at http://arXiv.org/1912.02915
6. Koponen T, Casado M, Gude N, Stribling J (2014) Distributed control platform for large-scale production networks. U.S. Patent No. 8,830,823, 9 Sept 2014
7. Tootoonchian A (2010) A distributed control plane for OpenFlow. In: Proceedings of the NSDI internet network management workshop/workshop on research on enterprise networking (INM/WREN)
8. Jimenez Y, Cervello-Pastor C, Garcia AJ (2014) On the controller placement for designing a distributed SDN control layer. In: 2014 IFIP networking conference. IEEE
9. Lu J, Zhang Z, Hu T, Yi P, Lan J (2019) A survey of controller placement problem in software-defined networking. IEEE Access 7:24290–24307
10. Killi BPR, Reddy EA, Rao SV (2018) Cooperative game theory-based network partitioning for controller placement in SDN. In: 2018 10th international conference on communication systems & networks (COMSNETS). IEEE
11. Liao J, Sun H, Wang J, Qi Q, Li K, Li T (2017) Density cluster-based approach for controller placement problem in large-scale software defined networkings. Comput Netw 112:24–35
12. Yao G, Bi J, Li Y, Guo L (2014) On the capacitated controller placement problem in software defined networks. IEEE Commun Lett 18(8):1339–1342
13. Zhong Q, Wang Y, Li W, Qiu X (2016) A min-cover based controller placement approach to build reliable control network in SDN. In: NOMS 2016–2016 IEEE/IFIP network operations and management symposium. IEEE
14. Wang G, Zhao Y, Huang J, Duan Q, Li J (2016) A K-means-based network partition algorithm for controller placement in software defined network. In: 2016 IEEE international conference on communications (ICC). IEEE
15. Das T, Gurusamy M (2018) INCEPT: incremental controller placement in software defined networks. In: 2018 27th international conference on computer communication and networks (ICCCN). IEEE
16. Rose K (1998) Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc IEEE 86(11):2210–2239

Chapter 45

Netra: An RFID-Based Android Application for Visually Impaired Pooja Nawandar, Vinaya Gohokar, and Aditi Khandewale

1 Introduction
Globally, the number of people of all ages having visual impairment is estimated to be 285 million, of whom 39 million are blind; people aged 50 years and older account for 82% of all blind people. The major causes of visual impairment are uncorrected refractive errors (43%) and cataract (33%); the leading cause of blindness is cataract (51%). Blindness is a lack of vision; it may also refer to a loss of vision that cannot be corrected with glasses or contact lenses. Visually impaired people face many difficulties in life, like navigating around places, finding required things, choosing clothes, and many more. It is very easy for people who are not visually impaired to select daily clothes: we select a matching top and bottom by eyesight, and we can differentiate between regular and party-wear dresses. For blind people, however, this is very difficult, and they depend exclusively on a helper for it. If a blind person lives alone then, according to a survey conducted, as soon as they return home they wash those clothes immediately and keep the pair together in a bag after drying. It is also difficult for them to trace any missing object in their day-to-day life, or to get the information of a product in front of them. All

P. Nawandar (B)
J.D.I.E.T Yavatmal Princeton University, Yavatmal, India
e-mail: [email protected]
V. Gohokar
School of ECE, MIT-WPU, Pune, India
e-mail: [email protected]
A. Khandewale
MIT-WPU, Pune, India
e-mail: [email protected]


these regular life difficulties, which are not difficult at all for people with vision, are faced by blind people. Various techniques have been used by researchers in this field: a mini-laptop or a PDA can be used to perform the texture [] and color matching process, and RFID-based object tracking for the visually impaired has been proposed with three operating modes: normal, in which the database is updated regularly with locations; query, in which the name of the object is indicated and the last location in which the object was detected is supplied; and search, in which an alarm is activated if the object is detected [ ]. In order to ease their life, an Android application has been prepared using the React Native platform; for the user interface and backend, AWS, DynamoDB, and Lambda are the platforms used. Two APIs are created: one to fetch stored information and one to update the information at the user end. The main application offers three functionalities to the user: with the same app, one can locate missing products, get information on a product present in front of them, and find the exact pair of clothes to wear. For the actual implementation of the application, a USB RFID reader with tags is used, with Amazon Web Services as the backend. As soon as the USB RFID reader is plugged into the mobile, the application is ready to use. With the help of Google voice assistance, the user can give a voice command to select the desired mode. In mode 1, the user gets the information of products in range of the RFID reader: all products have RFID tags attached to them in which information about the product is already saved, and as soon as a tag comes in range of the reader, the information of that particular tag is given to the user. If the user wants to locate or search for a missing object, mode 2 can be selected, using the allocated keyword of the product to find it.

1.1 System Architecture
The system architecture consists of hardware and software; a pictorial representation is given in Fig. 1. In this application, a USB RFID reader and tags are the heart of the hardware structure. Radio frequency identification (RFID) is an automatic identification method relying on storing and remotely retrieving data using devices called RFID tags or RFID transponders. The data capacity of each tag is large enough that each RFID tag has its own unique identity. The main components of an RFID system are its tag and reader. Tags contain an inbuilt microchip that holds information for the reader; the reader contains an antenna and transceiver that receive the information corresponding to the tag. There are three types of RFID: passive, active, and semi-passive. Passive RFID tags contain no battery; they get power from the reader, which sends electromagnetic waves that induce a current in the tag antenna, powering the microchip present on the tag. Active RFID tags contain a battery that runs the microchip circuitry; owing to the battery, the tag is able to send a stronger signal to the reader, so the communication range increases, but so do the size and bulkiness. In this system, a USB RFID reader along with RFID tags is used; the specifications are given in Table 1.

Fig. 1 Pictorial representation of the system

Table 1 RFID reader and tags used in the Netra application

| Details of USB RFID reader | Different types of RFID tag |
|---|---|
| Reader card speed: 0.2 s; Frequency: 125 kHz; Operating temperature: −35° to +60°; Storage temperature: −50° to +80°; Operating humidity: 10–90%; Working voltage: DC 5 V; Working current: 70 mAh; Read range: 0–80 mm; Dimensions: 35 mm × 35 mm × 7 mm | RFID card; RFID IC key tag; Clothing RFID tag |


Fig. 2 Creation of compute in Lambda

For the software implementation, React Native, an open-source mobile application framework created by Facebook, is used along with Amazon Web Services (AWS). AWS is a comprehensive and broadly adopted cloud platform, offering over 175 services from data centers globally; it is very user-friendly and serves millions of customers across the globe, including the fastest-growing start-ups. AWS provides many more cloud services than other providers, from infrastructure technologies like compute, storage, and databases to emerging technologies such as machine learning and artificial intelligence, data lakes and analytics, and the Internet of Things. This makes it faster, easier, and more cost-effective to move existing applications to the cloud and build nearly anything you can imagine. In order to use AWS services, the user has to create an AWS account. To develop the Netra application, AWS services such as Lambda and DynamoDB are used. Lambda is used for compute: it creates a Node.js function and an execution role that grants the function permission to upload logs. Lambda assumes the execution role when the function is invoked and uses it to create credentials for the AWS SDK and to read data from event sources (Fig. 2). To create the database for Netra, DynamoDB is used. DynamoDB is a fully managed non-relational database service that provides fast and predictable performance with seamless scalability. Using it, a table containing the information of RFID UIDs is created, as shown in Fig. 3.
Working of Netra Application
The pictorial representation of all the modes of the Netra application is given below; the detailed description of each mode is given in the following paragraphs (Fig. 4). The Android application works in two phases, as shown in Fig. 5.
User Application Phase
As shown below, the user application phase is divided into three parts.


Fig. 3 Creation of database in DynamoDB
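To make the two APIs concrete, the following is a minimal Python sketch of the fetch and update operations against a DynamoDB table using boto3 (the equivalent Node.js Lambda handlers would follow the same shape); the table name `NetraTags` and the attribute names `tag_id`/`info` are hypothetical, not taken from the paper.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("NetraTags")  # hypothetical table name

def fetch_tag_info(tag_id: str) -> str:
    """API 1: fetch the stored information for an RFID tag UID."""
    resp = table.get_item(Key={"tag_id": tag_id})
    item = resp.get("Item")
    return item["info"] if item else "unknown tag"

def update_tag_info(tag_id: str, info: str) -> None:
    """API 2: register a new tag or update the description of an existing one."""
    table.put_item(Item={"tag_id": tag_id, "info": info})

# Example: register a newly purchased item, then look it up.
update_tag_info("2763601586", "Bag")
print(fetch_tag_info("2763601586"))  # -> "Bag"
```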

Fig. 4 Pictorial representation

Fig. 5 Phases of Netra application: the user application phase and the registration/information updating phase


Fig. 6 Pictorial representation of the user application phase: asset information, asset tracking, and cloth matching

1. Asset Information
2. Asset Tracking
3. Cloth Matching (Fig. 6)

Asset Information
The use of RFID provides a solution for identifying information regarding the assets. In system terms, the information on the RFID tag is captured by the RFID reader, and this information is sent through the RFID Manager to the Location Information Middleware. The Location Information Middleware (RFID database) then converts the RFID reader's information into location information; the location information and the RFID tag information are sent on to the Integrated Asset Management Database. Here the location information is tied to the asset information so that the asset location can be identified in real time (Figs. 7 and 8).
Asset Tracking
From a group of many objects, it is very difficult for blind people to track a particular object and get its information. To solve this problem, the asset tracking mode of the application is used. The user switches to tracking mode by giving a voice command to the application. The application holds the asset name the user wants to search for, and on moving the RFID receiver over the set of objects, as soon as the receiver comes in contact with the corresponding tag, it gives the message shown in Figs. 9 and 10.
Cloth Matching
Matching the correct top and bottom in day-to-day life is a very difficult task for blind people: on a daily basis, a person needs to choose an appropriate pair of clothes to wear, and it is very difficult for a visually impaired person to choose clothes with an appropriate match. A USB RFID reader with embedded or movable RFID tags is used to perform the color matching process. In this mode of the application, a tag is prefixed on the cloth, and as soon as we move the USB RFID reader over the cloth, we get information about the correct matching cloth. The result for a pair is given as a sound output to the user: "match" or "not match" (Figs. 11 and 12).
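The three modes reduce to simple lookups over the tag database; a minimal sketch of that dispatch logic is shown below. The dictionaries mirror the tag tables of Figs. 7 and 11, and the function names and return strings are our own illustration, not the app's actual code.

```python
# Tag database as in Fig. 7 and the matching table of Fig. 11 (illustrative subset).
TAG_INFO = {"2763601586": "Bag", "3219109682": "Mobile", "3216648578": "Blue Pouch"}
MATCH_IDS = {"2763601586": {"3219109682", "3216648578", "2319200754"}}

def asset_information(tag_id):
    """Mode 1: announce what the scanned tag is attached to."""
    return TAG_INFO.get(tag_id, "unknown object")

def asset_tracking(scanned_ids, wanted_name):
    """Mode 2: raise a match as soon as the wanted object's tag is read."""
    for tag_id in scanned_ids:
        if TAG_INFO.get(tag_id) == wanted_name:
            return f"{wanted_name} found (tag {tag_id})"
    return f"{wanted_name} not found"

def cloth_matching(top_tag, bottom_tag):
    """Mode 3: report 'match' if the bottom tag is paired with the top tag."""
    return "match" if bottom_tag in MATCH_IDS.get(top_tag, set()) else "not match"
```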

| Tag ID | Information |
|---|---|
| 2763601586 | Bag |
| 3219109682 | Mobile |
| 3216648578 | Blue Pouch |
| 2319200754 | Bottle |
| 32116785154 | Keys |

Fig. 7 Flowchart for asset information

Fig. 8 Actual image of asset information

Registration/Information Updating Phase
This phase is added for updating or deleting the information fed to the tags. Using this feature, the user can easily update the information at their end. For instance, when the user purchases a new dress and wants to add it to the application database, they just have to tag both the bottom and the top and upload the corresponding tag numbers to the database using the device. And if they want to discard anything, for instance



Fig. 9 Flowchart for asset tracking

Fig. 10 Demonstration image of asset tracking

if their bottle is damaged and they want to delete it from the database, then simply by using the delete icon they can remove that product (Fig. 13).

| Cloth Tag ID | Matching IDs |
|---|---|
| 2763601586 | 3219109682, 3216648578, 2319200754, 3211678515, 3219109681 |
| 2763601585 | 4219109682, 4216648578, 4319200754, 2116785154, 4219109681 |
| 2763601584 | 5219109682, 5216648578, 5319200754, 5116785154, 5219109681, 5219109689 |

Fig. 11 Flow chart for dress matching

Fig. 12 Demonstration image of dress matching

2 Results
The architecture has been verified and tested: RFID tags are used to track assets, to give information on a particular object, and to find a matching pair of clothes. The Android application makes it fast and convenient for the user to select a set of clothes, with an audio message of "match" or "not match"; approximately 10 sets of clothes were tested, and all 10 gave the correct output. The most attractive feature of the system is that the user can edit the list of RFIDs stored in the data: if a set of clothes is of no more use, it can be deleted from memory, and a new purchase can be added to the list easily. This application will definitely help blind people to get ready without any confusion.

Chapter 46

Efficient Routing for Low Power Lossy Networks with Multiple Concurrent RPL Instances Jinshiya Jafar, J. Jaisooraj, and S. D. Madhu Kumar

1 Introduction
Internet of Things (IoT) is a significant breakthrough in the area of Information and Communication Technology (ICT). It is the concept of connecting devices or objects embedded with sensors ("things") to the global Internet platform, which combines data from these and uses it to address specific needs. IoT has the following components: sensors, networks, data processing, and a user interface (refer to Fig. 1). The network is an essential component of IoT that connects the "things"/sensors. IoT networks are often referred to as Low Power Lossy Networks (LLNs), which consist of nodes constrained in terms of memory, power, and processing capability. Such a resource-constrained nature, coupled with the ever-growing demands of the IoT paradigm, makes routing in LLNs an extremely challenging task. Similar to the case of other advanced networks, the Internet Engineering Task Force (IETF) has developed base protocols for LLN routing. The IPv6 Routing Protocol for Low power Lossy Networks (RPL) is the first protocol developed for LLNs by the Routing over Low Power and Lossy Networks (RoLL) [1] working group, and is defined to be the standard protocol for LLNs by the IETF. RPL works by constructing a Destination Oriented Directed Acyclic Graph (DODAG) directed toward the sink/root. Often considered the de facto IoT routing protocol, RPL has been subject to a lot of advancements ever since its inception. Further details regarding RPL are provided in Sect. 2. The majority of the research in

J. Jafar · J. Jaisooraj (B) · S. D. Madhu Kumar
Department of Computer Science and Engineering, National Institute of Technology Calicut, Calicut, Kerala, India
e-mail: [email protected]

Fig. 1 Internet of Things (IoT)

RPL comes under the consideration of issues like mobility and scalability. Even though these are important issues, LLNs, being part of the IoT, pose even more challenges; one such challenge is the handling of multiple RPL instances running concurrently. A characteristic that distinguishes IoT networks (such as LLNs) from other wireless networks is that IoT is heavily application oriented. Consequently, networks taking part in the IoT environment have to serve a wide range of applications. LLNs, which are connected to the Internet through IoT gateways, have to serve applications corresponding to each of these gateways; even worse, these applications may coexist in the same network. As a result, multiple DODAGs, each serving unique applications, need to be handled by an efficient routing protocol. Still, the amount of work carried out to tackle this challenge is surprisingly small. In this paper, we propose an RPL variant which addresses this issue. We have designed an Objective Function (OF) which takes into account different traffic types so as to capture the different requirements of various applications (or multiple instances). The rest of this paper is organized as follows: Sect. 2 provides a walkthrough of related works in the area, Sect. 3 explains the new routing metric proposed, Sect. 4 provides the results and discussion, and finally Sect. 5 concludes the paper.


2 Related Work
As mentioned in Sect. 1, routing in LLNs has attracted research interest, which has led to the development of a variety of routing protocols covering different categories like proactive, on-demand, opportunistic, and cognitive routing. The IPv6 Routing Protocol for Low power and Lossy Networks (RPL) and the Lightweight On-demand Ad hoc Distance vector routing—next generation (LOADng) [2] form the two basic protocols in LLN routing: the former belongs to the class of proactive routing, whereas the latter belongs to on-demand routing. In keeping with the scope of this paper, we provide a brief review of RPL variants. Ever since its inception in 2012 by the RoLL working group of the IETF, RPL has maintained its tag as the core IoT routing protocol. RPL is a distance-vector routing protocol that operates by constructing a DODAG. The DODAG is maintained using control messages like the DODAG Information Object (DIO), DODAG Information Solicitation (DIS), and Destination Advertisement Object (DAO). Of these control messages, DIO plays the most important role in constructing a DODAG. DIO messages carry Objective Function (OF) information, which is used for calculating the rank of a node. There are two standard Objective Functions used in RPL—OF0 and the Minimum Rank with Hysteresis Objective Function (MRHOF). OF0 uses the hop count metric, whereas MRHOF is based on the metric container concept specified in RFC 6551 [3]; see [4] for a detailed study of various Objective Functions. The node with the minimum rank value is always chosen as the preferred parent; in this manner, every node present in the DODAG is always part of some route toward the destination (i.e., the root). DIS and DAO messages handle the functions of providing connection/re-connection to new/disconnected nodes and setting downward routes, respectively. In order to meet the increasing demands of the IoT paradigm, several enhancements have been made in the years following RPL's inception. RPL was basically designed for static networks, but things have changed over time with the introduction of paradigms like the Industrial IoT (IIoT), where mobility is of utmost importance. Further, the number of applications currently supported by the IoT is huge compared to the scenario during its inception; hence, there is a need for further advancements in RPL to meet the new requirements. Even though a lot of work and studies are progressing to meet the challenges of a changing network, most of them concentrate on the mobility aspect of IoT. On one hand, protocols like [5–7] focus on incorporating mobility in RPL; [8, 9], on the other hand, focus on resource efficiency and mode switching (storing and non-storing modes), respectively. Comparatively, only minimal research has been done on LLN routing with multiple RPL instances. In [10], the authors proposed the usage of two different Objective Functions (OFs) to handle multiple instances. The metrics in consideration were Hop Count (HC) and Expected Transmission Count (ETX), but these two metrics alone do not guarantee meeting the requirements of diverse IoT applications. Nguyen et al. in [11]


put forward another RPL variant, which handles only two RPL instances. Nassar et al. [12] developed a unique Objective Function (OF) which considered multiple traffic scenarios. In this work, we have chosen [12] as the base paper and made further enhancements to the Objective Function. The proposed Objective Function is explained in Sect. 3.

3 MEHOF
As mentioned in Sects. 1 and 2, the new Objective Function, the Multiple instances ETX-Hop count Objective Function (MEHOF), aims at providing efficient routing in a low power lossy network with multiple instances running concurrently. This is performed by adjusting the values of α and β according to the criticality of the application (see Table 1 for the symbols used). Based on the criticality of applications, we can tune α and β to meet the network requirements efficiently, as in the paper [12] by Nassar et al. (as shown in Table 2):

• α = 0.9 and β = 0.1 for critical traffic, with a reliability of > 99.5% and a delay ranging between 1 and 30 s.
• α = 0.1 and β = 0.9 for non-critical traffic, with a reliability of > 98% and a delay of a few days.
• α = 0.3 and β = 0.7 for periodic traffic, with a reliability of > 98% of packets and an authorized delay ranging between 5 min and 4 h.

MEHOF comprises the Expected Transmission Count (ETX), Hop Count (HC), Energy Consumed (EC), and the number of instances of which a node is part (m). In a low power lossy network with multiple RPL instances, each RPL instance denotes a different application requiring different reliability, latency, and criticality.

Table 1 Table of symbols in MEHOF

| Parameters | Value |
|---|---|
| α, β | Tunable parameters |
| ETX | Expected transmission count |
| HC | Hop count |
| m | Number of instances |

Table 2 Instance classification and values of α, β

| Instance | Type of traffic | Reliability | α, β |
|---|---|---|---|
| Instance 1 | Critical traffic | Network should be highly reliable | α = 0.9, β = 0.1 |
| Instance 2 | Non-critical traffic | Networks that are low/medium reliable | α = 0.1, β = 0.9 |
| Instance 3 | Periodic traffic | Network can be medium reliable | α = 0.1, β = 0.9 |


These differences have been handled using the tunable parameters α and β, where α = 1 − β, 0 < α < 1 and 0 < β < 1. MEHOF is formulated as follows:

$$\text{MEHOF} = (\alpha \times \text{ETX} + \beta \times \text{HC}) + \text{EC} + m \tag{1}$$

Compared to ETX, hop count gives faster convergence, as observed by Pradeska et al. [13], who assessed the performance of the standard Objective Functions OF0 and MRHOF and found that OF0, based on hop count, is suitable for networks needing faster convergence (convergence time of RPL: the amount of time needed by all reachable nodes in the network to join a DAG) and lower power consumption. They also observed that OF0 based on hop count performs better in mobile environments than MRHOF based on ETX. These observations served as the motivation for us to design MEHOF. Algorithm 1 and Algorithm 2 provide the detailed procedure for calculating MEHOF: Algorithm 1 calculates the rank value of a node based on the MEHOF value returned by Algorithm 2, and after the calculation, the node with the minimum rank value is chosen as the preferred parent. MAX_PATH_COST, RPL_DAG_MC_ETX_DIVISOR, RPL_DAG_MC, RPL_DAG_MC_ETX, and RPL_DAG_MC_ENERGY are standard parameters specified in MRHOF [14].

Algorithm 1
Input: A node p
Output: Rank value
if p == NULL then
    return MAX_PATH_COST * RPL_DAG_MC_ETX_DIVISOR
end
if RPL_DAG_MC == RPL_DAG_MC_NONE then
    return p.rank + p.link_metric
else if RPL_DAG_MC == RPL_DAG_MC_ETX then
    return p.mc.obj.etx + p.link_metric
else if RPL_DAG_MC == RPL_DAG_MC_ENERGY then
    calculate cpu, lpm, transmit, and listen time using Energest
    EC = cpu + lpm + transmit + listen
    HC = p.dag.instance.min_hoprankinc
    calculate MEHOF
    return MEHOF + p.link_metric
end


Table 3 Network simulation environment

| Parameters | Value |
|---|---|
| Number of nodes | 5, 10, 20, –, 50 |
| Transmission range | 50 m |
| Network protocol | RPL |
| Topology | Random |
| Radio medium | UDG (Unit disk graph) |
| Contiki mote types | Tmote Sky, Wismote, Z1 |

Algorithm 2
Input: HC, EC
Output: MEHOF value
MEHOF = (α × ETX + β × HC) + EC + m
return MEHOF
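For illustration outside Contiki, the rank computation of Algorithms 1 and 2 can be sketched in a few lines of Python; the (α, β) table follows the traffic-class values listed in Sect. 3, and the parameter names and example numbers are our own.

```python
# (alpha, beta) per traffic class, following the values in Sect. 3.
TRAFFIC_PARAMS = {"critical": (0.9, 0.1), "non_critical": (0.1, 0.9), "periodic": (0.3, 0.7)}

def mehof(etx, hc, ec, m, traffic="critical"):
    """Eq. (1): MEHOF = (alpha*ETX + beta*HC) + EC + m."""
    alpha, beta = TRAFFIC_PARAMS[traffic]
    return (alpha * etx + beta * hc) + ec + m

def rank(parent, traffic="critical"):
    """Algorithm 1, energy metric case: rank = MEHOF + link metric."""
    return mehof(parent["etx"], parent["hc"], parent["ec"], parent["m"], traffic) \
        + parent["link_metric"]

# A node prefers the candidate parent with the minimum rank value.
candidates = [{"etx": 2.0, "hc": 3, "ec": 1.2, "m": 2, "link_metric": 1.0},
              {"etx": 1.5, "hc": 4, "ec": 0.9, "m": 1, "link_metric": 1.5}]
preferred = min(candidates, key=rank)
```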

4 Results and Discussion
4.1 Simulation
The proposed Objective Function (OF) has been simulated and tested in Contiki, an open-source operating system for memory-constrained, low power devices released under the BSD license. We have used the COOJA network simulator present in Contiki for simulating the protocol with the new Objective Function. Nodes are taken to be static and placed randomly. Simulations have been carried out for the Tmote Sky mote, Wismote, Z1 mote, etc., with the number of nodes ranging from 10 to 50. Table 3 shows the simulation parameters and their values.

4.2 Observation
The proposed Objective Function, MEHOF, has been compared with MRHOF and OF0. Since MEHOF has been designed for critical, non-critical, and periodic traffic patterns, the same patterns are used in the simulation. Important network parameters like packet delivery ratio, control overhead, average inter-packet time, and average power consumption of nodes are used for the comparison.


• Packet Delivery Ratio—The packet delivery ratio is defined as the ratio of packets successfully received to the total packets sent. Results indicate a better performance of MEHOF compared to the other two when it comes to packet delivery ratio (Fig. 2).
• Control Overhead—Control overhead indicates the number of control messages that need to be sent for the successful transmission of a packet. The simulation results indicate a significant reduction in control overhead for MEHOF, as shown in Fig. 3.
• Average Inter-packet Time—This measures the time between packets arriving at a host over a period. As shown in Fig. 4, MEHOF outperforms the other two in all cases.

Fig. 2 Packet delivery ratio versus number of nodes

Fig. 3 Control overhead versus number of nodes


Fig. 4 Average Inter-packet time versus number of nodes

• Average Power Consumption—An extensive simulation (results shown in Fig. 5) asserts that the average power consumption of nodes in MEHOF is comparable with that of MRHOF and OF0. From the above results, we can conclude that MEHOF provides better values for packet delivery ratio, control overhead, and inter-packet time, while providing comparable values for average power consumption.

Fig. 5 Average power consumption versus number of nodes


5 Conclusion and Future Work
In this paper, we have presented an RPL variant which uses a novel Objective Function named the Multiple instance ETX-Hop count Objective Function (MEHOF) so as to support multiple routing instances running concurrently in the same network. We have considered different traffic patterns, namely critical, non-critical, and periodic, in order to test the protocol. Simulation results have shown a better performance, and thereby increased efficiency, for our proposed protocol. Considering the fact that IoT is application oriented, scenarios involving multiple instances (each serving unique applications) need to be given due consideration. As part of future work, we aim to make the protocol more accurate by making the nodes identify the number of instances of which they are part, rather than entering it manually. The same scenario can also be extended to include mobile nodes.

References

1. Thubert P, Winter T, Brandt A, Hui J, Kelsey R, Levis P, Pister K, Struik R, Vasseur JP, Alexander R (2012) RPL: IPv6 routing protocol for low-power and lossy networks. IETF
2. Clausen T, Yi J, Herberg U (2017) The lightweight on-demand ad hoc distance-vector routing protocol—next generation (LOADng): protocol, extension, and applicability. Comput Netw 126:125–140
3. Vasseur JP, Kim M, Pister K, Dejean N, Barthel D (2012) Routing metrics used for path calculation in low-power and lossy networks. In: RFC 6551, pp 1–30. IETF
4. Lamaazi H, Benamar N (2019) A comprehensive survey on enhancements and limitations of the RPL protocol: a focus on the objective function. Ad Hoc Networks, Elsevier
5. Kharrufa H, Al-Kashoash H, Kemp AH (2018) A game theoretic optimization of RPL for mobile Internet of Things applications. IEEE Sens J 2520–2530
6. Bouaziz M, Rachedi A, Belghith A (2019) EKF-MRPL: advanced mobility support routing protocol for Internet of mobile things: movement prediction approach. Future Gener Comput Syst, Elsevier 822–832
7. Lamaazi H, Benamar N, Imaduddin MI, Habbal A, Jara AJ (2016) Mobility support for the routing protocol in low power and lossy networks. In: 30th international conference on advanced information networking and applications workshops, pp 809–814
8. Kim HS, Cho H, Kim H, Bahk S (2017) DT-RPL: diverse bidirectional traffic delivery through RPL routing protocol in low power and lossy networks. Comput Netw 150–161
9. Ko J, Jeong J, Park J, Jun JA, Gnawali O, Paek J (2015) DualMOP-RPL: supporting multiple modes of downward routing in a single RPL network. ACM Trans Sens Netw 1–20
10. Banh M, Mac H, Nguyen N, Phung KH, Thanh NH, Steenhaut K (2015) Performance evaluation of multiple RPL routing tree instances for Internet of Things applications. In: 2015 international conference on advanced technologies for communications (ATC), pp 206–211
11. Long NT, Uwase MP, Tiberghien J, Steenhaut K (2013) QoS-aware cross-layer mechanism for multiple instances RPL. In: Advanced technologies for communications (ATC), pp 44–49
12. Nassar J, Berthomé M, Dubrulle J, Gouvy N, Mitton N, Quoitin B (2018) Multiple instances QoS routing in RPL: application to smart grids. Sensors 18(8):2472
13. Pradeska N, Najib W, Kusumawardani SS (2016) Performance analysis of objective function MRHOF and OF0 in routing protocol RPL IPv6 over low power wireless personal area networks (6LoWPAN). In: 2016 8th international conference on information technology and electrical engineering (ICITEE), pp 1–6
14. Gnawali O, Levis P (2012) The minimum rank with hysteresis objective function. IETF

Chapter 47

Deep Learning-Based Wireless Module Identification (WMI) Methods for Cognitive Wireless Communication Network Sudhir Kumar Sahoo, Chalamalasetti Yaswanth, Barathram Ramkumar, and M. Sabarimalai Manikandan

1 Introduction
Recent advances in the field of wireless communication, like IoT and M2M, have resulted in a scarcity of spectrum. Numerous application-specific tasks, e.g., spectrum enforcement, coverage map generation, regulation control by wireless operators, and the positioning and detection of wireless signals, rely extensively on monitoring the wireless spectrum in time, space, and frequency. However, the multidisciplinary nature of the task makes continuous spectrum monitoring over a large area difficult. Cognitive Radio (CR) technology is seen as a potential candidate to address spectrum scarcity. One of the important characteristics of CR is sensing the environment, also termed RF (Radio Frequency) situational awareness, which comprises channel estimation and automatic modulation classification (AMC). IoT devices use protocols like ZigBee, Bluetooth, Wi-Fi, LoRa, etc., all of which use the same ISM band. There is also the possibility of the presence of unintentional/malicious users. If the spectrum is not managed properly, this may lead to poor quality of service (QoS). For spectrum management, it is important to

S. K. Sahoo (B) · C. Yaswanth · B. Ramkumar · M. Sabarimalai Manikandan
School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Bhubaneswar, Jatani, Khordha 752050, India
e-mail: [email protected]
C. Yaswanth
e-mail: [email protected]
B. Ramkumar
e-mail: [email protected]
M. Sabarimalai Manikandan
e-mail: [email protected]


monitor the spectrum and identify the kind of users. Automatic modulation classification (AMC) is one of the most integral components of spectrum monitoring. Signal detection uses AMC as an intermediate step before demodulation, where the modulation scheme employed in the detected signal is identified. AMC algorithms involve extracting features like cyclostationarity [1], cumulants [2], and higher order statistics [3], and using these features for classification. For classification, algorithms like ANN [4], Support Vector Machines (SVM) [5], and Deep Learning (DL) [6] are used. One of the shortcomings of AMC algorithms in the literature is that they do not identify the kind of wireless protocol that has been used; many of the IoT protocols in the ISM band use the same modulation scheme. Also, most of the AMC algorithms in the literature are studied on synthetically generated signals. In this work, a Deep Learning-based wireless spectrum monitoring system is proposed. The proposed algorithm classifies between popular IoT communication protocols like Bluetooth, ZigBee, and Wi-Fi. The DL model is trained on real-time recorded RF signals; for recording, an indoor wireless testbed was developed, using a Blade RF SDR. For DL, different architectures like 2D-CNN, LSTM, and CLDNN were used. The paper is organized as follows: in Sect. 2, the real-time signal acquisition procedure is explained along with the wireless testbed; in Sect. 3, the different DL architectures used in this work are explained along with the training mechanism; Sect. 4 presents the performance analysis and comparison of the different models; and Sect. 5 presents the conclusion and future work.

2 System Model
2.1 Real-time Signal Acquisition
As mentioned earlier, real-time signals are used in this work to train the DL models; the receiver model is shown in Fig. 1. In order to record the real-time signals, a Blade RF is used. The Blade RF is an SDR that can down-convert RF signals to I/Q samples in the frequency range from 47 MHz to 6 GHz, with a maximum sampling rate of 61.44 MHz. The Blade RF is tuned to 2.4 GHz for recording Bluetooth, ZigBee, and Wi-Fi signals. In this work, the received signals are sampled at a rate of 2 megasamples per second by the RFIC AD9361 present in the Blade RF, and the obtained I/Q samples are used to train the DL models. The SDR is interfaced to a computer using the libbladeRF driver and controlled from the command-line interface to change its settings. In order to record real-time Bluetooth, ZigBee, and Wi-Fi signals, an indoor testbed was built, shown in Fig. 2. A ZigBee transceiver module is present on the right side, transmitting ZigBee signals at 2440 MHz. The Blade RF SDR module on the left side is connected to a laptop running SDR# (SDRSharp), a spectrum analyzer software, tuned to 2440 MHz. The Blade RF captures the ZigBee signals at 2440 MHz, converts them into I/Q values, and sends them to this software for processing.


Fig. 1 Capturing of signal through Blade RF SDR

Fig. 2 Testbed developed to record the wireless signals

In the laptop, the top plot shows the power spectral density, and in the bottom plot, the spectrogram is displayed. The specifications of the recorded signals are shown in Table 1. Signal acquisition was performed in an isolated environment, where each individual signal was collected for 60 s.
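To make the data path concrete, the following sketch turns a recorded capture into the 2 × 128 I/Q segments used for training; the file format (interleaved float32 I/Q, as commonly produced by SDR tooling) and the file name are assumptions, not details from the paper.

```python
import numpy as np

def load_iq_segments(path, seg_len=128):
    """Read interleaved float32 I/Q samples and split into (N, 2, seg_len) segments."""
    raw = np.fromfile(path, dtype=np.float32)      # I0, Q0, I1, Q1, ...
    iq = raw.reshape(-1, 2).T                      # shape (2, num_samples)
    n_seg = iq.shape[1] // seg_len
    # Keep whole segments only, then move the segment index to the front.
    segments = iq[:, : n_seg * seg_len].reshape(2, n_seg, seg_len).transpose(1, 0, 2)
    return segments                                # each segment is a 2 x 128 array

# e.g. segments = load_iq_segments("zigbee_2440MHz_2Msps.bin")  # hypothetical file
```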

Table 1 Range of frequencies captured by the SDR

| S. No. | Wireless protocol | Center frequency (MHz) | Bandwidth (MHz) |
|---|---|---|---|
| 1 | Wi-Fi | 2412 | 2401–2423 |
| 2 | ZigBee | 2444 | 2442.5–2445.5 |
| 3 | Bluetooth | 2444 | 2434–2454 |

The amplitude and phase plots of the real-time collected I and Q samples of the Wi-Fi, Bluetooth, and ZigBee signals are shown in Fig. 3, from which it can be inferred that the recorded signals are quasi-stationary random processes; hence, DL-based algorithms are used to classify them.

3 Deep Learning Models
As mentioned earlier, DL models are used in this work for spectrum monitoring. Machine learning is a set of algorithms which can learn a statistical model from a history of data to accomplish a specific task [7]. The obtained statistical model is data-driven and requires the assistance of domain knowledge. However, the prediction accuracy of machine learning models depends on the choice of data representation and feature selection [7]. DL-based models do not require feature selection and hence overcome this limitation of ML-based models. Three DL models—the convolutional neural network (CNN), Long Short-Term Memory (LSTM), and convolutional long short-term deep neural network (CLDNN)—are considered; they are briefly described in the following subsections.

3 Deep Learning Models As mentioned earlier in this work DL models are used for spectrum monitoring. Machine learning, a set of algorithm which can learn a statistical model from history of data to accomplish a specific task [7]. The obtained statistical model is data driven model and it requires the assistance of domain knowledge information. However the prediction accuracy of machine learning models is dependent on the choice of representing the data and choosing the features [7]. DL based models do not require feature selection and hence overcomes this limitation of ML based models. Three DL models like convolutional neural network (CNN), Long Short Term Memory (LSTM), Convolutional long short-term neural network (CLDNN) are considered and they are briefly described in the following subsections.

3.1 CNN Architecture Model
Efficient spatial feature extraction is performed by the proposed CNN architecture, where a group of convolution and pooling layers arranged in a hierarchical manner selects local features from the multidimensional input data array, also referred to as a tensor [7]. The input may consist of time-series data or images, which consist of a two-dimensional grid of pixel values. After the input layer, a set of hidden layers extracts several temporal features. These layers are called "hidden" layers because the nature of their weights is not known in advance, owing to their random initialization [7, 8]. The layers in the CNN architecture are made up of groups of filters used to extract features in a coarse-to-fine, hierarchical style; therefore, they are also referred to as convolutional layers. These filters, or kernels, perform the convolution operation over the input to extract spatial information, producing an output encoded as a transformed form of the input; hence, they are also referred to as feature detectors [7]. The visible layer, which is also called the input layer of the proposed network, has a size of 2 × 128, where x_k ∈ R^{2×128}. The CNN structure utilized for wireless protocol classification is given in Fig. 4.


Fig. 3 The amplitude and phase plots of the real-time collected I and Q samples: a Bluetooth signal, b ZigBee signal, c Wi-Fi signal for 128 samples


Fig. 4 Proposed CNN Model

The network receives captured amplitude/phase data vectors x_k ∈ R^{2×128}, consisting of sampled complex signal values from state-of-the-art wireless devices such as Wi-Fi, ZigBee, and Bluetooth. High-level temporal features are extracted by nine hidden convolutional layers with the help of kernels and the nonlinear activation function ReLU [8]. The convolution operation in the first layer is performed using 192 kernels of size 1 × 3, stacked one after the other; it performs a 2-D convolution over the input samples representing the wireless signals, with padding chosen such that the output and input have the same dimension. These filters create high-level feature maps, which are offered as input to the next hidden representation of the proposed model. Dropout regularization is incorporated in each layer


with a dropout probability of 0.5 to overcome the problem of overfitting [8]. Finally, the tensor is flattened and fully connected to a dense layer of 128 neurons with the ReLU activation function to induce nonlinearity. In the final layer, we incorporate a softmax classifier, which gives the probability of the input signal x belonging to a particular labeled class y, given by P(y = k | x; θ), where k is a one-hot encoded vector [7]. In this case, k ∈ R^3 for the classification of Bluetooth, Wi-Fi, and ZigBee signals.
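A condensed Keras sketch of a model of this shape is given below; for brevity, only three of the nine convolutional layers are shown, and the filter counts after the first 192 × (1 × 3) layer are our assumptions.

```python
from tensorflow.keras import layers, models

def build_cnn(num_classes=3):
    """Condensed version of the CNN of Fig. 4 (three of the nine conv layers shown)."""
    model = models.Sequential([
        layers.Input(shape=(2, 128, 1)),                   # one I/Q segment as a 2x128 "image"
        layers.Conv2D(192, (1, 3), padding="same", activation="relu"),
        layers.Dropout(0.5),
        layers.Conv2D(96, (1, 3), padding="same", activation="relu"),   # filter count assumed
        layers.Dropout(0.5),
        layers.Conv2D(48, (1, 3), padding="same", activation="relu"),   # filter count assumed
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),   # P(y = k | x; theta)
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```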

3.2 LSTM Architecture Model
Although a CNN may provide better performance at extracting spatial features, it does not perform as well on time series [9]. The temporal properties of the I and Q values of the real-time recorded signal are most important in ISM protocol classification. The LSTM can be considered a special type of architecture for analyzing time-series data, using the concept of selective read, write, and forget units implemented by gates; hence, LSTMs give better performance for time-series data analysis compared to CNNs [9]. As the input samples are in polar form, LSTM layers can be used to extract the temporal dependency of both the amplitude and phase of the different wireless signals. In the proposed model, four LSTM layers with 256 cells each and four dropout layers have been used. A dense layer with the softmax activation function [8] is used as the last layer to project the output of the fourth LSTM layer onto the final probabilities of the 3 classes. The proposed model structure is shown in Fig. 5.

Fig. 5 Proposed LSTM architecture model
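A corresponding Keras sketch of this four-layer, 256-cell LSTM stack follows; the dropout rate is an assumption.

```python
from tensorflow.keras import layers, models

def build_lstm(num_classes=3):
    """Four stacked 256-cell LSTM layers with dropout, ending in a softmax over 3 classes."""
    model = models.Sequential([layers.Input(shape=(128, 2))])   # 128 steps of (amplitude, phase)
    for i in range(4):
        model.add(layers.LSTM(256, return_sequences=(i < 3)))   # stacked layers return sequences
        model.add(layers.Dropout(0.5))                          # dropout rate assumed
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```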


Fig. 6 Proposed CLDNN Model

3.3 CLDNN Architecture Model
The convolutional long short-term deep neural network (CLDNN) introduced in [9] is a combination of the CNN and LSTM architecture models. The model contains four convolutional layers with 256, 256, 80, and 80 filters, respectively, and four dropout layers with a rate of 0.4 to avoid overfitting on the testing data. Subsequently, an LSTM of 50 units has been added, which gave the best performance on the basis of experiments. The architecture of the CLDNN model is shown in Fig. 6.
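A Keras sketch of this CLDNN arrangement is given below; the reshaping between the convolutional stack and the LSTM is our own choice of wiring, not specified in the paper.

```python
from tensorflow.keras import layers, models

def build_cldnn(num_classes=3):
    """CLDNN sketch: four conv layers (256, 256, 80, 80 filters) feeding a 50-unit LSTM."""
    model = models.Sequential([layers.Input(shape=(2, 128, 1))])
    for filters in (256, 256, 80, 80):
        model.add(layers.Conv2D(filters, (1, 3), padding="same", activation="relu"))
        model.add(layers.Dropout(0.4))
    model.add(layers.Permute((2, 1, 3)))        # -> (128, 2, 80): put the time axis first
    model.add(layers.Reshape((128, 2 * 80)))    # -> a 128-step sequence for the LSTM
    model.add(layers.LSTM(50))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```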

3.4 Implementation Detail
The models in Figs. 4, 5 and 6 were trained and tested on the high-computation cloud platform Google Colab with a CUDA-enabled graphics processing unit (GPU), using the Keras library. All models were trained on 70% of randomly selected samples with a batch size of 1024, and the remaining 30% of samples were used for testing. For training the models described above, 114,688 segments per protocol, each of duration 7.8125 ms and size [2 × 128] samples, were used.
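As a rough illustration of this training setup (70/30 split, batch size 1024), the snippet below sketches the step in Keras; the epoch count, the placeholder data, and the one-hot label encoding are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# Placeholder arrays standing in for the recorded segments and their labels:
# 0 = Bluetooth, 1 = ZigBee, 2 = Wi-Fi.
X = np.random.randn(1024, 2, 128, 1).astype("float32")
y = np.random.randint(0, 3, size=1024)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True)
model = build_cldnn()  # any of the three builders sketched above
model.fit(X_train, to_categorical(y_train, 3),
          batch_size=1024, epochs=10,          # epoch count assumed
          validation_data=(X_test, to_categorical(y_test, 3)))
```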

4 Results and Discussion
In this section, the performance of all three DL models is evaluated using real-time signals. For the real-time evaluation, 491,520 segments of recorded data, each of duration 7.8125 ms, are considered; each segment has 256 samples (128 for I and 128 for Q). The performance metrics used for evaluation are described in the following subsection.


4.1 Performance Metrics
For evaluation, the performance metrics considered are accuracy, precision rate (PR), recall rate (RR), and $F_{1score}$, which are defined as follows:

$$PR = \frac{TP}{TP + FN} \tag{1}$$

$$RR = \frac{TP}{TP + FP} \tag{2}$$

$$F_{1score} = 2 \times \frac{PR \times RR}{PR + RR} \tag{3}$$

where true positive (TP), false positive (FP), and false negative (FN) are defined as:

• A true positive occurs when the estimated class of the signal is the same as the labeled class of that signal.
• A false positive for a class occurs when the signal does not belong to that class, but the estimated outcome predicts the signal to be of that class.
• A false negative for a class occurs when the signal belongs to that class, but the estimated outcome predicts the signal not to be of that class.

The confusion matrices of all three DL models are shown in Tables 2, 3 and 4, respectively; the diagonal elements indicate the accuracy of classification. The confusion matrices are obtained using the testing data. The overall average accuracies of the three models are summarized in Table 5, and the other performance metrics in Table 6. From the results, it can be seen that CLDNN outperforms 2D-CNN and LSTM, with an accuracy of 91.48%. CLDNN also achieves an RR of 91.89%, a PR of 92.5%, and an F1 score of 91.48%, which is better than 2D-CNN and LSTM. This is because CLDNN combines the advantages of both 2D-CNN and LSTM.
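Given a confusion matrix like those in Tables 2–4, the reported per-class metrics follow directly from Eqs. (1)–(3); a small sketch, using the paper's PR/RR conventions:

```python
import numpy as np

def per_class_metrics(cm):
    """cm[i, j]: number of class-i test signals predicted as class j (as in Tables 2-4)."""
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp        # class-i signals predicted as some other class
    fp = cm.sum(axis=0) - tp        # other-class signals predicted as class i
    pr = tp / (tp + fn)             # Eq. (1), the paper's PR convention
    rr = tp / (tp + fp)             # Eq. (2), the paper's RR convention
    f1 = 2 * pr * rr / (pr + rr)    # Eq. (3)
    return pr, rr, f1

cm_cnn = np.array([[48859, 262, 77],
                   [7929, 41319, 81],
                   [14207, 230, 34492]])
print(per_class_metrics(cm_cnn))    # Bluetooth PR ~ 0.9931, RR ~ 0.6877, as in Table 6
```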

Table 2 Confusion matrix for the 2D CNN based scheme (input segments of length 128 samples)

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 48,859 | 262 | 77 |
| ZigBee (Z) | 7929 | 41,319 | 81 |
| Wi-Fi (W) | 14,207 | 230 | 34,492 |

Normalized confusion matrix

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 0.99 | 0.01 | 0.00 |
| ZigBee (Z) | 0.16 | 0.84 | 0.00 |
| Wi-Fi (W) | 0.30 | 0.00 | 0.70 |

Table 3 Confusion matrix for the LSTM architecture based scheme (input segments of length 128 samples)

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 42,610 | 1610 | 4978 |
| ZigBee (Z) | 2124 | 45,202 | 2003 |
| Wi-Fi (W) | 4231 | 1616 | 43,082 |

Normalized confusion matrix

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 0.87 | 0.03 | 0.10 |
| ZigBee (Z) | 0.04 | 0.92 | 0.04 |
| Wi-Fi (W) | 0.09 | 0.03 | 0.88 |

Table 4 Confusion matrix for the CLDNN based scheme (input segments of length 128 samples)

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 44,734 | 895 | 3569 |
| ZigBee (Z) | 1767 | 45,952 | 1610 |
| Wi-Fi (W) | 3794 | 915 | 44,220 |

Normalized confusion matrix

| | B | Z | W |
|---|---|---|---|
| Bluetooth (B) | 0.91 | 0.02 | 0.07 |
| ZigBee (Z) | 0.04 | 0.93 | 0.03 |
| Wi-Fi (W) | 0.08 | 0.02 | 0.90 |

Table 5 Performance comparison of DL models

| Architecture model | 2D-CNN | LSTM | CLDNN |
|---|---|---|---|
| Training accuracy | 84.69 | 98.69 | 92.21 |
| Testing accuracy | 84.54 | 88.76 | 91.48 |

Table 6 Performance comparison of the proposed schemes (SL: segment length in number of samples)

| SL | Class | LSTM RR | LSTM PR | LSTM F1 | 2D CNN RR | 2D CNN PR | 2D CNN F1 | CLDNN RR | CLDNN PR | CLDNN F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| 128 | Bluetooth | 87.02 | 86.60 | 86.81 | 68.77 | 99.31 | 81.26 | 89.97 | 90.92 | 89.93 |
| | ZigBee | 93.33 | 91.63 | 92.48 | 98.82 | 83.72 | 90.63 | 96.21 | 93.15 | 94.65 |
| | Wi-Fi | 86.05 | 88.05 | 87.03 | 99.54 | 70.49 | 82.51 | 89.51 | 90.37 | 89.94 |
| | Avg. | 88.80 | 88.76 | 88.77 | 89.04 | 84.50 | 84.80 | 91.89 | 92.50 | 91.48 |


5 Conclusion and Future Work
In this work, a DL-based automatic wireless protocol classification system is proposed. The proposed system is designed and tested using real-time recorded signals obtained from an indoor wireless testbed and can classify between Bluetooth, ZigBee, and Wi-Fi. Three DL models, 2D-CNN, LSTM, and CLDNN, are considered. From the performance analysis, it is found that CLDNN outperforms 2D-CNN and LSTM. In future work, combinations of the above-mentioned signals, Bluetooth + ZigBee, Bluetooth + Wi-Fi, ZigBee + Wi-Fi, and Bluetooth + ZigBee + Wi-Fi, will be taken into consideration for classification. Further, a system will be designed using a Raspberry Pi, in which a deep learning algorithm will process the real-time signal received from the SDR receiver. When this system is taken to any geographical location, the wireless communication protocols present in that area will be displayed on an LCD screen connected to the Raspberry Pi.

References

1. Ramkumar B (2009) Automatic modulation classification for cognitive radios using cyclic feature detection. IEEE Circ Syst Mag 9(2):27–45
2. Dobre OA, Abdi A, Bar-Ness Y, Su W (2006) Cyclostationarity-based blind classification of analog and digital modulations. In: IEEE military communications conference (MILCOM). IEEE, pp 1–7
3. Lichun L (2002) Comments on signal classification using statistical moments. IEEE Trans Commun 50(2):195
4. Azzouz EE, Nandi AK (1996) Modulation recognition using artificial neural networks. In: Automatic modulation recognition of communication signals. Springer, Boston, MA, pp 132–176
5. Zhang W (2014) Automatic modulation classification based on statistical features and support vector machine. In: 14th URSI general assembly and scientific symposium (URSI GASS). IEEE, pp 1–4
6. O'Shea TJ, Roy T, Clancy TC (2018) Over-the-air deep learning based radio signal classification. IEEE J Sel Top Sig Process 12(1):168–179
7. Kulin M, Kazaz T, Moerman I, De Poorter E (2018) End-to-end learning from spectrum data: a deep learning approach for wireless signal identification in spectrum monitoring applications. IEEE Access 6:18484–18501
8. Goodfellow I, Bengio Y, Courville A (2016) Deep learning, vol 1. MIT Press. http://www.deeplearningbook.org
9. Ramjee S, Ju S, Yang D, Liu X, Gamal AE, Eldar YC (2019) Fast deep learning for automatic modulation classification. arXiv preprint arXiv:1901.05850

Chapter 48

Style Transfer for Videos with Audio
Gaurav Kabra and Mahipal Jadeja

1 Introduction

1.1 What is Style Transfer?

Consider an art image A and your photograph P. We want to construct an image X that matches the style of A and the content of P at the same time, by optimizing the pixel values of X. Style transfer is an emerging topic in image processing, made practical by the GPU and TPU optimizations that have become possible only recently. Existing approaches focus on only video or only audio; our work stylizes both video and audio simultaneously.

1.2 Potential Applications of Style Transfer

The work for video can be used to generate synthetic artworks and artistic videos, for example applying effects from famous paintings, and can be used in movies such as James Cameron's Avatar, so that actors can shoot without makeup and, later, computers can make them appear as if they originally had makeup. The work for audio can be used to create a new audio from two input audio signals; for instance, background music can be modified by mixing in the style of another audio. Many mobile applications may be developed using our approach. The most similar (yet not the same) application is "Voice Changer with Effects", where your voice may be changed to that of a robot, for example. It may extend its functionality in future updates by incorporating our audio work, which stylizes the background audio rather than the voice.

G. Kabra (B) · M. Jadeja
Malaviya National Institute of Technology, Jaipur, India
e-mail: [email protected]
M. Jadeja
e-mail: [email protected]

1.3 Organization of This Paper

The rest of the paper is organized as follows. Section 2 discusses related work, along with an explanation of the key concepts of the various types of losses used in style transfer. Section 3 discusses the methodology of the proposed approach, including information about the dataset and the processing technique. Section 4 discusses the key results and their implications, and compares our proposed method with an existing method. The final section summarizes our work and provides future directions for the research.

2 Related Work

2.1 Early Developments

Understanding the underlying style of an image is a crucial aspect of computer vision, yet until recently it received relatively little research attention [1–3]. Gatys et al. [4] put forward the first working methodology for style transfer in images. Their method used a type of CNN [5], the VGG-19 architecture, pre-trained on the ImageNet dataset of the Stanford Vision Lab, to extract the content and style of an image, showing that it is possible to separate the content and style of an image. They used average-pooling instead of max-pooling, as the former gave more visually appealing output; moreover, their model did not have any fully connected (FC) layers. Other related approaches are discussed in [6], and audio-specific ideas are discussed in [7].

2.2 Method for Image Style Transfer

We want to construct an image X such that X matches the style of A and the content of P simultaneously. We start by initializing X to some random values (called noise).

Content Loss: Let us select a hidden layer l in VGG-19 to calculate the content loss. Let p be the original image and x the generated image, and let P^l and F^l denote the feature representations of the respective images at layer l. The content loss is then defined as:

L_{content}(p, x, l) = \frac{1}{2} \sum_{i,j} \left( F_{ij}^{l} - P_{ij}^{l} \right)^2    (1)

Fig. 1 Different filters at layer l

Style Loss: Before understanding what style loss is, we first need to understand what the style of an image is. Intuitively, it is the texture information and local color scheme, but not the global arrangement. Mathematically, it is the correlation between the filters at a given layer l. For example, Fig. 1 shows different filters (or channels, or feature maps) at layer l. The correlation between different filters/channels is computed as the dot product between the vectorized feature maps i and j at layer l; the matrix thus obtained is called the Gram matrix (G). Two channels are correlated if and only if the dot product across the activations of the two filters is large:

G_{ij}^{l} = \sum_{k} F_{ik}^{l} F_{jk}^{l}    (2)

Figure 2 shows a scheme for the Gram matrix calculation. The style loss is the squared difference between the Gram matrix of the style image and the Gram matrix of the generated image. Let a be the style image and x the generated image, and let A^l and G^l denote their respective style representations in layer l. The contribution of layer l to the total style loss is

E_l = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left( G_{ij}^{l} - A_{ij}^{l} \right)^2    (3)

Fig. 2 Gram matrix calculation

and hence the total style loss is:

L_{style}(a, x) = \sum_{l=0}^{L} w_l E_l    (4)

where the w_l are weighting factors for the contribution of each layer to the total loss. The process of optimizing the pixel values performs two tasks at the same time: minimizing both the content loss (L_content) and the style loss (L_style) by using backpropagation. The loss function to be minimized therefore turns out to be:

L_{total}(P, A, X) = \alpha \times L_{content} + \beta \times L_{style}    (5)

Here α and β are hyper-parameters that need to be set. The ratio α/β determines whether the emphasis is on content or on style:
1. A large α/β ratio puts more emphasis on the content of P in X.
2. A small α/β ratio puts more emphasis on the style of A in X.
Other audio-related works are [7–10].
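To make Eqs. (1)–(5) concrete, the following is a minimal PyTorch sketch of the loss computation; the helper names, tensor shapes and layer weights are illustrative assumptions, not the authors' implementation.

```python
import torch

def gram(F):                      # F: (C, N) vectorized feature maps at one layer
    return F @ F.t()              # Eq. (2): G[i, j] = sum_k F[i, k] * F[j, k]

def content_loss(F, P):           # Eq. (1)
    return 0.5 * ((F - P) ** 2).sum()

def layer_style_loss(F, A_gram):  # Eq. (3); C plays the role of N_l, N of M_l
    C, N = F.shape
    return ((gram(F) - A_gram) ** 2).sum() / (4 * C ** 2 * N ** 2)

def total_loss(F_c, P_c, style_pairs, weights, alpha=1.0, beta=1e3):
    # Eqs. (4)-(5): weighted per-layer style loss plus content loss;
    # alpha and beta are the content/style trade-off hyper-parameters.
    style = sum(w * layer_style_loss(F, A) for (F, A), w in zip(style_pairs, weights))
    return alpha * content_loss(F_c, P_c) + beta * style
```

Optimizing X then amounts to repeatedly backpropagating this loss into the pixel tensor, for example with torch.optim.Adam([X]).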


Fig. 3 Overview of the experimental methodology

3 Elements of Experimental Methodology

3.1 Dataset

Since this work includes style transfer on both video and audio, the input is a short video (about 1 min) downloaded from YouTube, called storm.mp4. For applying style to the frames of the video, we used The Starry Night by van Gogh. For imparting style to the audio in storm.mp4, we used an instrumental music piece, crescent.mp3 (Fig. 3).

3.2 Processing Technique

The most prominent deep neural networks for processing images are Convolutional Neural Networks (CNNs). Each layer of a CNN learns certain features of the input image; the output of each layer is called a feature map. In Fig. 4, at Layer 1, using 32 filters, the network may capture simple patterns, for example lines or edges, which are of immense importance to the network. As we move deeper to Layer 2, with 64 filters, the network starts to record more complex


features by using the edges detected in previous layer(s) to form corners (intersection points of edges) and then parts of objects. This process of recording different simple and complex features is called feature extraction.

Fig. 4 General scheme of a CNN
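To illustrate feature extraction of this kind, the sketch below pulls intermediate feature maps from a pre-trained VGG-19 via torchvision; the weights argument follows recent torchvision versions, and the layer indices are arbitrary examples rather than the layers used in this work.

```python
import torch
from torchvision.models import vgg19

vgg = vgg19(weights="IMAGENET1K_V1").features.eval()  # convolutional stack only

def feature_maps(img, layer_ids=(0, 5, 10, 19, 28)):
    """img: (1, 3, H, W) ImageNet-normalized tensor; returns selected layer outputs."""
    maps, x = {}, img
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_ids:
                maps[i] = x
    return maps
```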

3.3 Method for Audio Style Transfer

An approach similar to image style transfer (as described in Sect. 2) is used, with modifications for audio signals (see our implementation). An AlexNet-style architecture is trained with a smaller receptive field of 3 × 3, instead of the original filter sizes, to maintain resolution. Training is done on spectrograms of instrumental sounds with 3 × 3 convolution filters and 2 × 2 max-pooling; the network has six layers in total and is trained with the Adam optimizer. A video is a collection of frames displayed sequentially at a sufficiently fast rate, so we first extracted the frames of the video before applying style. Next, we applied our code to each frame and saved the results. The last step for video style transfer was to re-assemble the frames into a video. For audio style transfer, we extracted the audio from the original video, applied the style audio (crescent.mp3) to it, and saved the result. The final step was to combine the generated video and audio; for this we used the software ffmpeg, downloaded locally from https://www.ffmpeg.org/download.html. We used VGG-19 instead of VGG-16; VGG-19 is a 19-layer deep CNN trained on the Stanford Vision Lab's ImageNet dataset of about 14 million images and can detect higher-level features of an image. Schematic representations of VGG-16 and VGG-19 are shown in Fig. 5a, b respectively.
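The split/stylize/re-assemble pipeline described above can be sketched with ffmpeg driven from Python; all file names and the frame rate are illustrative assumptions.

```python
import subprocess

# Split the input video into individual frames.
subprocess.run(["ffmpeg", "-i", "storm.mp4", "frames/%05d.png"], check=True)

# ... stylize each frame with the image style-transfer code, saving to styled/ ...

# Re-assemble the stylized frames into a silent video.
subprocess.run(["ffmpeg", "-framerate", "25", "-i", "styled/%05d.png",
                "-c:v", "libx264", "-pix_fmt", "yuv420p", "styled.mp4"], check=True)

# Mux the separately stylized audio back in.
subprocess.run(["ffmpeg", "-i", "styled.mp4", "-i", "styled_audio.mp3",
                "-c:v", "copy", "-c:a", "aac", "-shortest", "final.mp4"], check=True)
```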


Fig. 5 Schematic representations of VGG-16 and VGG-19

4 Results and Analysis

The following example shows how our method works for the given input. The input and style images are shown in Figs. 6 and 7, respectively, and the output of our proposed method is shown in Fig. 8.

Fig. 6 Input


Fig. 7 Style image

4.1 Implementation Details

The major novelty of our work is stylizing video together with audio; existing approaches focus only on video styling without audio [4], or only on audio [7]. The potential applications of our work lie in the following domains:
1. Social communication: due to its eye-catching output, style transfer can be used on networking sites such as Facebook and Twitter. Comments by users can further be used to improve the algorithm.
2. Entertainment: domains such as gaming and animation may find style transfer useful.
3. Mobile applications: apps such as Prisma have been developed and gained wide popularity shortly after release.

4.2 Comparison with Some Previous Methods

Determining the quality of images is a largely subjective task. The most widely accepted method is how users rate the aesthetics of the result. Another approach


can be how fast the algorithm converges. Our work outperforms some existing works in visual plausibility; for example, Fig. 9 shows the output of one existing implementation that our code outperforms, while our result is shown in Fig. 8.

Fig. 8 Our result

5 Conclusion

In the present work we implemented an algorithm that applies style transfer not only to the visual part of a video but also to its audio part. Although the output of our work is visually impressive, it requires high computational power (for example, on a local computer it took 9 h 30 min for just 85 iterations, which is clearly not real-time). Hence we turned to Google Colaboratory, which significantly sped up execution with Google's GPUs as the runtime type. Potential future work for this project includes:


1. The duration of the video and the audio must be the same, so that both conclude at the same time in the final resultant video. The trade-off between time efficiency and output quality can be adjusted.
2. Ensuring that in no case does the style transfer introduce geometric distortion (e.g., a window pane in an image should remain a window pane and should not get distorted when applying style transfer).
3. The quality of the audio may be improved further.
4. In our final output the frames appear to be cropped; this can be taken care of.

Fig. 9 Output of an existing method

References
1. Huang X, Liu MY, Belongie S, Kautz J (2018) Multimodal unsupervised image-to-image translation. In: Proceedings of the European conference on computer vision (ECCV), pp 172–189
2. Karayev S, Trentacoste M, Han H, Agarwala A, Darrell T, Hertzmann A, Winnemoeller H (2014) Recognizing image style. In: Proceedings of British machine vision conference. BMVA Press
3. Sun T, Wang Y, Yang J, Hu X (2017) Convolution neural networks with two pathways for image style recognition. IEEE Trans Image Process 26(9):4102–4113
4. Leon AG, Ecker AS, Bethge M (2016) A neural algorithm of artistic style. J Vision 16(12):326
5. Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: European conference on computer vision. Springer, pp 818–833
6. Princeton University homepage. https://www.cs.princeton.edu/courses/archive/spring18/cos598B/public/projects/LiteratureRview/COS598B_spr2018_NeuralStyleTransfer.pdf. Last accessed 2020/4/14
7. Stanford homepage. https://nips2017creativity.github.io/doc/Neural_Style_Spectograms.pdf. Last accessed 2020/4/14
8. Grinstein E, Duong ND, Ozerov A, Pérez P (2018) Audio style transfer. In: 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 586–590
9. Semantic Scholar homepage. https://www.semanticscholar.org/paper/Autoencoder-Based-Architecture-For-Fast-%26-Real-Time-RamaniKarmakar/c2788e40f0cdd81b352cdc9a748675ec061c03e9. Last accessed 2020/4/14
10. Tomczak M, Southall C, Hockman J (2018) Audio style transfer with rhythmic constraints. In: Proceedings of international conference on digital audio effects, Portugal, pp 45–50

Chapter 49

Development of Antennas Subsystem for Indian Airborne Cruise Missile
Ami Jobanputra, Dhruv Panchal, Het Trivedi, Dhyey Buch, and Bhavin Kakani

1 Introduction

Advances in wireless communication and the ever-increasing number of navigational services led to the invention of the microstrip antenna in the 1950s. In the last two decades, there has been enormous growth in antenna design for its promising applications in a wide range of wireless communications, incorporating global positioning systems and real-time device tracking [1, 2]. Airborne antennas must be designed to comply with the operational requirements while providing pattern characteristics that support reliable communication with the ground station. The various airborne antennas located on the same vehicle must communicate continuously with the station while providing sufficient RF isolation from each other [3]. GPS is openly available to anyone carrying a GPS receiver. In 2010, India developed its own domestic navigation system, the Indian Regional Navigation Satellite System (IRNSS). IRNSS is based on a constellation of four GSO (geosynchronous) and three geostationary satellites. It provides independent position, navigation and timing services over the Indian territory and the surrounding region up to 1500 km beyond the Indian geographical boundary [4]. The IRNSS communication subsystem consists of mainly three segments: the space segment, the ground segment and the user segment. The ground segment consists of a central navigation center and control facility, and is responsible for the maintenance, operation and controllability of the IRNSS satellites. The IRNSS navigation processing unit performs precise orbit determination and onboard clock characterization, and generates correction messages and commands to be uplinked to the satellites via the telemetry tracking and telecommand stations (i.e., through the IRNSS uplink stations) [5].

A. Jobanputra · D. Panchal · H. Trivedi · D. Buch · B. Kakani (B)
Institute of Technology, Nirma University, Ahmedabad, India
e-mail: [email protected]


This paper proposes microstrip patch antennas (MSAs) for three applications, namely altimetry, satellite telemetry and IRNSS reception. The patch antenna is the most suitable antenna for this purpose because of its low profile, easy installation, and integration with other subsystems. Out of the various geometries available, the one used in this article is the rectangular MSA (RMSA) geometry, because its symmetry allows easy analysis of radiation and other parameters. Standard design parameters are used for the RMSA [6].
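For reference, the standard transmission-line-model relations used to size a rectangular patch (as given in handbooks such as [6]) can be sketched as follows; the substrate height in the example is an assumed illustrative value, not a dimension taken from this paper.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def rmsa_dimensions(f0, eps_r, h):
    """Transmission-line-model estimates: patch width W, effective permittivity,
    fringing length extension dL, and patch length L for resonance at f0."""
    W = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    L = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return W, L

# Example: altimeter patch at 4.29 GHz on RT/Duroid 5880 (eps_r = 2.2),
# with an assumed substrate height of 1.575 mm for illustration.
W, L = rmsa_dimensions(4.29e9, 2.2, 1.575e-3)
```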

2 Radio Altimeter

A radio altimeter is used to measure height above ground, required during approach, for terrain awareness, etc. Its frequency band of operation is 4.2–4.4 GHz. Generally, an accuracy of about 0.9 m (3 ft) is considered satisfactory [7, 8]. For the proposed design, an air gap of 2 mm is introduced between the substrate, made of RT/Duroid 5880 (ε_r = 2.2), and the ground plane (the upper face of the lower substrate) to increase the gain of the antenna. In the fabricated design, to support the substrate above the ground plane, an approximately 2 mm foam sheet is glued between them along the inside edges. Figure 1 shows the simulated and fabricated design structure.

Fig. 1 Shows radio altimeter antenna design (simulated) and (fabricated)

2.1 Radio Altimeter Results

The radio altimeter antenna design has a simulated gain of 9.19 dB and a simulated bandwidth of 170 MHz (4.15–4.36 GHz), centered around 4.29 GHz, with an


approximate beamwidth of 35°. The bandwidth is 30 MHz less than required, but the simulated gain falls within the required range (Figs. 2 and 3; Table 1).

Fig. 2 Comparison of return loss between simulated and fabricated radio altimeter antenna

Fig. 3 Shows radiation pattern w.r.t. ‘ø’ for a radio altimeter

Table 1 Comparison between specifications and simulation results for radio altimeter antenna

Design parameters   | Design specification [13] | Simulation result
Operating frequency | 4.29 GHz                  | 4.29 GHz
Bandwidth required  | 200 MHz                   | 170 MHz (4.15–4.36 GHz)
Gain                | 8–13 dBi                  | 9.19 dBi
Return loss         | < −10 dB                  | −14 dB


3 Radio Telemetry

Telemetry means taking measurements from a remote location. These measurements can be anything from internal temperature, velocity, currents and voltages in various onboard instruments, to health monitoring of subsystems. Here the antenna may be mounted on a gyro-stabilized mount inside the missile if there is considerable banking and yawing, to keep the bore-sight of the antenna facing upwards towards the satellite. There is a trade-off here: the bore-sight beam cannot be kept too narrow, else it would require a highly sensitive stabilization mount; nor can it be too broad, else the average power level would decrease through wastage in undesirable directions [9]. Square notches are introduced on the vertically opposite diagonals of a square microstrip patch design to achieve the desired circular polarization bandwidth [10]. Figure 4 shows the simulated design and fabricated structure.

3.1 Radio Telemetry Results

The simulated design of the telemetry antenna achieved a gain of 7.71 dB over a bandwidth of 230 MHz (3.1–3.24 GHz) centered around 3.15 GHz, which is better than our required gain of 5 dBi. The fabricated antenna has its center frequency exactly at 3.15 GHz with a return loss of −17 dB, which is better than the simulated return loss of −12 dB (Figs. 5 and 6; Table 2).

Fig. 4 Shows radio telemetry antenna design (simulated) and (fabricated)


Fig. 5 Comparison of return loss between simulated and fabricated radio telemetry antenna
Fig. 6 Shows radiation pattern w.r.t. 'ø' for a radio telemetry antenna

Table 2 Comparison between specifications and simulated results for radio telemetry antenna

Design parameters   | Design specification [13] | Simulation result
Operating frequency | 3.15 GHz                  | 3.15 GHz
Bandwidth required  | 200 MHz                   | 230 MHz (3.1–3.24 GHz)
Gain                | 4–5 dBi                   | 7.71 dBi
Polarization        | Circular                  | Circular
Return loss         | < −10 dB                  | −12 dB


Fig. 7 Shows IRNSS L5 band antenna design (simulated) and (fabricated)

4 IRNSS Antenna Design (L5 Band)

The IRNSS receiver antenna (NavIC, Navigation with Indian Constellation) will be used to supplement the INS (Inertial Navigation System) for en-route and terminal guidance with a location accuracy of approximately 2 m. The frequencies used fall into the S and L5 bands [5, 11]. Square notches are introduced on the vertically opposite diagonals of a square microstrip patch design to achieve the desired circular polarization bandwidth. FR4 is used as the substrate, with a 4.8 mm thickness to increase the bandwidth of the antenna and to keep the design cost low. Figure 7 shows the simulated and fabricated IRNSS antenna structure on the FR4 substrate.

4.1 IRNSS L5 Band Results

The simulated design of the L5-band IRNSS antenna has a gain of 20 dBi and a more than satisfactory bandwidth of 40 MHz, compared to the required bandwidth of 24 MHz. The return loss of the simulated and fabricated antennas is nearly the same, and both designs meet the specification criteria (Figs. 8 and 9; Table 3).


Fig. 8 Comparison of Return Loss between simulated and fabricated IRNSS L5 Band antenna

Fig. 9 Shows horizontally polarized radiation pattern of IRNSS L5 Band antenna

5 IRNSS Antenna Design (S Band)

The frequency of operation for the S-band IRNSS antenna is 2.4 GHz. A dual-band (L5 and S) antenna is used to mitigate the effect of interference caused by the earth's ionosphere on the IRNSS signal. The effect of the ionosphere on this RF signal is


Table 3 Comparison between specifications and simulated results for IRNSS L5 antenna

Design parameters   | Design specification [13]       | Simulation result
Operating frequency | 1.176 GHz                       | 1.176 GHz
Bandwidth required  | 24 MHz                          | 40 MHz (1.16–1.2 GHz)
Gain                | 15 dBi                          | 20 dBi
Polarization        | Right hand circularly polarized | Right hand circularly polarized
Return loss         | < −10 dB                        | −22.5 dB

a frequency-dependent phase shift, commonly known as group delay, caused by the dispersive behavior of the environment at such heights. The effect of Faraday rotation also motivates the use of an antenna with circular polarization. These adversities can be effectively reduced by using two widely spaced frequency bands [12]. Square notches are introduced on the vertically opposite diagonals of a nearly square microstrip patch, which causes orthogonal modes to interfere in the far field and yields circular polarization. A diagonal slit was also realized but could not fulfill the required specifications. The location of the feed determines the right- or left-handedness of the circular polarization. RT/Duroid of 6.4 mm thickness is used to increase the bandwidth of the antenna and to achieve better radiation characteristics. Figure 10 shows the design of the antenna for the S band.

5.1 IRNSS S Band Results

The S-band antenna for the IRNSS application gives quite promising results. The simulated return-loss bandwidth is 340 MHz (2.21–2.55 GHz) with a gain of 20 dBi. The simulation also achieves an axial-ratio bandwidth of 80 MHz (2.48–2.56 GHz), thus

Fig. 10 Shows IRNSS S band antenna design (simulated) and (fabricated)


satisfying all the requirements. The VSWR for the S band is found to be 1.72, which is reasonably below the requirement of 2 (Figs. 11 and 12; Table 4).

Fig. 11 Comparison of return loss between simulated and fabricated IRNSS S band antenna

Fig. 12 Shows horizontally polarized radiation pattern of IRNSS S band antenna


Table 4 Comparison between specifications and simulated results for IRNSS S band antenna

Design parameters   | Design specification [13]       | Simulation result
Operating frequency | 2.4 GHz                         | 2.4 GHz
Bandwidth required  | 200 MHz                         | 340 MHz (2.21–2.55 GHz)
Gain                | 10 dBi                          | 20 dBi
Polarization        | Right hand circularly polarized | Right hand circularly polarized
Return loss         | < −10 dB                        | −20 dB

6 Conclusion

Antennas for radio altimetry, satellite telemetry and the IRNSS receiver were designed (simulated and fabricated). The radio altimeter antenna, designed at 4.29 GHz, includes an air gap between the substrate and the ground plane to improve the gain; it has a simulated gain of 9.19 dB and a simulated bandwidth of 170 MHz. For the fabricated altimeter antenna, a bandwidth of 420 MHz was obtained and the return loss improved compared to the simulated design. The simulated telemetry antenna achieved a gain of 7.71 dB with a bandwidth of 230 MHz; the fabricated telemetry antenna has a bandwidth of 284 MHz. Two antennas operating in the L5 and S bands were designed and fabricated for the IRNSS receiver application. For the L5-band antenna, the simulated design had a gain of 20 dBi and a bandwidth of 40 MHz; after fabrication, the return loss improved by 3 dB with a bandwidth of 80 MHz. The S-band antenna has a simulated bandwidth of 340 MHz with a gain of 20 dBi, and the radiation pattern of the fabricated antenna closely resembles the simulated one. Thus, antennas for three individual airborne applications were designed, fabricated and studied. The overall summary of the results is given in Table 5.

Table 5 Summary of simulated and measured results for all designed antennas

Antenna parameter  | Altimeter | Telemetry | IRNSS (L5 band) | IRNSS (S band)
Desired frequency  | 4.29 GHz  | 3.15 GHz  | 1.176 GHz       | 2.4 GHz
Measured frequency | 4.29 GHz  | 3.15 GHz  | 1.176 GHz       | 2.4 GHz
Simulated RL       | −14 dB    | −12 dB    | −22.5 dB        | −20 dB
Measured RL        | −20 dB    | −16 dB    | −25.5 dB        | −22 dB
Simulated BW       | 170 MHz   | 230 MHz   | 40 MHz          | 340 MHz
Measured BW        | 420 MHz   | 284 MHz   | 80 MHz          | 472 MHz
Gain               | 9.19 dBi  | 7.71 dBi  | 20 dBi          | 20 dBi
Polarization       | Linear    | RHCP      | RHCP            | RHCP


References
1. So KK, Wong H, Luk KM, Chan CH (2015) Miniaturized circularly polarized patch antenna with low back radiation for GPS satellite communications. IEEE Trans Antennas Propag 63(12):5934–5938
2. Rao KS, Jahagirdar DR, Ramakrishna D (2017) Compact broadband asymmetric slit circularly polarized microstrip patch antenna for GPS and GLONASS applications. In: IEEE international conference on antenna innovations and modern technologies for ground, aircraft and satellite applications. IEEE, pp 1–3
3. Chen X, Yang L, Zhao JY, Fu G (2016) High-efficiency compact circularly polarized microstrip antenna with wide beamwidth for airborne communication. IEEE Antennas Wirel Propag Lett 15:1518–1521
4. Majithiya P, Khatri K, Hota JK (2011) Indian regional navigation satellite system. Inside GNSS
5. Thangadurai N, Vasudha MP (2016) A review of antenna design and development for Indian regional navigational satellite system. In: Proceedings of international conference on advanced communication control and computing technologies. IEEE, pp 257–264
6. Balanis CA (2007) Modern antenna handbook. Wiley
7. RamaDevi K, Prasad AM, Rani AJ (2012) Design of a pentagon microstrip antenna for radar altimeter application. Int J Web Semant Technol 3(4):31
8. Sudhakar A, Prakash MS, Satyanarayana M (2018) Compact microstrip antenna for radar altimeter applications. In: IEEE Indian conference on antennas and propagation (InCAP). IEEE, pp 1–3
9. Pant N (1986) ISRO telemetry, tracking and command network. In: DFVLR colloquium about joint projects within the DFVLR/ISRO cooperation, pp 113–120
10. Pozar DM, Schaubert DH (2010) Microstrip antennas. Wiley, New Jersey
11. Karthick M, Kashwan KR (2015) Design of IRNSS receiver antennae for smart city applications in India. In: Global conference on communication technologies. IEEE, pp 277–280
12. Khaleeq MM, Saadh AWM, Ali T, Reddy BS (2017) A corner truncated circularly polarized antenna for IRNSS application. In: Proceedings of the international conference on smart technology for smart nation (SmartTechCon). IEEE, pp 148–152
13. Capelli M (1954) Radio altimeter. Trans IRE Prof Group Aeronaut Navig Electron 2:3–7

Chapter 50

A Literature Survey on LEACH Protocol and Its Descendants for Homogeneous and Heterogeneous Wireless Sensor Networks
Anish Khan and Nikhil Marriwala

1 Introduction

WSN has become one of the most interesting research areas nowadays, owing to low-price, low-power, flexible wireless sensor nodes that sense the desired information and forward the data to the controlling station for processing. WSNs are widely utilized in everyday life because of their numerous applications in healthcare, the military battlefield, surveillance, intelligent highways, environment monitoring, etc. [1]. When such applications are implemented in a real-life environment, the sensors consume a lot of energy in sensing, processing and transferring data. Since energy consumption is the major concern in WSNs, how to effectively utilize the limited power and improve system lifetime is the bottleneck of the whole scenario [2]. The approaches introduced in the WSN literature to minimize energy drainage and enhance the lifetime of the network either confine the amount of information to be transmitted or limit the distance between the sink and the source node [3]. Hierarchical protocols address this concern: their fundamental quality is to cluster the nodes, with the nodes of a specific group communicating with their group head, and the group head conveying the aggregated details to the controlling base station [3]. The paper is structured as follows: Sect. 2 briefs the basic LEACH protocol; Sect. 3 sums up the work carried out by other researchers to amplify the potential of basic LEACH; Sect. 4 describes the successors of the LEACH protocol;

A. Khan (B) · N. Marriwala
University Institute of Engineering and Technology, Kurukshetra University, Kurukshetra, Haryana, India
e-mail: [email protected]


Sect. 5 analyzes the comparison between the descendants of the basic LEACH routing protocol; finally, the conclusion is presented.

2 LEACH Protocol

The "Low-Energy Adaptive Clustering Hierarchy" (LEACH) protocol, proposed by Heinzelman et al. [4], is a popular protocol with a clustering hierarchy. The prime benefit of this routing protocol is that it improves the network lifetime by diminishing the energy utilization of the whole wireless sensor network. LEACH organizes sensor nodes into clusters that communicate with their respective cluster heads: the sensor nodes sense the details of interest and forward them to their cluster head, and the cluster head assembles the information from all nodes, then aggregates and compresses it for transmission to the base station [5]. LEACH operates in rounds, and every round includes two stages: the set-up stage and the steady stage. In the set-up stage, also known as the initialization phase, cluster head selection and cluster formation take place. Every node produces a random number in the range 0 to 1; if the random number produced by a node is below the set threshold T(n), the node announces to all nodes that it is the cluster head for the current round [6]. T(n) is given as:

T(n) = \begin{cases} \dfrac{p}{1 - p\left(r \bmod \frac{1}{p}\right)}, & n \in G \\ 0, & \text{otherwise} \end{cases}    (1)

where
p is the desired percentage of cluster heads,
r is the current round,
G is the set of nodes that have not been selected as CHs in the last 1/p rounds, and
T(n) is the threshold value.
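A minimal sketch of this election rule (Eq. (1)) in Python follows; P = 0.05 is an illustrative choice of the desired CH percentage, not a value from the papers surveyed.

```python
import random

P = 0.05  # desired fraction of cluster heads (illustrative value)

def threshold(r, in_G):
    """T(n) from Eq. (1): election probability for a node in round r."""
    if not in_G:  # node served as CH within the last 1/P rounds
        return 0.0
    return P / (1 - P * (r % int(1 / P)))

def elect_cluster_heads(nodes, r):
    # nodes: dict node_id -> True if the node is still eligible (in G)
    return [n for n, in_G in nodes.items()
            if random.random() < threshold(r, in_G)]
```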

In the steady stage, data transfer begins. Cluster head nodes create a TDMA schedule that tells the nodes present in the cluster when to start communicating their data (a toy slot-assignment sketch is given below). Once the TDMA schedule is fixed and broadcast in the respective cluster, the nodes transmit their sensed data in the allocated transmission slots [4]. A basic diagram of the LEACH protocol is shown in Fig. 1.
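The TDMA plan can be pictured with the following toy slot-assignment sketch; it is illustrative only, since real LEACH implementations negotiate the schedule over the radio.

```python
def tdma_schedule(members, frame_time):
    """Give each cluster member an equal transmission slot within one frame."""
    slot = frame_time / len(members)
    return {node: (i * slot, (i + 1) * slot) for i, node in enumerate(members)}

# Example: five member nodes sharing a 1-second frame.
schedule = tdma_schedule(["n1", "n2", "n3", "n4", "n5"], frame_time=1.0)
```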


Fig. 1 Clustered framework (BS, cluster heads and sensor nodes)

3 Related Work

Many researchers have already worked in the field of WSN. Nikhil Marriwala et al. [26] summarize the working of the basic LEACH protocol and discuss some advanced versions of it in brief. Mohammad Najmud Doja et al. [3] conclude that in H-LEACH, sharing location information using GPS is an important factor in improving energy efficiency. Jie Chen et al. [6] introduce I-LEACH, based upon residual and balanced energy, which keeps the entire network working for a long time. Xu-Xing Ding et al. [7] propose a dynamic-K-value LEACH to optimize the cluster structure, dividing the network into even and uneven parts. Tanushree Agarwal et al. [8] present a formula for extending the duration of the network. Vishal Kumar Arora et al. [9] make a survey that gives a bird's-eye view of different kinds of protocols; the main focus is on hierarchical protocols like LEACH and its variants, for example C-LEACH, MODLEACH, Stable Election Protocol (SEP), MH-LEACH, T-LEACH and V-LEACH, while other hierarchical routing protocols like PEGASIS, TEEN and APTEEN are also discussed. Sudhanshu Tyagi et al. [10] give a detailed survey on clustering and routing strategies based on the LEACH protocol for wireless sensor networks.

4 Descendants of LEACH Routing Protocol: Overview

Ben Alla et al. [11] introduce IB-LEACH for heterogeneous WSNs, which decreases the probability of node failure and makes the whole network reliable and stable, with an improved lifetime. Some higher-power-level nodes, known as NCG (normal, cluster, gateway) nodes, are added to the network; these perform the role of cluster head, amassing data from the respective cluster nodes and forward


it to "gateways", which require less energy to communicate with the BS. As the proposed protocol is a descendant of LEACH, it also works in rounds, with two main phases [26]. In the set-up phase, gateways are elected using a gateway algorithm and groups are framed with the help of a group selection algorithm. In the steady phase, details are passed from cluster nodes via cluster heads to gateways, which use minimum energy to transmit data to the BS. Finally, a comparison between LEACH, SEP and IB-LEACH shows higher-quality results. Guijum Chen et al. [12] introduce an improved LEACH algorithm named LEACH-HEM, based upon the energy heterogeneity of nodes that have identical beginning energy, with multi-hop transmission of messages among the cluster heads (CHs). A new threshold introduces the concepts of the current energy and average energy of nodes; hence, nodes with higher residual energy have more chances of being appointed as cluster head. The cluster head election equation is described as:

\bar{E}(r) = \frac{1}{N} \sum_{i=1}^{N} E_i(r)    (2)

Here, \bar{E}(r) is the average network energy in round r and E_i(r) is the current energy of node i in round r. With the optimal percentage of CHs, the election probability becomes:

p_i = p_{opt} \left( 1 - \frac{\bar{E}(r) - E_i(r)}{\bar{E}(r)} \right) = p_{opt} \, \frac{E_i(r)}{\bar{E}(r)}    (3)

Here, p_{opt} is the optimal percentage of CHs and p_i = 1/n_i is the average probability of becoming CH.
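A small sketch of the energy-aware election probability of Eqs. (2)–(3) follows; p_opt = 0.05 is an illustrative value rather than one taken from [12].

```python
def ch_probability(E_i, energies, p_opt=0.05):
    """Energy-aware CH election probability of Eq. (3).
    E_i: current energy of the node; energies: current energies of all N nodes."""
    E_avg = sum(energies) / len(energies)  # Eq. (2): average network energy
    return p_opt * E_i / E_avg             # higher residual energy -> higher chance
```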

In this scenario, two kinds of propagation model are used: free space (ε_fs) and multi-path fading (ε_mp). Finally, the result of the suggested algorithm is contrasted with LEACH, LEACH-DCHS and ALEACH. Naveen Kumar et al. [13] have structured a protocol named Universal LEACH (U-LEACH), in which the selection of CHs relies on the initial energy and remaining energy of nodes. This protocol mixes the concepts of the PEGASIS and I-LEACH protocols: the exchange of messages within a cluster is done with the help of a chain, which begins from the node farthest from the base station. The protocol uses the concept of a Master Cluster Head (MCH); CHs transmit aggregated data to the MCH, which further sends it to the BS. Nodes that are far from the BS send their sensed information to neighbour nodes nearer to them, and in the same fashion the information is sent to the BS. Chain formation is a very effective mechanism for energy saving, because nodes need not communicate directly with the BS.


Manzoor et al. [14] introduce Quadrature-LEACH (Q-LEACH) for homogeneous wireless sensor networks. This routing protocol shows eye-catching results in terms of stability and network lifetime. As the network is homogeneous, the authors subdivide the whole network into four equal divisions (a1, a2, a3, a4) for better coverage. Two algorithms are then applied: algorithm 1 defines the CH selection procedure, and algorithm 2 defines the association of nodes with their respective cluster heads. In algorithm 1, a node picks a random number in the range 0 to 1; if the chosen number is lower than the threshold value T(n) and the desired number of CHs has not yet been reached, the node becomes a CH. Algorithm 2 associates nodes with clusters based on the Received Signal Strength Indicator (RSSI). Versha Sharma et al. [15] investigate the effectiveness of single-hop and multi-hop LEACH protocols with heterogeneous nodes. In the advanced single-hop LEACH protocol, the authors take heterogeneous nodes with different energy levels, named standard nodes and high-energy nodes, the power level of the high-energy nodes being higher than that of the standard nodes. Sensor nodes having more remaining energy become the CH for the particular round, whether standard or high-energy. The CH collects data, aggregates it, and forwards it to the BS; the rest of the operation is the same as in LEACH, and the threshold value is calculated as in Eq. (1). In the advanced multi-hop LEACH protocol, the transmission of details is made by choosing the nearest neighbour node. Multi-hop LEACH prolongs the lifetime of the network through an optimal number of clusters and an optimal number of hops, avoiding overlapping and collisions. The energy depletion in a CH is:

E_{CH} = \left( \frac{N}{k} - 1 \right) L \cdot E_{elec} + \frac{N}{k} L \cdot E_{proc} + L \cdot E_{elec} + L \cdot \varepsilon_{mp} \cdot D_{BS}^{4}    (4)

where E_{elec} is the energy for the electronic circuitry, L the packet size, D_{BS} the distance between the CH and the BS, and \varepsilon_{mp} the multipath energy coefficient.

The equation for the optimum number of hops is:

H = \frac{Mk}{2T_x} \left( \frac{k-1}{k} \right)    (5)

The average energy is calculated as:

E_{avg}(r) = \frac{1}{N} E \left( 1 - \frac{r}{R} \right)    (6)
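A short sketch of the energy bookkeeping of Eqs. (4) and (6), following the reconstructions given above; all default constants are placeholder values rather than figures from [15].

```python
def ch_energy(N, k, L, E_elec=50e-9, E_proc=5e-9, eps_mp=1.3e-15, d_bs=100.0):
    """Per-round CH energy of Eq. (4); default constants are illustrative only."""
    return ((N / k - 1) * L * E_elec   # receiving from cluster members
            + (N / k) * L * E_proc     # aggregating/processing the data
            + L * E_elec               # transmitter electronics
            + L * eps_mp * d_bs ** 4)  # multipath amplifier toward the BS

def avg_energy(E_total, N, r, R):
    """Average per-node energy estimate of Eq. (6) after r of R rounds."""
    return (E_total / N) * (1 - r / R)
```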


Amr Amwary et al. [16] introduce a modified LEACH protocol (M-LEACH) for heterogeneous networks. The main change is made in the set-up phase: only the advanced nodes become CHs for a given round, while the steady phase, in which data transmission occurs, remains the same. For the energy calculation, the radio energy model is adopted. The energy exhausted by the transmitter is:

E_{Tx}(k, d) = E_{Tx\text{-}elec}(k) + E_{Tx\text{-}amp}(k, d) = k \cdot E_{elec} + k \cdot E_{amp} \cdot d^2    (7)

where k is the packet length in bits, E_{Tx} the energy consumed in transmitting a k-bit message, and d the distance. The threshold value is calculated as:

T(n) = \begin{cases} \dfrac{p \cdot a}{1 - a \cdot p \left( r \bmod \frac{1}{p \cdot a} \right)}, & n \in G \\ 0, & \text{otherwise} \end{cases}    (8)

where a = 1 if the node is normal and a = 5 if the node is advanced.

Shweta Gupta et al. [17] introduce a new approach for homogeneous and heterogeneous WSNs, named IDE-LEACH. IDE-LEACH utilizes the initial energy, the distance between cluster nodes and the sink node, and the remaining energy of a node for CH determination. The network model assumes a 100 × 100 m area, and both networks consist of 100 sensors. In the homogeneous model the sensors have the same energy level, while in the heterogeneous model 10% of the nodes have higher power. The radio energy model is used. As future work, the heterogeneous and homogeneous networks can be merged to enhance the existing design. Ahmed al-Baz et al. [18] design a new protocol entitled NR-LEACH, in which the energy load is scattered across the group of nodes with the help of a node-rank algorithm. The entire activity is partitioned into cycles: in the set-up phase CHs are elected, and in the steady phase aggregated data is transmitted between the CHs and the BS. The appointment of a CH depends on the node-rank calculation; the weight of a node is affected by three main elements, i.e., the acquired signal quality, the leftover energy of every node in the network, and the number of connection associations among different nodes. The Node Rank score is calculated by:

NR(n_i) = PO(n_i) \cdot \alpha \cdot \sum_{j} \frac{NR(n_j)}{d_{out}^{ji}} + (1 - \alpha) \cdot \frac{1}{\sum_{k \in NH} d_{out}^{jk}}    (9)


Here, d_{out}^{ji} is the distance of the out-link from node j to node i, PO(n_i) the current energy of node i, and α the damping ratio. Performance is compared with the classical LEACH protocol, LEACH-E and I-LEACH. Mohammad Najmud Doja et al. [3] present a hybrid approach to LEACH (H-LEACH). The authors modify the fundamental LEACH protocol by utilizing the area in which the sensor nodes are deployed and its coordinates, on the basis of which clustering is done. In this approach the cluster is fixed, but the cluster head is chosen dynamically: first the area is divided into zones according to the parameters, and then one node from each zone is picked as group head, as in traditional LEACH. Two assumptions are made in designing H-LEACH: (1) nodes deployed in a zone know their area coordinates, which are shared with the BS; (2) the energy exhausted by the CH to move the assembled data received from the nodes to the BS is not taken into account, because it would affect the protocol in the long term. Amer O. Abu Salem et al. [5] recommend an Enhanced LEACH protocol, identifying the CH according to the distance between node and CH and from CH to BS, to decrease the power consumption of the CH node. The main change is made in the set-up stage: after a node selects the CH through which the distance to the BS is minimum, it sends a message to that CH and becomes its member; the steady state remains the same. Xu-Xing Ding et al. [7] introduce a reformed cluster construction method known as Dynamic K-value LEACH (DK-LEACH). The main motive of this protocol is to diminish the energy waste caused by the unequal distribution of energy in WSNs; the remaining energy of the CH is considered, the node with more residual energy is elected as CH, and DK-LEACH operates in set-up and steady phases. Jie Chen et al. [6] propose a clustering algorithm named Improved LEACH (I-LEACH), in which the choice of CH is based on the unused energy of a node and its distance from other CHs; a comparison is made with basic LEACH and P-LEACH. In the improved algorithm, advances are made in cluster head selection and data communication: (1) to lessen energy utilization, every node includes its remaining energy in the packet and forwards it to the BS; (2) if a node is near a CH, its probability of becoming a CH is very low, with the distance threshold set to d0/2; (3) data communication between node and cluster head; (4) data communication among the CHs. Jyoti Bhola et al. [19] design an advanced variant of the LEACH protocol using a genetic algorithm and its fitness function; the energy consumption rate is improved by 17.39% of the total. In this paper, the main key aspect is the


amount of CH’s present in the scenario. When the amount of CH is less, they have to cover more areas, causes more energy consumption. GA selects the CH according to their remaining energy. Ravi Kishore Kodali et al. [20], DD-TL-LEACH is invented. The simulation is done in NS-3 simulator. Comparison is done in between Direct Transmission, MTE, LEACH, Directed Diffusion protocol. To beat the constraints of the LEACH a few enhancements are done by introducing the TL-LEACH, DD-LEACH, M-LEACH. The proposed protocol is the mixture of the TL-DD-LEACH. Abdul Razaque et al. [21], In this paper, the author designed a dynamic protocol with the help of PEGASIS. The author named this protocol as PEGASIS-LEACH Protocol (P-LEACH). The author combines basic LEACH with chain cased construction of PEGASIS. In PEGASIS protocol, a string of nodes are constructed and a chief node is chosen randomly. Chief node assembles data, fuse and forward it to BS. PEGASIS reduces power consumption, reduces traffic overload and builds the system lifetime and cost-effectiveness. Mustafa A et al. [22], analysis new protocol that uses the basic LEACH protocol with the three layers. Every layer has individual CH’s. Layers are introduced with diminishing the separation between the sink node and the CH’s. Layer three came in play if the distance is larger as compared to the threshold value. The author named it as the LEACH Three (LEACH-T). The first layer nodes gather data from the sensor nodes. Further, second layer CH gathers information from first layer CH & forward it to BS. The third layer CH’s are used when the distance between the second layer CH & BS is more. The proposed protocol works in the three phases. Chinchu T Sony et al. [23], In this paper, the author introduces some improvements in the basic leach protocol for making the convention substantially more energy proficient. The advancements are done in the cluster head selection, assigning the TDMA schedule. Some new parameters are introduced i.e. node energy level for the CH selection. The primary thought is to dodge the sensor node with lessen remaining energy as the CH. If the altered limit esteem is more than the arbitrarily produced number between the extents 0 to 1, that specific node can be proclaimed itself as the CH. Mu Tong et al. [24], the author has presented a new version known as LEACH-B (LEACH- Balanced) protocol. In this proposed algorithm the author concern basic drawbacks in the essential LEACH protocol i.e. selection of group heads is done without considering lingering vitality of hubs. Group chief selection is done in rounds. In the first round, group chief choice is identical as in basic LEACH protocol. The main change is brought in the second round. To keep up the steady number of group heads n * p, where p signifies the ideal level of the group heads and n is meant as the aggregate number of hubs. Leftover energy of hubs is advertised in the network through advertisement messages from cluster heads. Saravanakumar et al. [25], proposed algorithm in which sensor nodes form clusters and the process of group chief selection depends on the enduring vitality of the nodes. Hub planning strategy is maintained in each group of the WSN’s. In the hub arrangement plan, the concept of alive mode and snooze mode is introduced; it expands the vitality effectiveness of the system up to 50%.


Nikhil Marriwala [26] describes the meaning of routing protocols and how various kinds of routing protocols can be classified on the basis of their network structure and mode of operation, such as flat routing, location-based routing and hierarchical routing. Moreover, there are many design constraints for routing in WSNs; the study gives an idea of these constraints and how they can be overcome. In addition, the study focuses on the LEACH protocol and various kinds of hybrid LEACH (H-LEACH) protocols; a variety of related protocols such as PEGASIS, HEED and V-LEACH are discussed.

4.1 Performance Comparison Between LEACH Protocol and Its Descendants

See Table 1.

5 Conclusion

Extending the network lifetime by efficiently utilizing energy is the bottleneck in wireless sensor networks. As the LEACH protocol, the first and most widely used energy-saving hierarchical protocol for prolonging network life, has some limitations, many upgraded variants have been proposed to remove them. Therefore, a brief analysis of how to overcome the constraints of basic LEACH has been presented, and on the basis of this analysis a comparison is made to contrast the performance of the derived LEACH protocols, as shown in Table 1. The survey concludes that for a more energy-efficient and reliable network, we need to design systematic, structured and well-organized energy-saving protocols.

Table 1 Comparison between various improved LEACH protocols

Clustering routing protocol               | Year | Mobility | Energy efficiency | Self-organization | Hop count          | Homogeneous/heterogeneous   | Use of residual energy and distance b/w nodes
IB-LEACH                                  | 2010 | Fixed BS | Improved          | Yes               | Single hop         | Heterogeneous               | Yes
LEACH-B                                   | 2010 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
I-LEACH                                   | 2011 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
LEACH-HEM                                 | 2012 | Fixed BS | Improved          | Yes               | Multi-hop          | Heterogeneous               | Yes
U-LEACH                                   | 2012 | Fixed BS | Improved          | Yes               | Multi-hop          | Heterogeneous               | Yes
Q-LEACH                                   | 2013 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
Single-hop & multi-hop LEACH              | 2015 | Fixed BS | Improved          | Yes               | Single & multi-hop | Heterogeneous               | Yes
Modified LEACH & modified multi-hop LEACH | 2015 | Fixed BS | Improved          | Yes               | Single & multi-hop | Homogeneous                 | Yes
DD-TL-LEACH                               | 2015 | Fixed BS | Improved          | Yes               | Multi-hop          | Homogeneous                 | Yes
M-LEACH                                   | 2016 | Fixed BS | Improved          | Yes               | Single hop         | Heterogeneous               | Yes
P-LEACH                                   | 2016 | Fixed BS | Improved          | Yes               | Multi-hop          | Homogeneous                 | Yes
LEACH-T                                   | 2016 | Fixed BS | Improved          | Yes               | Multi-hop          | Homogeneous                 | Yes
IDE-LEACH                                 | 2017 | Fixed BS | Improved          | Yes               | Single hop         | Heterogeneous & homogeneous | Yes
DK-LEACH                                  | 2017 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
NR-LEACH                                  | 2018 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
H-LEACH                                   | 2018 | Fixed BS | Improved          | Yes               | Single hop         | Heterogeneous               | Yes
Enhanced LEACH                            | 2019 | Fixed BS | Improved          | Yes               | Single hop         | Homogeneous                 | Yes
LEACH with GA                             | 2019 | Fixed BS | Improved          | Yes               | Multi-hop          | Homogeneous                 | Yes


References
1. Guleria K, Verma AK (2019) Comprehensive review for energy efficient hierarchical routing protocols on wireless sensor networks. Wireless Netw 25(3):1159–1183
2. Zhang H, Li X, Fan X (2013) An optimal solution for round rotation time setting in LEACH. In: International conference on wireless algorithms, systems, and applications. Springer, Berlin, Heidelberg, pp 366–376
3. Gupta V, Doja MN (2018) H-LEACH: modified and efficient LEACH protocol for hybrid clustering scenario in wireless sensor networks. In: Next-generation networks. Springer, Singapore, pp 399–408
4. Heinzelman WR, Chandrakasan A, Balakrishnan H (2000) Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of the 33rd annual Hawaii international conference on system sciences. IEEE, pp 10
5. Salem AOA, Shudifat N (2019) Enhanced LEACH protocol for increasing a lifetime of WSNs. Pers Ubiquit Comput 23(5–6):901–907
6. Chen J (2011) Improvement of LEACH routing algorithm based on use of balanced energy in wireless sensor networks. In: International conference on intelligent computing. Springer, Berlin, Heidelberg, pp 71–76
7. Ding XX, Ling M, Wang ZJ, Song FL (2017) DK-LEACH: an optimized cluster structure routing method based on LEACH in wireless sensor networks. Wireless Pers Commun 96(4):6369–6379
8. Agarwal T, Kumar D, Prakash NR (2010) Prolonging network lifetime using ant colony optimization algorithm on LEACH protocol for wireless sensor networks. In: Recent trends in networks and communications. Springer, Berlin, Heidelberg, pp 634–641
9. Arora VK, Sharma V, Sachdeva M (2016) A survey on LEACH and other's routing protocols in wireless sensor network. Optik 127(16):6590–6600
10. Tyagi S, Kumar N (2013) A systematic review on clustering and routing techniques based upon LEACH protocol for wireless sensor networks. J Netw Comput Appl 36(2):623–645
11. Said BA, Abdellah E, Hssane AB, Hasnaoui ML (2010) Improved and balanced LEACH for heterogeneous wireless sensor networks. Int J Comput Sci Eng (IJCSE) 2
12. Chen G, Zhang X, Yu J, Wang M (2012) An improved LEACH algorithm based on heterogeneous energy of nodes in wireless sensor networks. In: 2012 international conference on computing, measurement, control and sensor network. IEEE, pp 101–104
13. Kumar N, Sandeep, Bhutani P, Mishra P (2012) U-LEACH: a novel routing protocol for heterogeneous wireless sensor networks. In: International conference on communication, information and computing technology (ICCICT). IEEE, pp 1–4
14. Manzoor B, Javaid N, Rehman O, Akbar M, Nadeem Q, Iqbal A, Ishfaq M (2013) Q-LEACH: a new routing protocol for WSNs. Procedia Comput Sci 19:926–931
15. Sharma V, Saini DS (2015) Performance investigation of advanced multi-hop and single-hop energy efficient LEACH protocol with heterogeneous nodes in wireless sensor networks. In: Second international conference on advances in computing and communication engineering. IEEE, pp 192–197
16. Amwary A, Maga D, Nahdi T (2016) Modified LEACH protocol for heterogeneous wireless networks. In: 2016 new trends in signal processing (NTSP). IEEE, pp 1–4
17. Gupta S, Marriwala N (2017) Improved distance energy based LEACH protocol for cluster head election in wireless sensor networks. In: 2017 4th international conference on signal processing, computing and control (ISPCC). IEEE, pp 91–96
18. Al-Baz A, El-Sayed A (2018) A new algorithm for cluster head selection in LEACH protocol for wireless sensor networks. Int J Commun Syst 31(1):e3407
19. Bhola J, Soni S, Cheema GK (2020) Genetic algorithm based optimized leach protocol for energy efficient wireless sensor networks. J Ambient Intell Humaniz Comput 11(3):1281–1288
20. Kodali RK, AVSK, Bhandari S, Boppana L (2015) Energy efficient m-level LEACH protocol. In: International conference on advances in computing, communications and informatics (ICACCI). IEEE, pp 973–979
21. Razaque A, Abdulgader M, Joshi C, Amsaad F, Chauhan M (2016) P-LEACH: energy efficient routing protocol for wireless sensor networks. In: IEEE Long Island systems, applications and technology conference (LISAT). IEEE, pp 1–5
22. Al Sibahee MA, Lu S, Masoud MZ, Hussien ZA, Hussain MA, Abduljabbar ZA (2016) LEACH-T: LEACH clustering protocol based on three layers. In: 2016 international conference on network and information systems for computers (ICNISC). IEEE, pp 36–40
23. Sony CT, Sangeetha CP, Suriyakala CD (2015) Multi-hop LEACH protocol with modified cluster head selection and TDMA schedule for wireless sensor networks. In: Global conference on communication technologies (GCCT). IEEE, pp 539–543
24. Tong M, Tang M (2010) LEACH-B: an improved LEACH protocol for wireless sensor network. In: 6th international conference on wireless communications networking and mobile computing (WiCOM). IEEE, pp 1–4
25. Saravanakumar R, Susila SG, Raja J (2010) An energy efficient cluster based node scheduling protocol for wireless sensor networks. In: 10th IEEE international conference on solid-state and integrated circuit technology. IEEE, pp 2053–2057
26. Marriwala N (2013) Routing protocols. Wirel Sensor Netw Theory Appl 11(6):6–28

Chapter 51

Performance Study of Ultra Wide Band Radar Based Respiration Rate Measurement Methods
P. Bhaskara Rao, Srinivas Boppu, and M. Sabarimalai Manikandan

1 Introduction

Respiration (breathing) and heartbeat rates are vital signs for observing the physiological health conditions of human subjects [1–8]. Many contact and non-contact biosensors are being developed for measuring the vital signs of subjects [1–14]. Although contact-based vital-sign monitors are more accurate and reliable, non-contact monitoring is in high demand due to its comfort in daily activities, the ease of measurement for infants/children or wounded subjects (e.g., burn victims), and its wide range of applicability (mounted on a wall, attached to a computer in working places, mounted in a driver's cabin, etc.) [1–5]. Among non-contact sensors, non-invasive microwave-radiation-based sensors are mostly preferred because of their flexible applicability for remotely extracting both the respiration rate (RR) and heart rate (HR) of a subject behind a wall, buried in debris, or wounded [1, 2]. The microwave Doppler radar is usually considered for the non-invasive detection of vital signs; in past studies, Doppler radar was used to detect human activities by analysing the Doppler shifts associated with breathing, heartbeat, walking and arm waving [11]. However, it was observed that the Doppler radar does not have good capability of penetrating materials [6].

This research work is carried out as part of "Prototype of Imaging Radar in UWB" under the support of an IMPRINT-II and MHRD Grant, Government of India.

P. B. Rao (B) · S. Boppu · M. Sabarimalai Manikandan
Biomedical System Lab, School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Jatani, Khordha 752050, India
e-mail: [email protected]
S. Boppu
e-mail: [email protected]
M. Sabarimalai Manikandan
e-mail: [email protected]


Compared with Doppler radar, the ultra-wideband (UWB) radar has been extensively explored for remote life detection and non-contact monitoring of the vital signs of human subjects at short distances through walls and debris, because of two major advantages of UWB signals: (i) better penetration capability and (ii) better localization of human subjects [3, 6]. The UWB radar usually transmits low-intensity electromagnetic waves with a large bandwidth (i.e., a short pulse period, usually a nanosecond or a picosecond). The waves backscattered from the human subject are modulated by the small body vibrations induced by the heartbeat and breathing [1]. The received UWB pulse echoes may contain signal components related to breathing and heart activity that are corrupted by body motion and environmental noise [3]. Moreover, the heartbeat signal is very weak and may be masked by breathing harmonics and clutter. Past studies showed that the detection of human activities using through-wall UWB radar has many important applications, such as security, vital-sign monitoring, tracking, and rescue in complex environments [4]. Some UWB-based life detection methods were studied for rescuing living persons trapped in buildings ruined by natural disasters.

1.1 Existing Methodologies

In past studies, various signal processing techniques were explored to extract the RR and/or HR parameter(s) of a single person or multiple persons in both normal and complex environments [1-14]. Some of the UWB radar based vital-sign extraction methods are summarized in this subsection. In [2], Chen et al. (1986) presented an X-band microwave life-detection system to detect the heartbeat and breathing rates of a subject lying on the ground at a distance of 30 m or sitting behind a cinder-block wall over 3 m away. In [3], Shyu et al. (2019) presented a simultaneous heartbeat and breath activity extraction method based on the first valley-peak of the intrinsic mode function (IMF) energy function (FVPIEF) with pseudo-bi-dimensional ensemble empirical mode decomposition (EEMD), used to remove the static clutter from the original UWB receiver data and to obtain the feature time index (FTI); the EEMD of a second-level PBD-IMF signal and the fast Fourier transform (FFT) were used to extract the breathing and heartbeat rates, and the FVPIEF based two-layer EEMD method was evaluated under both breath-holding and breathing conditions. In [4], Wang et al. (2019) presented through-wall detection of the moving paths and vital signs of human subjects using a moving-target model, the finite element method to simulate single-input multiple-output (SIMO) data, and multivariate EMD (MEMD) with FFT to separate and extract the respiratory characteristic frequencies; the results demonstrated that the combination of SIMO radar and MEMD can determine the moving path of human subjects behind a wall and estimate respiration rates effectively. In [5], Yang et al. (2019) presented a multi-breath framework for respiration monitoring of multiple persons using a commercial UWB radar with spatial-temporal information and RGB image processing techniques, including image smoothing, edge detection, dilation, and erosion, to identify the breathing cycles. In [8], Li et al. (2014) employed the


Curvelet transform to remove the direct coupling wave and background clutter, singular value decomposition for denoising, and the Hilbert-Huang transform (HHT) with FFT to extract the vital signs of trapped victims in complex environments from the micro-Doppler shift characteristics. In [9], Li et al. (2012) used the FFT and S-transform to extract the respiration rate and locate the position of static human subjects. In [10], Wang et al. (2009) employed the back-projection (BP) algorithm to obtain images of moving targets. In [11], Lai et al. (2008) performed HHT analysis of human activities using through-wall radar. In [7], Shen et al. (2018) proposed a PulsOn410 UWB radar based method to measure RR and HR using autocorrelation, FFT, and variational mode decomposition (VMD). In [14], Venkatesh et al. (2005) presented impulse-based UWB detection of chest-cavity motion and estimation of RR and HR using the concept of phase-modulation (PM) or frequency-modulation (FM) modulating signals.

1.2 Key Contributions of This Paper

In this paper, we investigate the performance of ultra wide band (UWB) impulse radar based respiration rate (RR) measurement methods for estimating the RR of a single person behind four types of wall: concrete, wood, glass, and brick. The key contributions of this paper are summarized below:

• We present a two-stage variational mode decomposition with different values of the data-fidelity constraint (α) for suppressing the effects of unwanted baseline drifts and extracting the respiratory signal.
• Three approaches, namely mode center frequency (MCF), fast Fourier transform (FFT), and autocorrelation function (ACF), are explored for extracting the RR parameter accurately.
• A validation UWB radar respiration-signal database is created under different obstacle scenarios: concrete-wall, wood-wall, glass-wall, and brick-wall.
• The three RR measurement methods are evaluated in terms of absolute error (AE), root mean-square error (RMSE), and processing time.

1.3 Organization of the Paper

The rest of this paper is organized as follows. Section 2 describes the experimental set-up for the creation of the UWB radar signal database used for performance validation. In Sect. 3, three VMD based signal processing methods are presented for estimating the RR parameter. Section 4 presents the evaluation results and a performance comparison of the three methods. Finally, conclusions are drawn in Sect. 5.


2 Experimental Set-Up and Signal Database Creation

To enable a fair evaluation and comparison of the methods, the signal database is created with different types of wall, which is the primary goal of this study.

2.1 Experimental Set-Up

In this study, the commercial XeThru X4 sensing module is used for acquiring UWB signals; it includes an impulse radar transceiver system-on-chip operating at 7.29 GHz [18]. The frequency band of the radar is 7.25-10.20 GHz, and the pulse repetition frequency (PRF) of the sensor is set to 15.1875 MHz [18]. The sensing module consists of a direct RF-sampling receiver, a fully programmable system controller, and advanced power management functions. The detection range of this radar is 5 m, and the detection zone is divided into range bins of 5.14 cm. In the receiver section, the receiving antenna picks up the reflected signal, which is passed through a low-noise amplifier. The received data are represented as a 2D matrix in which the rows index slow time (frames) and the columns index fast time (range bins) [18]. Using the XeThru X4 sensing module, we created a validation UWB signal database for different types of wall: wood, glass, concrete, and brick. Table 1 summarizes the wall specifications, the distance between subject and wall, the distance between wall and sensor, the wall thickness, and the number of subjects. Figure 1 shows the UWB radar signal recording scenarios.

Table 1 Specifications of experimental set-up

| Parameters / Wall material | Wood | Glass | Concrete | Brick |
|---|---|---|---|---|
| Wall width (cm) | 4.5 | 1 | 15 | 11 |
| Wall height (cm) | 90 | 300 | 300 | 70 |
| Wall length (cm) | 200 | 150 | 200 | 116 |
| Radar distance from obstacle (cm) | 40 | 40 | 20 | 40 |
| Radar height from ground (cm) | 40 | 90 | 90 | 40 |
| Chest height from ground (cm) | 38-42 | 88-92 | 88-92 | 38-42 |
| Person distance from obstacle (cm) | 100 | 100 | 50 | 100 |
| Single/Multiple persons | Single | Single | Single | Single |
| Number of participants | 10 | 10 | 10 | 10 |
| Operating frequency^a (GHz) | 7.29 | 7.29 | 7.29 | 7.29 |
| Operating band^a (GHz) | 6.02-8.5 | 6.02-8.5 | 6.02-8.5 | 6.02-8.5 |
| Power^a (mW) | 600 | 600 | 600 | 600 |

Note: ^a From the data sheet of the XeThru X4 sensing module


Fig. 1 Experimental set-up for UWB radar signal databases: a brick-wall, b wood-wall, c concrete-wall, and d glass-wall

2.2 Validation Signal Database Creation

In this study, ten subjects participated voluntarily in creating the UWB radar signal database, and the purpose of the study was explained to them. For each subject, we recorded 4 min of UWB radar signal for the different obstacles and recording specifications listed in Table 1; each record contains 17 samples (frames) per second. For each obstacle, UWB radar signals were recorded both with and without a subject in order to understand the patterns of the UWB radar signals. The experiments were carried out in indoor environments. In addition, we recorded reference respiration signals using a chest-belt respiration sensor; the chest-belt signals were sampled at 500 Hz and down-sampled to the required sampling rate as per the design.
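The down-sampling of the reference signal can be done with a polyphase resampler. A minimal sketch, assuming the 500 Hz chest-belt rate and the 17 frames/s radar rate stated above, and using scipy as one possible implementation (the function name is illustrative):

```python
import numpy as np
from scipy.signal import resample_poly

FS_BELT = 500   # chest-belt respiration sensor sampling rate (Hz)
FS_RADAR = 17   # radar frame rate (frames per second)

def downsample_reference(belt_signal: np.ndarray) -> np.ndarray:
    """Resample the chest-belt reference from 500 Hz to the radar rate.

    resample_poly applies an anti-aliasing filter and resamples by the
    rational factor up/down = 17/500.
    """
    return resample_poly(belt_signal, up=FS_RADAR, down=FS_BELT)
```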


Fig. 2 Received signals: a through wood-wall without subject, b through glass-wall without subject, c through concrete-wall without subject and d through brick-wall without subject

2.3 Characteristics of UWB Radar Signals

For the behind-the-wall recording scenario, Fig. 2 shows the signals extracted by the UWB radar with and without subjects. It is noted that the magnitude of the signals extracted without a subject is smaller than the magnitude of the signals recorded with a subject. Further, the signals recorded without a subject have more turning points (i.e., abrupt variations in amplitude) than the signals recorded with a subject. The recordings show that the signals are corrupted by very-low-frequency components (baseline drifts).

3 Respiration Rate Measurement Methods

Figure 3 illustrates a simplified block diagram of the respiration rate measurement methods using two-stage variational mode decomposition (VMD), fast Fourier transform (FFT), and autocorrelation function (ACF) techniques. Each stage of the block diagram is described in detail below.


Fig. 3 A simplified block diagram of respiration rate measurement methods using two-stage variational mode decomposition (VMD), fast Fourier transform (FFT) and autocorrelation function (ACF) techniques

3.1 Respiration-Related Bin Selection

In real-time sensing, the radar produces sets of reflected waves, some of which correspond to the chest movement of the person. Assume that the received radar matrix includes K range bins, each containing L samples. Among the K bins, some exhibit high amplitude due to the chest motion of a person. In this study, a variance feature is used to select the dominant range bin corresponding to the person behind the wall. The variance of the k-th bin is computed as

V_k = \sum_{l=1}^{L} (R_{l,k} - \bar{R}_k)^2,   (1)

where R_{l,k} represents the sample value of the radar matrix at the l-th frame and k-th range bin, and \bar{R}_k is the mean of all samples of that range bin. After computing the variance of each bin, the range bin with the maximum variance is selected as

[BinMax, BinLoc] = max(V),   (2)

where BinMax denotes the maximum bin variance and BinLoc denotes the index of the maximum-variance bin. The signal of the maximum-variance bin is further processed to compute the respiration rate.
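As an illustration of Eqs. (1) and (2), here is a minimal NumPy sketch of the variance-based bin selection; the array layout (frames in rows, range bins in columns) follows the description above, and the function name is ours:

```python
import numpy as np

def select_respiration_bin(radar_matrix: np.ndarray):
    """Select the dominant range bin of an L x K radar matrix.

    radar_matrix: rows are slow-time frames (l), columns are range bins (k).
    Returns (bin_max, bin_loc) as in Eq. (2).
    """
    # Eq. (1): variance of each range bin over slow time
    v = np.sum((radar_matrix - radar_matrix.mean(axis=0)) ** 2, axis=0)
    bin_loc = int(np.argmax(v))   # location of the maximum-variance bin
    return float(v[bin_loc]), bin_loc
```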

3.2 Variational Mode Decomposition

To overcome the drawbacks of wavelet transform (WT) and empirical mode decomposition (EMD) based techniques, the variational mode decomposition (VMD) technique is investigated to decompose a multicomponent signal x(t) into a finite number of band-limited intrinsic mode functions (modes) [15-17]. The


decomposition is performed by solving a constrained variational optimization problem; see [15] for the details of variational mode decomposition. The VMD technique has the following input parameters: the data-fidelity constraint (α), the number of modes (K), the initialization of the center frequencies (ω), and the tolerance of the convergence criterion (ε). In this study, a two-stage VMD strategy is used to remove low-frequency baseline drifts and high-frequency noise. The first stage is designed to remove the trend or baseline drift from the respiration-related signal x[n]; it uses P = 2 modes and α = 100,000. Based on the respiration frequency range, a mode is selected and subtracted from the original signal x[n]. In the second stage, the new candidate signal f[n] is decomposed using VMD with P = 2 modes and α = [1000, 10]. The main objective of this study is to find a simple method to estimate the RR accurately and reliably. We present three respiration rate (RR) measurement methods for extracting the RR from the received signal. For all three methods, the received signal is decomposed by VMD with data-fidelity constraint α = [1000, 10] and number of modes P = 2, and suitable modes are selected within the RR range of 0.05-0.8 Hz, which covers different age groups and recording scenarios such as resting, ambulatory, and exercise conditions. The three VMD based RR estimation methods are summarized below (a small code sketch follows the list):

• VMD-MCF based RR estimation: the RR is estimated directly from the mode center frequencies of the second-stage VMD.
• VMD-FFT based RR estimation: the RR is computed from the FFT magnitude spectrum of a mode selected from the second-stage VMD.
• VMD-ACF based RR estimation: the RR is computed from the autocorrelation of a mode selected from the second-stage VMD.
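A minimal NumPy sketch of the three estimators is given below, assuming a respiration-band mode (and its center frequency) has already been obtained from the second-stage VMD; the 17 frames/s rate and all names are illustrative, not from the paper:

```python
import numpy as np

FS = 17.0  # radar frame rate (frames/s) used in this study

def rr_from_mcf(mode_center_freq_hz: float) -> float:
    """VMD-MCF: RR directly from the selected mode's center frequency."""
    return mode_center_freq_hz * 60.0

def rr_from_fft(mode: np.ndarray, fs: float = FS) -> float:
    """VMD-FFT: dominant spectral peak of the selected mode in 0.1-0.8 Hz."""
    freqs = np.fft.rfftfreq(len(mode), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(mode - mode.mean()))
    band = (freqs >= 0.1) & (freqs <= 0.8)        # 6-48 breaths/min
    f_r = freqs[band][np.argmax(spectrum[band])]
    return f_r * 60.0                             # Eq. (3)

def rr_from_acf(mode: np.ndarray, fs: float = FS) -> float:
    """VMD-ACF: dominant autocorrelation peak within the RR lag range."""
    x = mode - mode.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_lo, lag_hi = int(fs / 0.8), int(fs / 0.1)  # lags for 0.8 down to 0.1 Hz
    lag = lag_lo + int(np.argmax(acf[lag_lo:lag_hi]))
    return 60.0 * fs / lag
```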

3.2.1 Spectral Peak Finding Logic

In this study, we assume that the RR varies from 6 to 48 breaths/min, which corresponds to the frequency range 0.1-0.8 Hz. Thus, we find the location F_R of the dominant spectral peak within the Fourier spectrum between 0.1 and 0.8 Hz. The respiratory rate is computed as

RR = F_R × 60 (breaths/min).   (3)

Preliminary evaluation results of the VMD based RR estimation method are shown in Figs. 4 and 5 for a subject behind the wood-wall and the concrete-wall, respectively.


Fig. 4 Performance of the VMD scheme for extracting candidate respiratory signals and comparing the extracted RR with reference RR using contact-sensor

Fig. 5 Performance of the VMD scheme for extracting candidate respiratory signals and comparing the extracted RR with reference RR using contact-sensor


4 Results and Discussion

This section presents the evaluation results of the three respiration rate measurement methods. The respiration rates estimated from the chest-belt sensor signals are used as the reference for validating the RR estimated from the UWB radar signals. The validation database contains 40 records with a total duration of 160 min. As in past studies of RR estimation, the measurement accuracy is assessed in terms of absolute error (AE) and root mean square error (RMSE), both in breaths/min. The RMSE metric is computed as

RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (RR_{ref}(i) - RR_{est}(i))^2}.   (4)

The RMSE is computed for each subject. The AE estimates the discrepancy between the reference and the derived RR and is computed as

AE_i = |RR_{ref}(i) - RR_{est}(i)|,   (5)

where RR_{ref}(i) denotes the RR of the original respiratory signal and RR_{est}(i) denotes the RR of the extracted respiratory signal for the i-th observation. The RR measurement from the chest-belt sensor is used as the reference RR.
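For reference, a direct NumPy rendering of Eqs. (4) and (5); the array names are hypothetical:

```python
import numpy as np

def rr_errors(rr_ref: np.ndarray, rr_est: np.ndarray):
    """Return per-observation AE, Eq. (5), and the overall RMSE, Eq. (4)."""
    ae = np.abs(rr_ref - rr_est)
    rmse = float(np.sqrt(np.mean((rr_ref - rr_est) ** 2)))
    return ae, rmse
```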

4.1 Performance for Different Block Durations

We investigate the RR estimation performance of the three methods for block durations of 20 and 30 s. The evaluation results are presented in Table 2. The results show that the VMD-MCF and VMD-FFT based methods had lower RMSE for the 30 s block duration than for 20 s, except for the brick-wall case of the VMD-MCF method. It is further noted that the VMD-ACF based method had lower RMSE for the 20 s block duration than for 30 s, except for the wood-wall case. We use the 30 s block duration for the further performance comparison.


Table 2 Performance comparison for different block durations: RMSE (breaths/min) represented as median (25th, 75th percentiles)

| Wall material | Block (s) | VMD-MCF | VMD-FFT | VMD-ACF |
|---|---|---|---|---|
| Wood | 20 | 1.751 (1.285, 2.3308) | 0 (0, 1.060) | 0.731 (0.624, 1.380) |
| Wood | 30 | 1.366 (0.946, 1.574) | 0.816 (0.755, 0.816) | 0.700 (0.637, 3.922) |
| Glass | 20 | 2.999 (1.583, 3.198) | 0.530 (0, 1.060) | 2.362 (0.515, 4.731) |
| Glass | 30 | 1.881 (1.318, 3.269) | 0.816 (0, 1.154) | 4.937 (0.768, 5.738) |
| Concrete | 20 | 1.772 (1.532, 2.093) | 0.5303 (0, 1.5) | 2.958 (0.654, 4.739) |
| Concrete | 30 | 1.585 (1.455, 2.258) | 0.816 (0, 1.154) | 3.842 (0.832, 4.936) |
| Brick | 20 | 1.854 (1.696, 2.233) | 1.060 (1.0607, 1.50) | 4.504 (1.089, 6.078) |
| Brick | 30 | 2.480 (1.638, 3.232) | 0.816 (0, 1.154) | 4.632 (0.695, 7.490) |

Fig. 6 Performance of the three RR estimation methods for each of the subjects behind different walls

4.2 Subject-Wise RR Estimation Performance

The performance of the three methods is evaluated using the signals recorded from ten subjects; for each subject, 4 min of UWB radar signal were recorded in the wood, concrete, glass, and brick wall scenarios. The results for a block duration of 30 s are presented in Fig. 6. For most subjects, the three methods had acceptable RR estimation error. Across wall types, all methods performed well, except for the subjects behind the glass-wall.


Table 3 Performance comparison: AE (breaths/min) and RMSE (breaths/min) represented as median (25th, 75th percentiles). Block duration: 30 s

| Wall material | Method | RMSE | AE | Processing time (s) |
|---|---|---|---|---|
| Wood | MCF | 1.3661 (0.9463, 1.5744) | 0.8391 (0.2552, 1.6753) | 0.1081 |
| Wood | FFT | 0.8165 (0.7559, 0.8165) | 0 (0, 0) | 0.1107 |
| Wood | ACF | 0.7008 (0.6376, 3.9229) | 0.5357 (0.3064, 0.7981) | 0.1223 |
| Glass | MCF | 1.8819 (1.3188, 3.2694) | 1.1984 (0.6464, 2.1457) | 0.1156 |
| Glass | FFT | 0.8165 (0, 1.1547) | 0 (0, 0) | 0.1183 |
| Glass | ACF | 4.9375 (0.7680, 5.7385) | 0.6304 (0.3202, 1.0048) | 0.1301 |
| Concrete | MCF | 1.5853 (1.4556, 2.2585) | 1.2002 (0.5444, 2.0050) | 0.1095 |
| Concrete | FFT | 0.8165 (0, 1.1547) | 0 (0, 0) | 0.1140 |
| Concrete | ACF | 3.8424 (0.8323, 4.9360) | 0.5073 (0.3038, 1.7642) | 0.1250 |
| Brick | MCF | 2.4800 (1.6381, 3.2321) | 1.3739 (0.6347, 2.8892) | 0.1090 |
| Brick | FFT | 0.8165 (0, 1.1547) | 0 (0, 0) | 0.1132 |
| Brick | ACF | 4.6321 (0.6955, 7.4901) | 0.7278 (0.2176, 2.7391) | 0.1270 |

4.3 Performance Comparison

Table 3 summarizes the performance of the three RR estimation methods based on the mode center frequency (MCF), fast Fourier transform (FFT), and autocorrelation function (ACF). The results show that the VMD-FFT based method outperforms the VMD-MCF and VMD-ACF based methods in terms of absolute error (AE) and root mean square error (RMSE), reported as median (25th, 75th percentiles). Based on our studies, the RR can be estimated with acceptable error for a single subject behind the brick, concrete, glass, and wood walls. Further, the VMD-MCF based method had lower processing time than the VMD-FFT and VMD-ACF based methods; in all cases, the processing time is on the order of a hundred milliseconds for 30 s of data.

5 Conclusion

In this paper, we studied the performance of three RR estimation methods based on the variational mode decomposition (VMD) technique combined with the mode center frequency (MCF), fast Fourier transform (FFT), and autocorrelation function (ACF), using UWB signals recorded from subjects behind four types of wall: wood, brick, concrete, and glass. In terms of AE and RMSE, the VMD-FFT based method outperforms the other two methods; in terms of processing time, the VMD-MCF based method performs best. This study showed that the UWB sensing module is capable of estimating the RR of subjects


behind wood, brick, concrete, and glass walls. In future work, we will further study the performance for recording scenarios with multiple subjects and moving subject(s) behind the wall.

References

1. Pedersen PC, Johnson CC, Durney CH, Bragg DG (1976) An investigation of the use of microwave radiation for pulmonary diagnostics. IEEE Trans Biomed Eng BME-23:410-412
2. Chen K, Misra D, Wang H, Chuang H, Postow E (1986) An X-band microwave life-detection system. IEEE Trans Biomed Eng BME-33(7):697-701
3. Shyu KK, Chiu LJ, Lee PL, Tung TH, Yang SH (2019) Detection of breathing and heart rates in UWB radar sensor data using FVPIEF-based two-layer EEMD. IEEE Sens J 19(2):774-784
4. Wang K, Zeng Z, Sun J (2019) Through-wall detection of the moving paths and vital signs of human beings. IEEE Geosci Remote Sens Lett 16(5):717-721
5. Yang Y, Cao J, Liu X (2019) Multi-breath: separate respiration monitoring for multiple persons with UWB radar. In: 2019 IEEE 43rd annual computer software and applications conference, vol 1. IEEE, pp 840-849
6. Liang X, Wang Y, Wu S, Gulliver TA (2018) Experimental study of wireless monitoring of human respiratory movements using UWB impulse radar systems. Sensors 18(9):3065
7. Shen H et al (2018) Respiration and heartbeat rates measurement based on autocorrelation using IR-UWB radar. IEEE Trans Circuits Syst II Express Briefs 65(10):1470-1474
8. Li J, Liu L, Zeng Z, Liu F (2014) Advanced signal processing for vital sign extraction with applications in UWB radar detection of trapped victims in complex environments. IEEE J Sel Top Appl Earth Obs Remote Sens 7(3):783-791
9. Li J, Zeng Z, Sun J, Liu F (2012) Through-wall detection of human being's movement by UWB radar. IEEE Geosci Remote Sens Lett 9(6):1079-1083
10. Wang H, Narayanan RM, Zhou ZO (2009) Through-wall imaging of moving targets using UWB random noise radar. IEEE Antennas Wirel Propag Lett 8:802-805
11. Lai CP, Narayanan RM, Ruan Q, Davydov A (2008) Hilbert-Huang transform (HHT) analysis of human activities using through-wall noise and noise-like radar. IET Radar Sonar Navig 2(4):244-255
12. Yarovoy AG, Zhuge X, Savelyev TG, Ligthart LP (2007) Comparison of UWB technologies for human being detection with radar. In: European radar conference, pp 295-298
13. Bugaev AS, Vasil'ev IA, Ivashov SI, Chapurskii VV (2006) Radar methods of detection of human breathing and heartbeat. J Commun Technol Electron 51(10):1154-1168
14. Venkatesh S, Anderson CR, Rivera NV, Buehrer RM (2005) Implementation and analysis of respiration rate estimation using impulse-based UWB. In: IEEE military communications conference (MILCOM) 2005, vol 5. IEEE, pp 3314-3320
15. Dragomiretskiy K, Zosso D (2014) Variational mode decomposition. IEEE Trans Signal Process 62(3):531-544
16. Simhadri V, Sabarimalai Manikandan M (2016) Effective systolic peak detection algorithm using variational mode decomposition and center of gravity. In: Proceedings of the IEEE region 10 conference (TENCON). IEEE, Singapore, pp 2711-2715
17. Deshpande PS, Sabarimalai Manikandan M (2018) Effective glottal instant detection and electroglottographic parameter extraction for automated voice pathology assessment. IEEE J Biomed Health Inform 22(2):398-408
18. X4M200 respiration sensor data sheet. https://www.codico.com/shop/media/datasheets

Chapter 52

Secure Architecture for 5G Network Enabled Internet of Things (IoT)

Voore Subba Rao, V. Chandra Shekar Rao, and S. Venkatramulu

1 Introduction

The 5G network enabled internet of things (IoT) is an important technology component for developing the industrial internet of things (IIoT). The present system architecture cannot support upcoming new IoT applications. The proposed next-generation architecture for 5G-enabled IoT is designed to support upcoming application services and secure participation in the internet of things [1]. This paper proposes a 5G architecture based on new technologies such as machine-to-machine (M2M) communication, 5G-IoT, multi-access edge computing (MEC), network functions virtualization (NFV), and mobile cloud computing (MCC). The security architecture is designed to manage network attacks and protect the layers of the 5G-IoT architecture. The proposed 5G-enabled IoT architecture is flexible, simple, effective, layered, and more secure, supporting upcoming application technologies, user service demands, huge data volumes, and upcoming industrial internet applications [2]. The fourth industrial revolution (4IR), or Industry 4.0, stands for popular and innovative technology services, huge data management, and highly demanded services [3]. Industry 4.0 technically stands for recent applications such as the industrial internet of


things (IIoT) [4], cyber-physical systems (CPS) [5], big data and data analytics [6], and many more innovative applications. The IoT supports communication among connected nodes, whose number is expected to grow to more than 75 billion by 2025 [7]. To serve upcoming applications, the present IoT architecture should be modified to be reliable and responsive to future challenges. IoT security issues such as authentication, authorization, data confidentiality, and secure protection of client data are the major challenges; a security taxonomy must be implemented to manage cyber-attacks and prevent unauthorized access [8]. This paper proposes a security architecture for the next-generation 5G-enabled IoT that can manage billions of network devices. The authors also introduce a layered security design to protect against cyber-attacks that affect the different layers of the architecture. The paper is organized as follows. Section 2 gives an overview of fourth-generation (4G) and fifth-generation (5G) enabled IoT applications. Section 3 reviews various architectures from the literature. Section 4 compares these architectures with the proposed architecture. Section 5 presents the proposed next-generation architecture. Section 6 concludes the paper and outlines future work, followed by the references.

2 Overview of Fourth Generation (4G) and Fifth Generation (5G) Enabled IoT Applications

Present fourth-generation (4G) technology provides wireless communication support with long term evolution (LTE) technology. The innovative characteristics of 4G networks are access to information with a flawless connection anytime and anywhere, a wide range of services, and the delivery of greater amounts of information, pictures, data, and video. User traffic, air interfaces, quality of service, and the radio environment are major characteristics of 4G services. 4G supports end-to-end QoS and high-security services with effective and efficient connection to network applications, providing 100 Mbps for high mobility and 1 Gbps for low mobility, anytime and anywhere, at affordable cost. However, the volume of data carried by wireless communication technology is increasing day by day, as discussed in the annual Visual Networking Index (VNI) reported by Cisco [9]. As per the VNI report, fourth-generation networking cannot support future applications or handle the incrementally growing network, application, and computing loads of the coming years; a new technology is needed that is fast, reliable, and affordable. 5G is the upcoming fifth-generation wireless broadband technology based on the IEEE 802.11ac standard. The long term evolution (LTE) technology of 4G becomes LTE-Advanced (LTE-A) for 5G, which is the evolution of the original 4G LTE technology


and is designed and implemented for higher bandwidths. 5G is the next generation of mobile broadband and a true enabler of the internet of things (IoT), artificial intelligence (AI), and Industry 4.0. 5G uses higher radio frequencies to achieve speeds up to 1000 times faster than its predecessor, 4G; for example, downloading a two-hour movie would have taken 26 h on 3G and 6 min on 4G, while on 5G it will take just 3.6 s [10]. Another big difference is the number of devices 5G can support: current 4G networks support around 4000 devices per square kilometre, whereas 5G can support up to 1 million [11]. Latency for 4G is around 20-30 ms, whereas for 5G it will reach well below 10 ms [11]. Wireless networks up to 4G mostly focused on the availability of raw bandwidth, while 5G aims at providing users fast and resilient access to the Internet. 5G networks will be built from a combination of technologies such as 2G, 3G, LTE, LTE-A, Wi-Fi, and M2M; in other words, 5G will be designed to support a variety of applications such as the IoT, connected wearables, and graphics-intensive video gaming. A 5G network will offer the ability to handle various types of connected devices and different traffic types; for example, it will provide ultra-high-speed links for HD video streaming as well as low-data-rate links for sensor networks. 5G will also support cognitive radio techniques that allow the infrastructure to decide automatically on the type of channel to offer and to adapt to conditions within a reasonable time. Most experienced wireless communication professionals agree that when 5G replaces 4G LTE, it should meet three key needs: decreased latency of less than one second, increased data rates of at least one gigabit per second for tens of thousands of users simultaneously, and increased energy efficiency with fast and reliable connectivity.

3 Literature Review

This section reviews the existing IoT architectures proposed by various authors, as follows.

1. Three-level architecture—This IoT architecture has a sensing layer, a network layer, and an application layer. The perception layer (physical layer), which senses objects, is the bottom layer of the IoT architecture. The network layer, which provides network access, is the middle layer [12]. The application layer, which supports users, is the top layer [13].


2. SDN-based architecture—Qin et al. [14] designed an IoT architecture for heterogeneous wireless network environments that provides reliable quality of service (QoS) for managing internet of things processes.
3. Quality-of-service-based architecture—Jin et al. [15] proposed four types of IoT architecture for innovative smart-city applications. The first is autonomous, used for internet-disconnected networks. The second is ubiquitous, in which smart-things networks (STNs) are related to the super-network, i.e., the internet. The third is application-level overlay, which uses network functions virtualization (NFV) to reduce latency and congestion for all nodes in the network [16]. The fourth is service-oriented-task, in which specific gateways communicate with the internet of things to handle heterogeneity.
4. SOA-based architecture—This service-oriented architecture (SOA) has four layers: (1) perception layer, (2) network layer, (3) service layer, and (4) application layer. In the SOA-based architecture, the perception-layer service senses, stores, analyses, and finalizes the data attached to physical devices [17].
5. Mobility-first architecture—Li et al. [18] introduced an upcoming future-internet architecture (FIA) known as the mobility-first architecture, in which smartphones act as gateways of WSANs in internet of things systems.
6. Cloud-things architecture—Zhou et al. [19] proposed a cloud-of-things architecture for cloud-based IoT application areas. In [20], Hao et al. proposed a data-clouds architecture for information-centric networking (ICN) to improve application-oriented services for the upcoming generation of the internet.
7. IoT-A-based architecture—Pohls et al. [21], encouraged by IoT-A, designed a platform-oriented framework for the RERUM FP7 European Union project [15], which supports IoT application processes with an authentication procedure and security by design.
8. Social internet of things (S-IoT) architecture—Atzori et al. [22] merged the internet of things with social networks and described what the social internet of things (S-IoT) actually is. S-IoT allows things to be combined in a social network; software simulations analyse and execute the components of the proposed network structure.

The IoT architectures described above are currently used in industry, but they do not support the upcoming challenges of new applications and user service demands in the internet of things (IoT).


4 Comparison of Architectures with Proposed Architecture

This section compares the architectures reviewed in the literature survey with the proposed architecture. The current IoT architectures cannot support the service requirements of upcoming IoT applications [15]. 5G communication technology focuses on key properties: simple manageability, high reliability, strong security, high bandwidth, flexibility for fast troubleshooting, wide-area network coverage, low deployment cost, and reliability. Based on the literature survey, Table 1 compares the various architectures against these criteria; the same comparison is shown as a bar graph in Fig. 1.

5 Proposed Architecture for Next Generation

5G network enabled IoT proposed security architecture—The 5G architecture applies security methods, security analysis, and security services against attacks to protect internet of things applications in the upcoming 5G technology architecture [23].

Application Layer—The application layer operates on heterogeneous applications. Its services, such as user access and network access, process heterogeneous applications and provide security features such as authentication, authorization, trust establishment, and management of various resource methods. A service-based architecture (SBA) has been proposed for the 5G core network to deliver the essential security services. Two such application-layer security services in 5G are as follows. The security anchor function (SEAF) resides in a serving network and acts as a "middleman" during the authentication process between a UE and its home network; it can reject an authentication from the UE, but it relies on the UE's home network to accept the authentication. The authentication server function (AUSF) resides in a home network and performs authentication with a UE; it makes the decision on UE authentication, but it relies on a backend service for computing the authentication data and keying materials when 5G-AKA or EAP-AKA is used.

Network Layer—Current IoT systems are based on a generalized architecture, so an attacker can easily threaten devices in the network. 5G establishes particular authentication and key-adjustment mechanisms, wireless public key infrastructure (WPKI), private routing, interference detection, etc. Network functions virtualization (NFV) provides a dedicated virtualized network operating on commodity hardware; by applying NFV, network routing, load balancing, and firewall security can be realized in virtual machines (VMs). A next generation network (NGN) is a packet-based network that can be used for both telephony and data and that supports mobility; an NGN is sometimes referred to as an all-IP network.

Table 1 Comparison of various architectures with the proposed architecture

| Application-type / Architecture-type | Three level | SDN | QoS | SoA-based | IoT-A-based | S-IoT | Proposed |
|---|---|---|---|---|---|---|---|
| Low latency | Not-support | Not-support | Not-support | Not-support | Not-support | Not-support | Support |
| Application-type support | Not-support | Not-support | Not-support | Not-support | Not-support | Not-support | Support |
| Robustness of connection | Not-support | Support | Support | Not-support | Support | Not-support | Support |
| Wide coverage | Not-support | Support | Support | Support | Not-support | Not-support | Support |
| Data types support | Not-support | Not-support | Not-support | Not-support | Not-support | Not-support | Support |
| Re-configurability | Not-support | Support | Support | Not-support | Support | Not-support | Support |
| Security | Not-support | Not-support | Not-support | Not-support | Support | Not-support | Support |


Fig. 1 Bar chart representation for comparison of various architectures

The fundamental aspects of an NGN are packet-based transfer, generalized mobility, and broadband capabilities with end-to-end QoS (Fig. 2).

Data Link Layer—In the data link layer, applications are available via shared resources. The security measures of the data link layer are trust and privacy protection; protecting personal users' data is essential.

Communication Layer—This is the main channel between the application layer and the different operating activities in the IoT system, and it is considered the backbone of IoT systems. The whole physical system is loaded with large amounts of data and information that need to be shared with other computing nodes in the network.

Key Management—Key management is an important security concept for authentication in wireless sensor networks, covering the exchange of a shareable key between the wireless sensor network and the cloud environment.

Secret Key Algorithms—Symmetric key algorithms are frequently used in wireless sensor networks. Useful symmetric and asymmetric key algorithms include Skipjack and RC5 (symmetric) and RSA and ECC (asymmetric).

Security Routing Protocol—Popular routing protocols secure routing with techniques such as data fusion mechanisms, multi-hop mechanisms, and key mechanisms. The secure network encryption protocol (SNEP) guarantees integrity, freshness, and point-to-point authentication. The micro timed efficient streaming loss-tolerant authentication protocol (µTESLA) is a time-based protocol that supplies multipoint broadcast authentication. Lightweight public key authentication, pre-shared key (PSK), and random key pre-distribution are the main authentication techniques.

Perception Layer—The perception layer, also known as the physical layer in IoT, has three notable characteristics. First, it depends on the strength of wireless signals, which are transmitted between the sensor nodes of the internet of things using wireless technology and can be disturbed by interfering waves.


Fig. 2 Proposed architecture for 5G-enabled IoT

Second, the sensor nodes of the IoT operate in external and outdoor environments, which can lead to physical attacks in which an attacker tampers with the hardware components of a device. Third, the network topology is inherently dynamic, as nodes move between different places. The IoT perception layer mostly consists of sensors and RFIDs, which have little storage capacity, low power consumption, and limited computation capability, so threats and attacks occur frequently [23, 24]. Radio frequency identification in the perception layer [25] requires security measures such as access control, data encryption, and cryptography technology. This layer's main responsibility is to collect useful information/data from things or the environment. Access control is the part of security that constrains the actions performed in a system based on access-control rules; from the access-control point of view, IoT devices preserve information via RFID tags (processor certainty, analysis of antenna energy, label failures, etc.). Data encryption is the translation of data into a secret code and is the most effective way to achieve data security; to read an encrypted file, one must have access to the secret key or password that enables decryption. Unencrypted data is called plain text, and encrypted data is referred to as cipher text. Data encryption in RFID can use a secure non-linear key algorithm for security [26].


Fig. 3 Proposed IoT layers with security

The RFID system thereby supports user privacy protection with authenticity, integrity, and confidentiality security features (Fig. 3).
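To illustrate the pre-shared key (PSK) authentication mentioned under the security routing protocols, the sketch below computes and verifies a message authentication code with a shared secret, using only the Python standard library; the framing and key handling are our assumptions, not part of the proposed architecture:

```python
import hmac
import hashlib
import os

PSK = os.urandom(32)  # pre-shared key, provisioned on both device and network

def tag_message(payload: bytes, key: bytes = PSK) -> bytes:
    """Sender side: append an HMAC-SHA256 tag for integrity/authenticity."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(frame: bytes, key: bytes = PSK) -> bytes:
    """Receiver side: check the tag in constant time, return the payload."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```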

5.1 Security Services of 5G-IoT

Data sent over the 5G radio network can be secured with the international mobile subscriber identity (IMSI) encryption method; data are encrypted, integrity-checked, and protected from the device to the network. The technologies with the most security impact on 5G, such as software defined networking (SDN), network function virtualization (NFV), and edge computing, are integrated with 5G to further enhance security levels. The 3rd generation partnership project (3GPP) defines the interfaces through which physical as well as virtual devices communicate with the radio access network (RAN), and core network technologies are used for remotely connected devices.


In 5G, network slicing [27] is used to manage user network traffic, provide a smooth dedicated routing path, and enable virtual connectivity over physical connections; subnetworks and core networks are created by network slicing. Network slicing is useful for providing secure channels for shared resource technologies: every slice has its own manageable security policies that provide authentication for the shared resources, and the slices are designed so that they can manage shared resources in critical services as conditions require. The most important design aspects of critical-service access are reliability, safety, timeliness, security, and privacy, and the security policies must autonomously protect the communication devices and their secure connections. 3GPP's 5G technology builds on the proven security services of 4G systems; the security mechanisms of the 3GPP system are encryption, authorization, authentication, and user integrity. 3GPP networks provide reliable access links for non-threatened security data, but they must especially be protected from distributed denial-of-service (DDoS) attacks through implementation and deployment services; for example, when DDoS or jamming attacks occur, 3GPP services manage them by re-routing traffic through other base stations. 5G applies a security mechanism against privacy threats to end users that protects user identifiers; a feature of the 5G network is to protect the authentication and privacy of the end-user system when internet applications are used. The 3rd generation partnership project (3GPP) cannot solve every privacy threat outside the 5G network; however, the 5G network protects the messages a social media user has sent while they traverse the mobile radio access network (RAN) and the 5G network system. The social media service knows that messages are protected end-to-end as they pass through the internet after leaving the 5G network system, and that the user's protected private data reaching its servers is stored and further processed there. Figure 4 shows the importance of a horizontal security system, which provides security controls for various domains: telecommunication networks, radio units, baseband units, transport networks, and network support services such as the domain name system, the dynamic host configuration protocol, and security management systems.

Fig. 4 System-wise security by 5G


The 5G horizontal security system targets the availability of services as well as the confidentiality and integrity of the data sent over the network.

5.2 Quality of Service (QoS) of 5G-IoT

5G requires higher values of the QoS parameters to meet the objectives of 5G mobile networks: increased speeds, greater network capacity, and growth in the number of wireless devices. QoS refers to the technologies that manage data traffic by reducing latency, packet loss, and jitter on the network. Regarding the role of QoS in 5G network services, professionals in the 5G area expect video technologies, such as HD and UHD video, to be the dominant services among all 5G services. Additionally, the number of mobile devices using YouTube and CCTV monitoring over M2M communications grows from about a billion in 2016 to 6.1 billion in 2021, as shown in Fig. 5, and by 2022 approximately 2.6 billion machine-to-machine (M2M) connections will be added. It is therefore important to improve the quality-management mechanisms and algorithms that affect video technologies and the scale of machine-to-machine (M2M) network communication. Considering the growth in video and M2M connections, QoS in 5G networks will be able to prioritize video and voice over internet protocol (VoIP) traffic over web browsing, and quality will be improved by bounding the packet delay with confidence. The main idea of this paper is an internet of things security architecture designed to support upcoming applications and client-demanded services in a heterogeneous environment. It focuses on the upcoming architecture for 5G-enabled internet of things (IoT) and proposes the security services and quality of service (QoS) mechanisms of a full-fledged layered technology serving newly upcoming application services and a customer-satisfaction-oriented service architecture.

Fig. 5 Quantity of M2M connectivity in mobile



6 Conclusion and Future Work

The proposed 5G-network-based IoT security architecture addresses the upcoming high-demand processing tasks of the industrial internet of things (IIoT). The technology supports client-demanded services, providing availability effectively and on time. The new architectural model also supports new 5G-enabled internet of things (IoT) technologies such as device-to-device (D2D) connectivity for communication-oriented networks, machine-type communication (MTC), wireless network function virtualization (WNFV), wireless software defined networks (WSDN), mobile edge computing (MEC), mobile cloud computing (MCC), and machine-to-machine (M2M) communication. The proposed architecture is a 5G-enabled IoT security platform protecting against various attacks on data, systems, and tiny wireless sensor network devices. It is flexible, efficient, layered, reliable, and effective, and it is capable of interacting with, enhancing, and executing future large-scale applications with demanding data and technology-support requirements. The authors also added security and quality of service (QoS) mechanisms to the 5G-IoT layered technology, giving end-to-end security and QoS features that make a full-fledged layered architecture able to enhance and enable IoT services as customer-oriented demands evolve. In future work, we plan to develop efficient algorithms for the security services and quality of service (QoS) of 5G network enabled IoT to enhance the services for upcoming applications.

Acknowledgments We thank God for His blessings during the preparation of this research paper and for the innovative thoughts that helped complete this work successfully. We express our sincere regards to our senior faculty members, friends, and colleagues for their valuable guidance, moral support, and help in submitting this paper.

References

1. Li S, Da Xu L, Zhao S (2015) The internet of things: a survey. Inf Syst Front 17(2):243-256
2. Seven things to know about the Internet of Things (IoT) and Industry 4.0. https://www.mmsonline.com/articles/7-things-to-know-about-the-internet-of-things-and-industry-40. Last accessed 30 Jan 2015
3. Tseng ML, Tan RR, Chi AS, Chien CF, Kuo TC (2018) Circular economy meets industry 4.0: can big data drive industrial symbiosis? Resour Conserv Recycl 131:146-147
4. Zanero S (2017) Cyber-physical systems. Computer (Long Beach Calif) 50(4):14-16
5. Miorandi D, Sicari S, De Pellegrini F, Chlamtac I (2012) Internet of things: vision, applications and research challenges. Ad Hoc Netw 10(7):1497-1516


6. Wang Y, Kung L, Wang WYC, Cegielski CG (2018) An integrated big data analytics-enabled transformation model: application to health care. Inf Manag 55(1):64-79
7. Internet of Things (IoT) connected devices installed base worldwide from 2015 to 2025 (in billions). https://www.statista.com/statistics/471264/iotnumber-of-connected-devices-worldwide/
8. Conti M, Dehghantanha A, Franke K, Watson S (2018) Internet of Things security and forensics: challenges and opportunities. Future Gener Comput Syst 78(2):544-546
9. Da Xu L, He W, Li S (2014) Internet of things in industries: a survey. IEEE Trans Industr Inf 10(4):2233-2243
10. What is 5G, and how fast will it be? https://www.howtogeek.com/340002/what-is-5g-andhow-fast-will-it-be/
11. 5G vs 4G: what is the difference? https://www.raconteur.net/technology/4g-vs-5g-mobile-technology
12. Leo M, Battisti F, Carli M, Neri A (2014) A federated architecture approach for Internet of Things security. In: Euro Med Telco conference (EMTC). IEEE, pp 1-5
13. Al-Fuqaha A, Guizani M, Mohammadi M, Aledhari M, Ayyash M (2015) Internet of things: a survey on enabling technologies, protocols, and applications. IEEE Commun Surv Tutor 17(4):2347-2376
14. Qin Z, Denker G, Giannelli C, Bellavista P, Venkatasubramanian N (2014) A software defined networking architecture for the internet-of-things. In: Network operations and management symposium (NOMS). IEEE, pp 1-9
15. Abreu DP, Velasquez K, Curado M, Monteiro E (2017) A resilient Internet of Things architecture for smart cities. Ann Telecommun 72(1-2):19-30
16. Matias J, Garay J, Toledo N, Unzilla J, Jacob E (2015) Toward an SDN-enabled NFV architecture. IEEE Commun Mag 53(4):187-193
17. Atzori L, Iera A, Morabito G (2010) The internet of things: a survey. Comput Netw 54(15):2787-2805
18. Li J, Zhang Y, Chen YF, Nagaraja K, Li S, Raychaudhuri D (2013) A mobile phone based WSN infrastructure for IoT over future internet architecture. In: IEEE international conference on green computing and communications and IEEE internet of things and IEEE cyber, physical and social computing. IEEE, pp 426-433
19. Zhou J, Leppanen T, Harjula E, Ylianttila M, Ojala T, Yu C, Jin H, Yang LT (2013) CloudThings: a common architecture for integrating the internet of things with cloud computing. In: Proceedings of the IEEE 17th international conference on computer supported cooperative work in design (CSCWD). IEEE, pp 651-657
20. Yue H, Guo L, Li R, Asaeda H, Fang Y (2014) DataClouds: enabling community-based data-centric services over the Internet of Things. IEEE Internet Things J 1(5):472-482
21. Pöhls HC, Angelakis V, Suppan S, Fischer K, Oikonomou G, Tragos EZ, Rodriguez RD, Mouroutis T (2014) RERUM: building a reliable IoT upon privacy- and security-enabled smart objects. In: Wireless communications and networking conference workshops (WCNCW). IEEE, pp 122-127
22. Atzori L, Iera A, Morabito G, Nitti M (2012) The social internet of things (SIoT): when social networks meet the internet of things: concept, architecture and network characterization. Comput Netw 56(16):3594-3608
23. Will 5G wireless networks make every internet thing faster and smarter? https://qz.com/179794/will5g-wireless-networks-make-every-internetthing-faster-and-smarter/. Last accessed 14 Jan 2018
24. Wen Q, Dong X, Zhang R (2012) Application of dynamic variable cipher security certificate in internet of things. In: 2nd International conference on cloud computing and intelligent systems (CCIS), vol 3. IEEE, pp 1062-1066
25. Zhao K, Ge L (2013) A survey on the internet of things security. In: 9th International conference on computational intelligence and security (CIS). IEEE, pp 663-667


26. Yang Y, Wu L, Yin G, Li L, Zhao H (2017) A survey on security and privacy issues in internet-of-things. IEEE Internet Things J 4(5):1250-1258
27. Ericsson Homepage. https://www.ericsson.com/en/digital-services/trending/network-slicing

Part III

Data Sciences

Chapter 53

Robust Image Watermarking Using DWT and Artificial Neural Network Techniques

Anoop Kumar Chaturvedi, Piyush Kumar Shukla, Ravindra Tiwari, Vijay Kumar Yadav, Sachin Tiwari, and Vikas Sakalle

1 Introduction

The web, generally, is a user-friendly place where people are keen on downloading images, music, and videos, and it provides an efficient, highly inexpensive delivery system. Acquiring digital content (image, video, audio) over the web takes a fraction of the time it would take to go to a physical store to buy that content. Additionally, content purchased over the web only requires virtual storage space rather than a shelf or wherever such content might otherwise be kept. On the other hand, such ready availability attracts people with the possibility of copyright infringement [1]. The technology that content owners first applied to protect their material is cryptography; since it was first used, it has been the most common as well as the most developed method of protection. A collection of files is encrypted using an encryption key, and the files are then distributed to paying clients [2]. Finally, the client uses a decryption key, provided by the publisher, to access the set of files. The risk of someone acquiring the set of encrypted files is considered acceptable, provided that the decryption key is available only to paying clients. However, what is to prevent a paying client from distributing the set



of files once it has been decrypted? Once a paying client obtains the decryption key, that client can distribute the set of files freely via the web [3]. In other words, while cryptography can protect files from interception, the technology cannot protect files from the end user. A number of approaches have been proposed by researchers for protecting the rights of the data owner. Among these, invisible watermarking has been widely adopted: the carrier or sensitive data is embedded with some secret signature content, where the carrier signal may be an image, video, audio, etc. [4, 5]. One of the crucial causes of copyright concern is the simplicity and easy reach of the global computer network, together with tools that can modify content according to a client's requirements. The rest of this paper is organized as follows. Section 2 surveys the watermarking methods adopted by researchers [18]. Section 3 presents the proposed methodology with a block diagram. Section 4 covers the experiments and results, comparing the values obtained for the different evaluation parameters [17]. Finally, the work is concluded in Sect. 5.

2 Literature Review

Yamato et al. [6] proposed a between-class variance (BCV) scheme for embedding a watermark image into the edge regions of a cover image. The cover is first converted into a matching pattern using discriminant analysis, after which the BCV method classifies pixels into edge and non-edge areas, and the watermark is hidden by altering the spatial (edge) regions of the photograph. However, edge regions offer little embedding capacity, and secret data placed there is relatively easy to trace.

Piper et al. [7] generate the watermark from the input picture itself and insert it into the low-frequency band of the picture. This fragile watermarking scheme guards pictures against JPEG compression attacks, but it does not cover other kinds of attack, and its execution time is high.

Abdullah et al. [8] proposed a watermarking approach that embeds binary information in the DCT mid-band frequency region. The picture is divided into fixed-size blocks, and for each watermark bit the coordinate values of the DCT mid-band coefficients are interchanged. The interchange depends on specific conditions, and the same conditions must be kept at the receiver side for watermark extraction. This work suffers from low watermark capacity and weak resistance against spatial attacks.

Khalilian et al. [9] proposed a fractal-code-based image reconstruction algorithm in which the data image is transmitted over a heavily lossy channel, so that lost data can be recovered from further bundles of fractal code. Tampering with the image is also guarded against by hashing a secret key. This work improves performance in lossy conditions, but it requires extra data transmission and adds computational complexity.

Huang et al. [12] proposed a novel blind watermarking formula using a back-propagation neural network in the wavelet domain. A diverse watermark is embedded by exploiting the Human Vision System (HVS) to achieve better imperceptibility and robustness, and the neural network is used to preserve the link between the embedded watermark and the corresponding watermarked pictures. Huang et al. [12] also proposed a robust scheme in the undecimated discrete wavelet transform (UDWT) domain that uses a fuzzy SVM for geometric-distortion correction. Although the method gives acceptable robustness, it requires very high computational time and is not robust to local geometric distortions.

3 Proposed Methodology

This section describes the methodology for embedding secret data into the carrier (sensitive) image. The block diagram in Fig. 1 gives a graphical flow of the proposed work, whose two parts are the embedding of the secret data and the extraction of that data. Security of the secret data against intruders is enhanced by a data-scrambling method known as inverse S-order [7, 8], while a swapping concept used during embedding improves security against spatial attacks.

3.1 Pre-processing

The set of image pixels is pre-processed so that the Discrete Wavelet Transform (DWT) frequency values can be fetched. The input is transformed into a two-dimensional matrix, the image matrix is converted into a square matrix, and the pixel colour values are estimated in this step according to the input image format.

3.2 DWT Feature

This work uses the DWT frequency feature and embeds the watermark in the LL region of the image. This block is obtained by filtering the image rows with a low-pass filter and then passing the result through the same low-pass filter along the columns. The LL block contains the flat regions of the image, with no edge information, so it is termed the approximate version of the image; the effect of attack operations on this band is very low.
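As an illustration (not part of the original paper, which used MATLAB's built-in functions), a minimal Python sketch of the LL-band extraction using the PyWavelets package:

import numpy as np
import pywt  # PyWavelets

# Single-level 2-D DWT: low-pass filtering along rows and then columns
# yields LL, the "approximate" sub-band used for embedding in this scheme.
image = np.random.rand(256, 256)          # stand-in for a 256 x 256 cover image
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

# ... embed the watermark bits into LL here ...

# The inverse DWT rebuilds the watermarked image from the modified LL band.
watermarked = pywt.idwt2((LL, (LH, HL, HH)), 'haar')

Only LL is modified during embedding; the other three sub-bands pass through unchanged to the inverse transform.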


Fig. 1 Block diagram of proposed work

3.3 Watermark Binary Conversion

In this step, the watermark data is read and pre-processed to a fixed dimension; the data is then converted into binary format, where each bit is either black or white.


3.4 Inverse S-Order

In this step, all the colour channels (red, green, and blue) are merged into a single one-dimensional vector S, into which the pixel values are inserted in inverse S-order. This can be understood from the example below, where Fig. 2a represents the original image matrix and Fig. 2b its S-order rearrangement. In this way, all image pixel values are arranged in a single vector S whose ordering follows the inverse S-order. For a colour image, the red matrix is inserted first, then the green, and finally the blue.

(a) Original matrix:

4 5 6 4
5 6 6 6
4 7 8 9
5 4 6 9

(b) S-order rearrangement (alternate rows reversed):

4 5 6 4
6 6 6 5
4 7 8 9
9 6 4 5

S = [4, 5, 6, 4, 6, 6, 6, 5, 4, 7, 8, 9, 9, 6, 4, 5]

Fig. 2 Inverse S-order representation
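For clarity, a small Python sketch of the traversal described above (NumPy is used only for convenience):

import numpy as np

def inverse_s_order(matrix):
    """Flatten a 2-D matrix in (inverse) S-order: odd rows left-to-right,
    even rows right-to-left, as in the scrambling step of this scheme."""
    rows = []
    for i, row in enumerate(np.asarray(matrix)):
        rows.append(row if i % 2 == 0 else row[::-1])
    return np.concatenate(rows)

a = np.array([[4, 5, 6, 4],
              [5, 6, 6, 6],
              [4, 7, 8, 9],
              [5, 4, 6, 9]])
print(inverse_s_order(a))  # -> [4 5 6 4 6 6 6 5 4 7 8 9 9 6 4 5]

For a colour image, the R, G, and B channels would each be flattened this way and concatenated.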

Fig. 3 Dataset representation


3.5 Embedding of Watermark

Each pixel of the input watermark matrix is either black (0) or white (1), so each pixel represents one class of the watermark. The S-order vector of the LL band is reshaped into an N × 8 matrix such that N × 8 equals the number of LL coefficients. If a black pixel is to be hidden, one row is read from the N × 8 matrix and the four values on its left-hand side are increased; if a white pixel is to be hidden, the next row is read and the four values on its right-hand side are increased. Thus, if the watermark has 1024 pixels, a total of 1024 × 8 pixel values of the LL band of the cover image are affected.
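A hedged sketch of this embedding rule follows; the increment size delta is an assumption, since the paper does not state the exact magnitude of the increase:

import numpy as np

def embed_bits(s_vector, bits, delta=1):
    """Sketch of the rule above: the scrambled LL vector is reshaped to
    N x 8; a black bit (0) raises the left four values of the next row,
    a white bit (1) raises the right four."""
    m = s_vector.reshape(-1, 8).astype(float)
    for row, bit in enumerate(bits):
        if bit == 0:
            m[row, :4] += delta     # black watermark pixel
        else:
            m[row, 4:] += delta     # white watermark pixel
    return m.reshape(-1)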

3.6 Training of EBPNN Neural Network

Each affected row that takes part in hiding the watermark is used for training, with the corresponding watermark pixel as the output. This set of input and output vectors is passed to the neural network [13–15]: each embedded vector is fed into a three-layer network together with the output vector (the watermark bit), so the neurons are trained on the exact embedding behaviour.

Consider a neural network with three layers, where the input layer is denoted by i, the hidden layer by j, and the output layer by k. The weights between the input and hidden layers are denoted wij, and those between the hidden and output layers wjk. The output of one layer to the next depends on the following equation, where x is the input, w the weight between layers, and θ the bias value:

Xj = Σ (i = 1 to n) xi · wij − θj      (1)

where n is the number of nodes at layer i. The error is then evaluated from the network output in order to adjust the neuron weights; this adjustment is termed the learning of the neural network. The error corresponding to the input data is estimated as the difference between the desired output d and the output y obtained at the output layer:

ek(n) = dk(n) − yk(n)      (2)

Using the cross-entropy error, where y is the desired value and O the obtained output,

Ei = −[yi · log(Oi) + (1 − yi) · log(1 − Oi)]      (3)

the partial derivative that drives the learning is computed for every node of the output layer:

∂Ei/∂Oi = −(yi/Oi) + (1 − yi)/(1 − Oi)      (4)

The derivative is likewise estimated with respect to the layer weights, i.e. for the hidden-layer outputs and the weights between the hidden and output layers:

Σ (i = 1 to n) ∂Hi/∂Wi(j,k) = ∂(Hi(output) · Wi(j,k))/∂Wi(j,k)      (5)

The final derivative is obtained by the chain rule, multiplying the individual derivatives:

∂Ei/∂Wi = (∂Ei/∂Oi) · (∂Oi/∂Hi) · (∂Hi/∂Wi)      (6)

The weight update ΔWi is then assembled by arranging the computed partial derivatives in a matrix:

ΔWi = | ∂E1/∂W1,1  ∂E2/∂W1,2  ∂E3/∂W1,3 |
      | ∂E1/∂W2,1  ∂E2/∂W2,2  ∂E3/∂W2,3 |      (7)
      | ∂E1/∂W3,1  ∂E2/∂W3,2  ∂E3/∂W3,3 |

The weights between layers are updated by adding Δwij:

wij = wij + Δwij      (8)

The weight updates stop when the obtained error is zero or very close to it; alternatively, a fixed number of iterations can be set to stop the updates.
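A minimal NumPy sketch of the three-layer back-propagation training summarized by Eqs. (1)–(8); the hidden-layer size, learning rate, and epoch count are illustrative choices, not the paper's settings:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Three-layer network (input -> hidden -> single output) trained with
    cross-entropy loss: forward pass, output error, chain-rule gradients,
    and additive weight updates, following Eqs. (1)-(8)."""
    rng = np.random.default_rng(seed)
    w_ij = rng.normal(0, 0.5, (X.shape[1], hidden))   # input -> hidden weights
    w_jk = rng.normal(0, 0.5, (hidden, 1))            # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ w_ij)                 # Eq. (1): hidden activations
        O = sigmoid(H @ w_jk)                 # network output
        dO = O - y                            # Eqs. (2)-(4) combined for sigmoid + cross-entropy
        dw_jk = H.T @ dO                      # Eqs. (5)-(6): hidden->output gradient
        dH = (dO @ w_jk.T) * H * (1 - H)      # back-propagated hidden error
        dw_ij = X.T @ dH                      # Eq. (7): input->hidden gradient matrix
        w_jk -= lr * dw_jk / len(X)           # Eq. (8): weight updates (delta carries the sign)
        w_ij -= lr * dw_ij / len(X)
    return w_ij, w_jk

Here X holds the embedded feature rows and y the corresponding watermark bits as a column vector.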

3.7 Embedded Image

After embedding the watermark in the LL portion of the image, the N × 8 matrix is resized back to the dimensions of the LL band. The inverse discrete wavelet transform is then applied, taking the modified LL together with the LH, HL, and HH matrices as input, to produce the embedded image.


4 Experiment and Result

All computations and measurements of the proposed approach were executed on the MATLAB platform, which provides compatible built-in functions for image-processing operations. The experimental setup used an Intel Core i3 processor with 4 GB of random access memory.

4.1 Dataset

A real image dataset was obtained from a reliable resource, the USC-SIPI database (https://sipi.usc.edu/database/?volume=misc). The dataset contains images in both colour and grayscale formats, and the experiments were performed on images of dimension 256 × 256.

4.2 Evaluation Parameters

4.2.1 Peak Signal-to-Noise Ratio (PSNR)

PSNR = 10 · log10(Max_pixel_value² / Mean_square_error)      (9)

4.2.2 Signal-to-Noise Ratio (SNR)

SNR = 10 · log10(Signal / Noise)      (10)

4.2.3 Extraction Rate

n = (nc / na) × 100      (11)

Here nc is the count of correctly extracted watermark bits and na is the total number of watermark bits embedded.

4.2.4 Normalized Correlation (NC)

The normalized correlation between two images of size M × N is given by the following expression. Its value lies in the interval [0, 1]; the closer the NC value is to 1, the higher the correlation between the two images:

NC = ΣΣ Org(i,j) · Wt(i,j) / sqrt( ΣΣ Org(i,j)² · ΣΣ Wt(i,j)² )      (12)

where the sums run over i = 1..M and j = 1..N, Org is the original image, and Wt the watermarked image.
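For reference, Eqs. (9) and (12) can be computed as follows (a sketch; the paper's own implementation was in MATLAB):

import numpy as np

def psnr(original, processed, max_val=255.0):
    """Eq. (9): peak signal-to-noise ratio in dB."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def normalized_correlation(org, wt):
    """Eq. (12): NC in [0, 1]; 1 means the images are perfectly correlated."""
    org, wt = org.astype(float), wt.astype(float)
    return (org * wt).sum() / np.sqrt((org ** 2).sum() * (wt ** 2).sum())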

4.3 Results

The PSNR values in Table 1 show that, in the no-attack environment, the proposed neural-network-based work performs better than the previous work in [8]. Using the DWT position for embedding together with the swapping concept increases the PSNR by reducing the number of modifications.

The SNR values in Table 2 likewise show that, under no attack, the proposed work outperforms the previous work in [8]; the same embedding strategy raises the SNR by reducing the number of modifications.

The data-extraction times in Table 3 show that the proposed work is also faster than the previous work in [8]: using a neural network for extraction reduces the extraction time of the model, since a single pass of a feature vector through the trained network yields the corresponding watermark bit directly.

From Tables 4, 5, 6 and 7 it can be observed that, under various spatial as well as geometric attacks, the proposed model improves every evaluation parameter compared with the existing work [8]. The use of the neural network for extraction reduces the extraction time, embedding in the LL band of the DWT reduces the chance of an attack affecting the watermark data, and training the neural network during embedding increases the watermark detection ratio even under different types of attack.

Table 1 PSNR-based comparison between the proposed and existing work

Digital images | Proposed work | Existing work [8]
Mandrill       | 91.5686       | 49.5745
Tree           | 88.7644       | 49.7886
Lena           | 92.7047       | 49.6257

Table 2 SNR-based comparison between the proposed and existing work

Digital images | Proposed work | Existing work [8]
Mandrill       | 33.9577       | 2.41747
Tree           | 32.2287       | 2.63152
Lena           | 35.663        | 2.46871

Table 3 Data-extraction time comparison between the proposed and existing work

Digital images | Proposed work | Existing work [8]
Mandrill       | 2.21446       | 12.6163
Tree           | 2.562         | 13.964
Lena           | 2.49373       | 15.3259

Table 4 PSNR-based comparison between the proposed and existing work under various attacks

Digital images | Proposed work | Existing work [8]
Noise attack
Mandrill       | 71.9962       | 0.074046
Tree           | 72.0123       | 0.067616
Lena           | 72.0381       | 0.034046
Filter attack
Mandrill       | 74.3093       | 47.1842
Tree           | 81.2922       | 47.157
Lena           | 83.1182       | 47.157
Geometric attack
Mandrill       | 61.7811       | 15.7975
Tree           | 59.6248       | 15.7975
Lena           | 61.4197       | 15.8604

Table 5 SNR-based comparison between the proposed and existing work under various attacks

Digital images | Proposed work | Existing work [8]
Noise attack
Mandrill       | 14.3852       | −47.2311
Tree           | 15.4766       | −47.0894
Lena           | 14.9964       | −47.137
Filter attack
Mandrill       | 16.6984       | 0.027143
Tree           | 24.7564       | 0
Lena           | 26.0765       | 0
Geometric attack
Mandrill       | 4.17014       | −84.2025
Tree           | 3.08901       | −84.2025
Lena           | 4.37801       | −84.1396

Table 6 Extraction-rate comparison between the proposed and existing work under various attacks

Digital images | Proposed work | Existing work [8]
Noise attack
Mandrill       | 98.6111       | 43.0556
Tree           | 77.7778       | 43.0556
Lena           | 81.9444       | 43.0556
Filter attack
Mandrill       | 97.2222       | 58.3333
Tree           | 97.2222       | 43.0556
Lena           | 94.4444       | 48.6111
Geometric attack
Mandrill       | 54.1667       | 0
Tree           | 55.5556       | 0
Lena           | 55.5556       | 0

Table 7 NC-based comparison between the proposed and existing work under various attacks

Digital images | Proposed work | Existing work [8]
Noise attack
Mandrill       | 0.989         | 0.177187
Tree           | 0.802         | 0.177187
Lena           | 0.854         | 0.177
Filter attack
Mandrill       | 0.977         | 0.19935
Tree           | 0.977         | 0.226375
Lena           | 0.952         | 0.204058
Geometric attack
Mandrill       | 0.725         | 0.215597
Tree           | 0.738         | 0.209357
Lena           | 0.738         | 0.175402

5 Conclusions

The work proposed in this paper develops a watermarking technique that utilizes the frequency features of the cover image. To increase robustness, the inverse S-order technique is applied before the watermark is embedded in the LL band of the DWT, and each embedded watermark vector is used to train the neural network, which increases watermark detection accuracy under various spatial and geometric attacks. Experiments on a real dataset show that the proposed work improves the PSNR under the no-attack condition by 45.43%, while the SNR values rise by 31.44 dB. It was also observed that the average normalized correlation is 0.9251 under spatial attacks and 0.7336 under geometric attacks.


References

1. Cao X, Du L, Wei X, Meng D, Guo X (2015) High capacity reversible data hiding in encrypted images by patch-level sparse representation. IEEE Trans Cybern 46(5):1132–1143
2. Su Q, Liu D, Yuan Z, Wang G, Zhang X, Chen B, Yao T (2019) New rapid and robust color image watermarking technique in spatial domain. IEEE Access 7:30398–30409
3. Porwal P, Ghag T, Poddar N, Tawde A (2014) Digital video data hiding using modified LSB and DCT technique. Int J Res Eng Technol 3(4):630–634
4. Chimanna MA, Khot SR (2013) Digital video data hiding techniques for secure multimedia creation and delivery. Int J Eng Res Appl (IJERA) 3(2):839–844
5. Zhu C, Sun K (2018) Cryptanalyzing and improving a novel color image encryption algorithm using RT-enhanced chaotic tent maps. IEEE Access 6:18759–18770
6. Yamato K, Hasegawa M, Tanaka Y, Kato S (2012) Digital image watermarking using between-class variance. In: International conference on image processing, vol 21, no 5, pp 2185–2188
7. Piper A, Safavi-Naini R (2013) Scalable fragile watermarking for image authentication. IET Inf Secur 7(4):300–311
8. Abdullah MA, Dlay SS, Woo WL, Chambers JA (2016) A framework for iris biometrics protection: a marriage between watermarking and visual cryptography. IEEE Access 4:10180–10193
9. Khalilian H, Bajic IV (2013) Video watermarking with empirical PCA-based decoding. IEEE Trans Image Process 22(12):4825–4840
10. Xiaochun C, Ling D, Xingxing W, Dan M, Xiaojie G (2016) High capacity reversible data hiding in encrypted images by patch-level sparse representation. IEEE Trans Cybern 46(5):1132–1143
11. Sun Y, Sui X, Gu G, Liu Y, Xu S (2016) Compressive super-resolution imaging based on scrambled block Hadamard ensemble. IEEE Photonics J 8(2):1–8
12. Huang S, Zhang W, Feng W, Yang H (2008) Blind watermarking scheme based on neural network. In: Proceedings of the 7th IEEE world congress on intelligent control and automation, pp 5985–5989
13. Abd El-Latif AA, Abd-El-Atty B, Hossain MS, Rahman MA, Alamri A, Gupta BB (2018) Efficient quantum information hiding for remote medical image sharing. IEEE Access 6:21075–21083
14. Nagai Y, Uchida Y, Sakazawa S, Satoh SI (2018) Digital watermarking for deep neural networks. Int J Multimedia Inf Retrieval 7(1):3–16
15. Shareef AQ, Fadel RE (2014) An approach of an image watermarking scheme using neural network. Int J Comput Appl 92(1):44–48
16. Haribabu K, Subrahmanyam GR, Mishra D (2016) A robust digital image watermarking technique using auto encoder based convolutional neural networks. In: IEEE workshop on computational intelligence: theories, applications and future directions (WCI), pp 1–6
17. Chaturvedi AK, Shukla PK, Yadav VK, Tiwari S, Tiwari R (2019) Skew tent map based secure non-separable reversible data hiding on histogram modification in inverse S-order. Int J Sci Technol Res (IJSTR) 8(10):2962–2969
18. Chaturvedi AK, Shukla PK (2018) Non-separable histogram based reversible data hiding approach using inverse S-order and skew tent map. Int J Comput Sci Inf Secur (IJCSIS) 16(1):112–127
19. Singh P, Shivani S, Agarwal S (2014) A chaotic map based DCT-SVD watermarking scheme for rightful ownership verification. In: 2014 Students conference on engineering and systems. IEEE, Allahabad, pp 1–4

Chapter 54

Fraud Detection in Anti-money Laundering System Using Machine Learning Techniques

Ayush Kumar, Debachudamani Prusti, Daisy Das, and Shantanu Kumar Rath

1 Introduction

Globally, financial institutions are affected in diverse ways by money laundering (ML) activities and are implementing numerous ways to combat them [1]. ML has become a highly troublesome activity, as it indirectly finances terrorism and organized crime [2]. The process of intercepting these laundering transactions is called anti-money laundering (AML), and it becomes more difficult as technology advances in financial institutions: the number of daily transactions is immense, and identifying fraud manually is hard. This study recommends an architecture by which financial institutions can implement a system to discover fraudulent behaviour in historical transaction data. Once the transaction patterns have been distinguished, suspicious transactions can be flagged and corrective actions initiated. Broadly, AML activities are divided into two parts: first, suspected ML activities are identified by continuous monitoring, and second, various measures are imposed to stop or intercept these incidents; the two go hand in hand. ML is a multi-step process that must be performed so as to evade the attention of the people involved. It consists of three main activities, namely inducing, layering, and assimilation, showcased in Fig. 1 [2].

Fig. 1 Steps of money laundering

In the inducing step, the dirty money, i.e. wealth acquired by illegal means, is brought into the channel; the whole process is performed to reduce suspicion about the origin of the funds. In the next (layering) step, the suspiciousness of the funds is reduced by juggling them across numerous administrations and institutions; this can be accomplished using wire transfers, banks, and international money exchanges, and demands the utmost attention so as to evade the notice of the persons involved. The last step is the most important: the funds are invested in growing and under-developed economies so that they can be interpreted as legitimate, and after escaping detection by the relevant adjudicators this transformed money is transferred to its master. ML ventures may occur anywhere, because there are numerous techniques to launder money [3]. In this study, AML detection issues for doubtful and suspicious transactions in financial institutions, particularly retail or monetary ones, are analysed. An AML solution comprises fraud-control methods that reduce the old-fashioned practice of manual checking. Presently, most financial institutions depend on rule-based systems that separate doubtful transactions according to previously constituted static rules, but it is difficult to be convinced of the legitimacy and intent behind a transaction. Hence, this study focuses on finding a reliable and cost-effective option for determining fraudulent transactions from the given historical data using machine learning techniques. With advancing technology, lawbreakers explore innovative and multifaceted means to deceive the system [3]: connecting multiple nations and territories, creating counterfeit or shell businesses, stealing identities, and misusing monetary institutions to carry out wrongful activities in society. The steps of AML, manifested in Fig. 2, are precaution, detection, and enforcement. The first is precaution, or anticipation, in which institutions and organizations must ensure various guidelines (directives) in doing business and dealing


Fig. 2 Anchors of anti-money laundering

with customers. Well-known routines such as maintaining a detailed customer profile are widely practised these days: the customer's documents are verified and a background inspection is done before any business can take place, and only after proper verification by the institution can the individual use the amenities. In the detection step, patterns or exemplars are recognized from the features of earlier transactions in order to flag new transactions in real time. The ultimate step of the AML process is enforcement, in which transactions flagged as positive for ML are forwarded for inspection; an investigation is performed, and if the transaction is observed to be fraudulent, the capital is seized and a hearing is brought against the particular person. These three activities act as anchors for the comprehensive AML process.

2 AML System Architecture

In this study, an architecture for the proposed AML detection software is presented in Fig. 3. Suspicious transactions are processed by a chosen machine learning algorithm whose parameters are tuned on the basis of the proportion between the training and evaluation datasets, so as to give the most reliable accuracy. Incoming transaction data is scrutinized in real time by the current algorithm, which flags each transaction as fraudulent or legitimate. Among the flagged transactions, the illicit ones are noted and sent for further scrutiny in order to authenticate the ML aspects.


Fig. 3 AML system architecture prototype

Fig. 4 SVM and logistic regression example

3 Machine Learning Techniques

The dataset identified in the Kaggle repository has a Fraud parameter, a two-class Boolean variable that can take the states 0 or 1, as shown in Fig. 9 [2]: 0 indicates that the transaction is not fraudulent and 1 that it is. Because this target feature is binary, machine learning techniques such as SVM, logistic regression, averaged perceptron, neural networks, decision trees, and random forests can yield good performance [3].


Machine learning techniques can broadly be classified into two categories: supervised and unsupervised learning. In the supervised category everything is predefined: the data is well labelled, meaning that some of it is already designated with the correct results. The system is fitted with past transactions, so the supervised learning algorithm considers the training data (a collection of training samples) and produces the correct outcome from the labelled data; SVM, logistic regression, and decision trees fall into this category. In unsupervised learning, the system is trained on information that is neither classified nor labelled, and the algorithm operates on it without supervision [4]: the program groups unordered information according to similarities, relationships, patterns, and variations, without any prior knowledge of training labels. The implemented techniques are discussed below.

3.1 Support Vector Machine (SVM)

An SVM is a discriminative classifier formally represented by a separating hyperplane [5]: the algorithm outputs an optimal hyperplane that classifies new examples into separate classes. In two-dimensional space this hyperplane is a line dividing the plane into two portions, with each class lying on one side [6].

3.2 Logistic Regression

Logistic regression is a classification algorithm used to assign observations to a discrete set of classes [3]. Unlike linear regression, which outputs continuous values, logistic regression moulds its output with the logistic sigmoid function to deliver a probability value, which can then be mapped to two or more discrete classes [7].

3.3 Average Perceptron

This approach is a simplistic version of a neural network: inputs are categorized into several possible outputs by a linear function combined with a set of weights obtained from the feature vector [1]. The model is suited to determining linearly separable patterns, whereas neural networks (particularly deep neural networks) can model more complicated class boundaries. Nevertheless, a perceptron is more efficient, since it processes data serially and can be used with continuous training [8].


3.4 Neural Network

A neural network is a collection of algorithms that attempts to learn the underlying connections in a dataset in a way that simulates the behaviour of a brain [7]. It consists of arrangements of neurons, natural or artificial, that process information so as to produce the most favourable outcome. In this study, multiple types of neural network have been implemented: the Artificial Neural Network (ANN), the Convolutional Neural Network (CNN), and the Recurrent Neural Network (RNN) [7, 9–11].

Artificial Neural Network (ANN). Also known as a connectionist system, the ANN is motivated by the operation of a biological brain [12], which learns its task by examining and studying the examples provided to it, programmed with task-specific rules. An ANN is thus a combination of connected members called artificial neurons, which model the neurons of a biological brain; each connection is comparable to a synapse, which can transfer signals to the next neuron, and each neuron can receive a signal, process it, and forward it onward. The example ANN shown in Fig. 5b has three layers, input, hidden, and output, with reciprocity between them. The input nodes contribute information from the external environment and are collectively referred to as the input layer [11]; no computation is done at the input nodes, which pass their values to the hidden nodes. Hidden nodes are not directly connected to the outside world, and their collection forms the hidden layer; the number of hidden layers is not fixed. The computations performed by the hidden layers are passed to the output layer, which conveys the learning of the network to the outside world. A feed-forward network has multiple hidden layers and a single input and output layer.

Fig. 5 Perceptron and artificial neural network example


Fig. 6 Convolutional neural network composition

Convolutional Neural Network (CNN). The CNN is substantially related to the conventional neural network described in the earlier section [13]. These networks are composed of neurons with learnable weights and biases: each neuron receives data, performs a dot product, and optionally applies a non-linear computation, and the whole network expresses a single differentiable score function from the raw data over its classes. Convolution is a specific kind of linear operation in which one function modifies another; CNNs are neural networks that use a convolution in place of general matrix multiplication in at least one of their layers [14]. Figure 6 shows a basic convolutional neural network whose result is a two-class variable, matching the dataset used in this study.

Recurrent Neural Network (RNN). The RNN uses the output of the preceding step as part of the input to the current step, whereas in a conventional neural network the inputs and outputs are independent [15]. This feedback is helpful when the next item must be predicted from previously observed information. An important feature of the RNN is the hidden state, which acts like a memory cell to store information: it retains almost all of the knowledge the network has accumulated. All hidden layers perform the same task with the same parameters at each step, which makes this network less complicated than its counterparts. A basic RNN structure is shown in Fig. 7, where the output of the second hidden layer feeds back into the first hidden layer; this combination of hidden layers acts as the memory cell explained above.

3.5 Decision Tree

A decision tree has a basic tree-type structure, quite similar to a graph, in which each node acts as an independent decision. Each node signifies utility, resource expense, and a test on a particular feature of the dataset [16], and each branch of the tree signifies a particular class that is unique in the dataset.


Fig. 7 A simple recurrent neural network

Fig. 8 Decision tree and random forest example

A decision tree is well suited to fraud detection in incoming transactions [16]. An example is illustrated in Fig. 8a, where each node of the tree acts as a conditional statement.

3.6 Random Forest

A single decision tree sometimes overfits the data and thus does not deliver the best outcome; a collection (an accumulation) of decision trees is therefore used instead.


Fig. 9 Synthetic dataset from kaggle

Such a collection is called a random forest, or random decision forest [16, 17]. Random forests operate as an ensemble and are used for classification, regression, and other jobs: plenty of decision trees are accumulated during training, and the output is the mode of the classes predicted by the trees. A sample random forest is shown in Fig. 8b, where each of the N trees predicts a result and the results are then averaged to yield the most dependable decision.

4 Experimental Work

4.1 Dataset

The dataset utilized in this study is from the Kaggle repository [18]. It has 11 features and 6,362,620 rows, illustrated in Fig. 9, where each row represents a transaction performed by a user. Banking transaction data is very personal and secretive, so the Kaggle dataset is a synthetic one. The most important features are 'amount', 'type', 'isFraud', and 'isFlaggedFraud': 'amount' and 'type' are numeric in nature, while 'isFraud' and 'isFlaggedFraud' are Boolean.
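As an illustration of the experimental pipeline, a minimal scikit-learn sketch is given below. The file name is hypothetical, only the columns named above are used, and the study's own feature handling may differ:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical local copy of the Kaggle PaySim dataset [18].
df = pd.read_csv("paysim.csv")
X = pd.get_dummies(df[["amount", "type"]])   # one-hot encode the transaction type
y = df["isFraud"]

# 0.7 training proportion, matching one of the splits used in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

The same skeleton applies to the other classifiers by swapping the estimator.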

4.2 Experimental Setup

The computing system used to run the algorithms has an Intel Core i5-9300H central processing unit, an NVIDIA GTX 1660 Ti graphics card with 6 GB of memory, 16 GB of random access memory, and 1 TB of secondary storage.


5 Result Analysis

To measure the performance of the algorithms, several metrics have been considered: accuracy, F1-score, precision, sensitivity, specificity, and Matthews's correlation coefficient (MCC). Amongst them, accuracy is the most significant.

Accuracy = (TP + TN) / (TP + TN + FP + FN)      (1)

Precision = TP / (TP + FP)      (2)

Sensitivity (Recall) = TP / (TP + FN)      (3)

F1-Score = 2 · (Precision · Sensitivity) / (Precision + Sensitivity)      (4)

MCC = (TP · TN − FP · FN) / [(TP + FP) · (FN + TN) · (FP + TN) · (TP + FN)]^(1/2)      (5)

Specificity = TN / (TN + FP)      (6)

Accuracy is a performance parameter of a classification technique; it measures the correctness of the relationships and patterns discovered among the parameters of the dataset, based on the training data (1). Precision indicates how many of the predicted positive cases are actually positive (2). Recall, also termed sensitivity, determines how many of the actual positives the model captures by labelling them positive (true positives) (3); in fraud detection, for example, predicting a fraudulent transaction (actual positive) as non-fraudulent (predicted negative) can be critical. The F1-score is the harmonic mean of precision and recall (4). MCC measures the quality of binary and multi-class classifications (5). Specificity is the proportion of actual negatives that are predicted as negative (true negatives) (6); correspondingly, the proportion of actual negatives predicted as positive is the false-positive rate. The various classification techniques were implemented and critically assessed on the identified dataset; the results are displayed in Fig. 10 and Table 1. The table shows the proposed techniques and a comparison with the state of the art. The proportion of the training to the evaluation dataset is varied from 0.5 to 0.9, where 0.7 signifies that the training data is 70% of the complete dataset and the evaluation data the remaining 30%.
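The six measures can be computed directly from the confusion-matrix counts, as in the following sketch:

import math

def binary_metrics(tp, tn, fp, fn):
    """Eqs. (1)-(6) computed from the confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "sensitivity": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (fn + tn) * (fp + tn) * (tp + fn)),
        "specificity": tn / (tn + fp),
    }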


Fig. 10 Accuracy versus training data

Table 1 Calculated parameters of the techniques applied

Parameters/techniques          | Accuracy | F1-score | Precision | Sensitivity | MCC
SVM (proposed)                 | 0.9675   | 0.0182   | 0.8366    | 0.0092      | 0.0858
SVM [7]                        | 0.9568   | 0.0105   | 0.7933    | 0.0899      | 0.2001
Average perceptron (proposed)  | 0.7632   | 0.0024   | 0.7908    | 0.0012      | 0.0247
Average perceptron [1]         | 0.7711   | 0.0031   | 0.6787    | 0.0011      | 0.0321
Logistic regression (proposed) | 0.9597   | 0.0155   | 0.8614    | 0.0078      | 0.0799
Logistic regression [3]        | 0.9601   | 0.0211   | 0.8111    | 0.0069      | 0.0687
Decision tree (proposed)       | 0.9820   | 0.0378   | 0.9324    | 0.0193      | 0.1328
Decision tree [1]              | 0.9745   | 0.0299   | 0.9199    | 0.0271      | 0.1198
Random forest (proposed)       | 0.9982   | 0.3968   | 0.9237    | 0.2527      | 0.4828
Random forest [5]              | 0.9422   | 0.3546   | 0.9122    | 0.2112      | 0.4765
ANN                            | 0.8417   | 0.0032   | 0.7033    | 0.0016      | 0.0282
CNN                            | 0.8575   | 0.0041   | 0.8423    | 0.0021      | 0.0374
RNN                            | 0.8175   | 0.0033   | 0.8761    | 0.0017      | 0.0335

In Fig. 10 it may be observed that logistic regression and SVM exhibit their best accuracy when the ratio of the training to the evaluation dataset is 0.7, while the other techniques peak at 0.8. The average perceptron technique shows the lowest accuracy, 0.7632, while the random forest (RF) presents the highest, 0.9982. The accuracies of the ANN, CNN, and RNN are quite close to one another at 0.8417, 0.8575, and 0.8175, respectively. The parameters discussed above have been computed and tabulated in Table 1. The computed specificity of all the techniques was alike, with a value of 0.9999, and it is omitted from the table to avoid data redundancy.


6 Conclusion

In this study, eight machine learning techniques were empirically studied for an AML system. From their performance it can be concluded that the RF technique delivers the best accuracy of all the techniques considered, while the averaged perceptron results in the least. For future work, collaboration with a financial institution would allow real-time transaction data to be utilized, giving more insight into the viable types of money laundering cases and incidents, and distinct techniques can be exploited to obtain an extensive analysis of the numerous ways money laundering can be carried out in the current scenario.

References

1. Chen Z, Teoh EN, Nazir A, Karuppiah EK, Lam KS (2018) Machine learning techniques for anti-money laundering (AML) solutions in suspicious transaction detection: a review. Knowl Inf Syst 57(2):245–285
2. Palshikar GK, Apte M (2014) Financial security against money laundering: a survey. In: Emerging trends in ICT security. Morgan Kaufmann, pp 577–590
3. Zhang Y, Trubey P (2019) Machine learning and sampling scheme: an empirical study of money laundering detection. Comput Econ 54(3):1043–1063
4. Duhart BA, Hernández-Gress N (2016) Review of the principal indicators and data science techniques used for the detection of financial fraud and money laundering. In: 2016 International conference on computational science and computational intelligence (CSCI). IEEE, pp 1397–1398
5. Keyan L, Yu T (2011) An improved support-vector network model for anti-money laundering. In: 2011 Fifth international conference on management of e-commerce and e-government. IEEE, pp 193–196
6. Tang J, Yin J (2005) Developing an intelligent data discriminating system of anti-money laundering based on SVM. In: 2005 International conference on machine learning and cybernetics, vol 6. IEEE, pp 3453–3457
7. Álvarez-Jareño JA, Badal-Valero E, Pavía JM (2017) Using machine learning for financial fraud detection in the accounts of companies investigated for money laundering. Tech. Rep
8. Lv LT, Ji N, Zhang JL (2008) A RBF neural network model for anti-money laundering. In: 2008 International conference on wavelet analysis and pattern recognition, vol 1. IEEE, pp 209–215
9. Wang S, Liu C, Gao X, Qu H, Xu W (2017) Session-based fraud detection in online e-commerce transactions using recurrent neural networks. In: Joint European conference on machine learning and knowledge discovery in databases. Springer, Cham, pp 241–252
10. Fu K, Cheng D, Tu Y, Zhang L (2016) Credit card fraud detection using convolutional neural networks. In: International conference on neural information processing. Springer, Cham, pp 483–490
11. Roy A, Sun J, Mahoney R, Alonzi L, Adams S, Beling P (2018) Deep learning detecting fraud in credit card transactions. In: 2018 Systems and information engineering design symposium (SIEDS). IEEE, pp 129–134
12. https://developers.google.com/machine-learning/crash-course/classification
13. https://cs231n.github.io/convolutional-networks/
14. https://www.deeplearningbook.org/contents/convnets.html
15. https://www.geeksforgeeks.org/introduction-to-recurrent-neural-network
16. Wang S-N, Yang J-G (2007) A money laundering risk evaluation method based on decision tree. In: 2007 International conference on machine learning and cybernetics, vol 1. IEEE, pp 283–286
17. Breiman L (2001) Random forests. Machine Learning 45(1):5–32
18. https://www.kaggle.com/ntnu-testimon/paysim1

Chapter 55

A Smart Approach to Detect Helmet in Surveillance by Amalgamation of IoT and Machine Learning Principles to Seize a Traffic Offender

Gaytri, Rishabh Kumar, and Uppara Rajnikanth

1 Introduction

With the growing economy there is immense production and adoption of two-wheelers by the general public, and with it comes the need to uphold traffic rules and to save precious human life. The helmet is one of the most significant protective devices for a biker, since two-wheelers are more susceptible to fatal injury than four-wheelers. For a bike or a scooter it is impractical to stipulate full-body protection, so it is mandatory for the biker to wear a helmet, and the law requires the pillion rider as well as the rider to wear one. A helmet is essentially a shell that protects the brain and the face of the rider, and its construction follows specific standards and design criteria that consider comfort and protection hand in hand. The design principles considered in the construction of helmets are:

(a) Reducing the risk of MTBI: MTBI stands for mild traumatic brain injury, and the following parameters are specified to reduce it:

• Facial structure maintenance: the construction protects the chin along with the head, thus maintaining the configuration of the face.
• Helmet shell: usually round in shape; to design it, the offset, i.e. the distance between the wearer's head and the shell, is measured to avoid MTBI.
• Shell shape: a computerized design in which the anatomical centre of gravity of the head is the main parameter, addressing the head-neck complexity.


(b) Permanency and firmness: this parameter covers the stability of the helmet on the rider's head; in other words, a loose helmet is always dangerous.

(c) Ease and luxury: the parameters that manufacturers include in helmets to provide comfort and ease are:

• Layered design
• Tough material and extra padding
• Lightweight design
• Ear channels provided in the helmet
• Ventilators for fresh air

Since it is a human tendency to break the law, this paper portrays a smart system that detects a rider without a helmet and reports the offender to the concerned authority. The system distinguishes the offender in the moving traffic through a camera, captures the number plate of the vehicle in the next moment, and sends the vehicle information to the traffic controller, both by displaying it on an LCD and by sending a mail. The requisites for a working prototype of the system are as follows.

Hardware configuration
(a) Raspberry Pi 3
(b) Raspberry Pi camera module

Software configuration
(a) Python 3.6
(b) OpenCV license plate recognition python master
(c) OpenCV face recognition

2 Literature Review

This section briefly describes existing work on helmet detection and number-plate extraction using Internet of Things and wireless communication technology [1]. Lahiru Dinalankara et al. projected a method to identify faces using Haar cascades together with Eigenfaces, Fisherfaces, and local binary pattern histograms; the suggested system performs eye detection using OpenCV. A Haar wavelet is based on a mathematical computation that picks out rapidly changing signals by creating box-shaped patterns. Eigenfaces classifies images to abstract features from a dataset of images, but with the major limitation of requiring consistent illumination and eye conditions across the entire dataset; another limitation relates to the need for a similar number of pixels and grayscale levels within the dataset [2]. Wuming Zhang et al. deduced a method to decrease head-pose disparity.


To model the core part of this paper, the extraction of the number plate, we came across Chetan et al. [3], who proposed a morphological-operation-based approach to extract the number plate. Characters were segmented and extracted using histogram and template-matching techniques, but the biggest limitation lay in the variance of the character length. The individual systems above have limitations in extracting plates and detecting faces together, and no single system has considered detection, extraction, and transmission as a whole. The main knowledge drawn is: understanding the OpenCV modules for number-plate extraction and the face-detection algorithm; training the system to recognize alphabets and numerals; converting the data into grayscale and producing the threshold image; configuring the mail module; and extracting data from the database. The inference obtained is that when innovative communication technology is assimilated into the automotive sector, IoT and machine learning create opportunities to assist public and private organizations in reducing the mishaps that occur from not wearing a helmet, and to increase the information they have about vehicle in/out timings. Accordingly, this paper showcases a prototype for automatic helmet detection that notifies by mail if the rider is not wearing a helmet and assists V2I communication. The efficiency of this technology can be further upgraded with a smart central system that performs automatic decision making.

3 Methodology

With moving time, science has given its fruits to humanity, which has turned them into comfort and style. Transportation is one field in which varied vehicle designs are produced with luxury, ease, and security as priorities; this paper is concerned with safeguarding two-wheeler riders. Various helmet designs on the current market protect the human skull and face configuration, yet people try to avoid using them despite realizing their significance. This paper therefore depicts a smart system that can take hold of a traffic lawbreaker.

3.1 System Overview

This section describes the overall functioning of the proposed project. As depicted by the flow diagram given below, the setup consists of two cameras: one detects the face, and the other captures the number plate of a vehicle whose rider is not wearing a helmet, by approximating the distance. The system trains itself from pre-existing number plates.


The system recognizes the number present on the number plate, whose image is taken by the second camera, and checks whether the vehicle is registered. The plate size is recognized first in order to detect the number plate; in general, number plates are rectangular. If the rider is not wearing a helmet, the system reports both the administrator and the rider by sending a mail, and if the vehicle is not registered it is reported directly to the administrator. This process keeps running as the setup watches the moving traffic and singles out vehicles whose riders are without helmets (Fig. 1).

3.2 System Architecture

This section describes the hardware and software architecture of the proposed system. Figure 2 shows the hardware structure of the setup, which revolves around a Raspberry Pi board. The attached peripherals are:

(1) Cameras: for detection as well as reading.
(2) Ultrasonic sensors: for approximating the distance between the rider and the camera.

Fig. 1 Installed prototype

Fig. 2 Block diagram of proposed architecture



Fig. 3 Software architecture/layered modules

(3) Wi-Fi module: used for communication, delivering messages to the Raspberry Pi so that the result can be displayed on the LCD and sent by mail.
(4) LCD: displays the image that is captured and read (Figs. 3 and 4).

706

Fig. 4 Functional flow diagram

Fig. 5 Object & helmet detection




  • The helmet detection module detects whether the rider and pillion rider are wearing helmets [4].
• Number plate detection layer: the digits on the number plate are detected and recognized by the system.
• Database layer: maintains the database of registered vehicles, holding the owner's information, address, and e-mail id. After the details are checked, mail is sent to the vehicle owner as well as the administrator.

3.3 System Details

The project rendered in this paper has three modules:

• Object detection module
• Number plate detection module
• Mail module

Object Detection Module: This module combines two separate parts. The first identifies two-wheeler vehicles on the road among pedestrians and four-wheelers. If a two-wheeler is detected, the second part, which detects the helmet in the image, is enabled and shows the probability of the rider wearing a helmet in a rectangular box [5]. The helmet-prediction value can change according to the camera angle and resolution (Fig. 5).

Number Plate Recognition Module: Once the object detection module has detected the vehicle and the system has checked whether the rider is wearing a helmet, the camera takes an image of the vehicle's number plate; a rough localization sketch follows the steps below.

Step 1: The image of the number plate is taken by the camera and the license plate is detected in an image subregion, as shown in Fig. 6.
Step 2: The subregion is identified using a CNN algorithm.
Step 3: After finding the subregion in the image, a sliding window is used for convolution filtering and sampling of the images [6].

Fig. 6 Extracted number plate
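As a rough illustration of the plate-localization idea in Steps 1-3 (the paper's own subregion detector is a CNN, which is not reproduced here), a minimal contour-based OpenCV sketch:

import cv2

def find_plate_region(frame):
    """Return the bounding box of the most plate-like (wide rectangular)
    contour in the frame; a crude stand-in for the CNN subregion step."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bilateralFilter(gray, 11, 17, 17), 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:10]:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:   # number plates are wide rectangles
            return (x, y, w, h)
    return None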



Fig. 7 Feature image

Step 4: Features of the number plate are extracted from that region, as shown in Fig. 7.
Step 5: The characters are read from the features [7].
Step 6: The characters are matched against the trained dataset in the system.
Step 7: The system identifies the font type, used as a training set, to relate it to the image fed into the system [8].
Step 8: The classifier classifies the characters written on the number plate.
Step 9: The recognized characters of the number plate are displayed on the frame, as shown in Fig. 8.

Fig. 8 License number identification

Mail Module: After all the characters and numerals on the number plate have been found, the detected number is sent with a message to the traffic controller and is also shown on the LCD. Example mail output:

Sender: Rajanikanth Uppara 11:31 AM
Information is existing in the database. Rider is not wearing the helmet on the bike bearing the number "UP 11 BB 7985" at 11:30 AM on Friday 23 2019

Pseudo-Code: This depicts the framework of the logic executed in the entire design with its various modules. Pseudocode for the framework is given below.
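The original pseudocode pages did not survive extraction, so the following is a hedged Python reconstruction of the framework's control flow from the module descriptions above; every helper name and threshold is a placeholder, not the authors' code:

TRIGGER_DISTANCE = 150   # cm; assumed trigger range for the ultrasonic sensor
HELMET_THRESHOLD = 0.5   # assumed confidence cut-off for the helmet detector
ADMIN_EMAIL = "admin@example.com"

def main_loop(camera, sensor, db):
    """Trigger on distance, detect a two-wheeler, check the helmet,
    then read the plate and notify, as described in Sect. 3.3."""
    while True:
        if sensor.distance_cm() > TRIGGER_DISTANCE:
            continue                               # no vehicle in range yet
        frame = camera.capture()
        if not is_two_wheeler(frame):              # object detection module
            continue
        if helmet_probability(frame) >= HELMET_THRESHOLD:
            continue                               # rider is wearing a helmet
        plate = recognize_plate(camera.capture())  # number plate module
        show_on_lcd(plate)
        owner = db.lookup(plate)                   # registered-vehicle database
        if owner is not None:
            send_mail(owner["email"], plate)       # mail module: notify the rider
        send_mail(ADMIN_EMAIL, plate)              # and the administrator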





4 Archetype Pre- and Post-Validation

To examine the accuracy of the proposed design, an archetype was built for autonomous organizations that abide by the traffic rules, using advanced technology at nominal cost. This design defines the system functionality of all modules with pre- and post-test cases. It was determined which categories of requirements are to be validated: technical, non-technical, and requirements related to product maintenance and support.



4.1 System Integration

The data capturing and processing unit has a quad-core 64-bit ARM Cortex processor running at 1.2 GHz, dual-band 5 GHz/2.4 GHz Wi-Fi, 300 Mbps Ethernet, 4 × USB 2.0, a Camera Serial Interface (CSI), and a Display Serial Interface (DSI). The device has 40 general-purpose input/output pins handled by the operating system. The firmware collects and processes data from the ultrasonic sensor by calculating the distance [9] of an approaching bike; the sensor readings identify the vehicle and trigger the camera-based object detection module. The object detection module separates two-wheelers from the moving traffic; if a two-wheeler is detected, the helmet feature is extracted from the frame, indicating whether the rider is wearing a helmet. If the helmet prediction does not lie within the given constraints, the number plate detection module is enabled; it extracts the license plate from the image and then the features of the plate numbers. The obtained plate number is sent by the mail module over Wi-Fi on the intranet. The prototype for the initial investigation was built from software modules at minimal cost: all modules are implemented in Python with TensorFlow, OpenCV, and related libraries, and run continuously in a loop. A MySQL database stores the bike numbers and the corresponding mail ids.
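A minimal sketch of the distance-trigger step, assuming an HC-SR04-style ultrasonic sensor wired to hypothetical GPIO pins of the Raspberry Pi:

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24   # assumed BCM pin numbers; adjust to the actual wiring

def distance_cm():
    """Fire a 10 us trigger pulse and time the echo (HC-SR04 style)."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:   # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:   # ...and to end
        end = time.time()
    return (end - start) * 34300 / 2   # speed of sound ~343 m/s, round trip

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
print(f"vehicle at {distance_cm():.1f} cm")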

4.2 Product Validation

The system was validated on the university campus. The test system included a Core i3 HP laptop with a camera and Python installed; the extended roadside unit (ERS) was a Raspberry Pi 3 with an LCD and a camera. All images were recorded and analysed with the algorithms that are part of the device firmware. We manually collected 3000 different images of number plates under unconstrained natural conditions, expanded to 6012 images through different camera angles, light intensities, and resolutions; all images were labelled manually. The whole dataset was randomly separated into a training set of 5448 pictures and a test set of 564 images, and all images were pushed to our git repository for reference. Font classifiers were also used to identify the font of the characters and make recognition more accurate [10]. Testing was performed with three different objectives for the unit, ensuring proper functionality under realistic conditions with various types of vehicles and number plates passing by [11]. The test cases were executed and the results evaluated: the helmet detection module ran several test cases and concluded whether a person was wearing a helmet or not. The main module


The main module takes a snapshot of the vehicle at a certain distance with the camera; its test cases cover distance calculations and number plates in both double-layer and single-layer formats. The last module is the mail module, which sends mail to various mail servers depending on the server configuration, user name and password. Validation was done with the pre- and post-test cases that were written, and priorities for bug fixes were assigned. The conditions validating the design are as follows:

(a) Climatic conditions: This is one of the most prominent parameters to consider when designing a system comprising hardware components that react to variations in temperature. The conditions considered while testing the system were: (1) temperature range from 3 to 40 °C: the system was tested across this range, since temperature varies from place to place in seasons such as summer and winter; even with extreme changes in temperature the system worked efficiently. (2) Rain (light, moderate, heavy, stormy): rain is one obstacle that limits the proposed design, since the image clarity required is hard to achieve under extreme climatic conditions such as heavy rainfall and storms.

(b) Light: This is another constraint to consider, since the design uses two cameras, one for object and helmet detection and another to capture the image of the number plate. In daylight the images can be captured with full clarity, but at dusk proper lighting is needed; light is also a big limitation in foggy weather.

(c) Creatively styled number plates: Implementing this design for the Indian traffic system is highly appropriate, as number plates put on vehicles are often illegally designed. A few scenarios were considered, and the system is efficient enough to give results. (1) Plates written in a regional language or calligraphic number plates: in the Indian traffic system, riders opt for different styles of number plates, which is a drawback for the system; a test case with a calligraphic number plate is not detected properly because of the different writing style (Fig. 9). (2) Misprinting on the number plate: another scenario often seen on running vehicles is faded characters on the number plate; for such a test case, the system is unable to find any number plate, as shown in the images [12] (Figs. 10 and 11).

(d) Multiple vehicles of different types: The plate detection mechanism works by the same criteria for heterogeneous vehicle models [13]. The test case taken here is an old-style number plate painted onto the vehicle, as shown in the images (Fig. 12).


Fig. 9 Error identifications in module

Fig. 10 Input image

Fig. 11 Output Generated


Fig. 12 Plate detection for heterogeneous vehicles

5 Conclusion and Future Work

This paper presents a system employing the OpenCV library for helmet detection and number plate extraction. Based on the conditional probability, the system identifies the offender and sends the mail using APIs. To this end, the designed archetype for automatic helmet detection and reporting of offender details to the respective authority is based on IoT principles. Most of the existing works focus on object detection, helmet detection and number plate extraction using CNNs, where only limited processing and extraction were performed. After suitable analysis and design, we have arrived at a solution for commercial and government organizations. The crucial factors here are climate, speed and the inclination of the camera. The license plate module also needs to be modified to detect different fonts and faded characters on the number plate. On the hardware side, we would like to add more RAM for faster processing and a higher-resolution camera for sharp and fast image capture.

References

1. Dinalankara L (2017) Face detection & face recognition using open computer vision classifiers
2. Kaur S, Kaur S (2014) An efficient approach for number plate extraction from vehicles image under image processing. Int J Comput Sci Inf Technol 5(3):2954–2959
3. Chetan et al (2017) Morphology based approach for number plate extraction. In: Proceedings of the international conference on data engineering and communication technology
4. Tai Y, Yang J, Zhang Y, Luo L, Qian J, Chen Y (2016) Face recognition with pose variations and misalignment via orthogonal procrustes regression. IEEE Trans Image Process 25(6):2673–2683
5. Zhao W, Chellappa R, Rosenfeld A, Phillips PJ (2003) Face recognition: a literature survey. ACM Comput Surv 35(4):399–458


6. Zhao X, Zhang W, Evangelopoulos G, Huang D, Shah SK, Wang Y, Kakadiaris IA, Chen L (2013) Benchmarking asymmetric 3D-2D face recognition systems. In: 10th IEEE international conference and workshops on automatic face and gesture recognition (FG)
7. Gilly D, Raimond K (2013) License plate recognition—a template matching method. Int J Eng Res Appl (IJERA) 3(2):1240–1245
8. Kukreja A, Bhandari S, Bhatkar S, Chavda J, Lad S (2017) Indian vehicle number plate detection using image processing. Int Res J Eng Technol (IRJET)
9. Kodwani L, Meher S (2013) Automatic license plate recognition in real time videos using visual surveillance techniques. ITSI Trans Electr Electron Eng (ITSITEEE)
10. Lekhana GC, Srikantaswamy R (2012) Real time license plate recognition system. Int J Adv Technol Eng Res (IJATER)
11. Vargas M, Milla JM, Toral SL, Barrero F (2010) An enhanced background estimation algorithm for vehicle detection in urban traffic scenes. IEEE Trans Veh Technol
12. Selmi Z, Halima MB, Alimi AM (2017) Deep learning system for automatic license plate detection and recognition. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR), vol 1. IEEE
13. Puarungroj W, Boonsirisumpun N (2018) Thai license plate recognition based on deep learning. In: 3rd international conference on computer science and computational intelligence

Chapter 56

Botnet Detection Using Machine Learning Algorithms

Chirag Joshi, Vishal Bharti, and Ranjeet Kumar Ranjan

1 Introduction

In the last decade the Internet has grown rapidly, and it is now an important part of day-to-day life. Smartphones, computers, laptops and other gadgets are all equipped with Internet access. The growth of the Internet has helped human beings, but it also carries many threats. Internet threats are increasing day by day and damaging the world economy. A threat is anything that has the potential to cause serious harm to a computer system. Common computer system threats are botnets, Distributed Denial of Service (DDoS), hacking, malware, pharming, phishing, ransomware, spam, spoofing, spyware, Trojan horses, viruses, worms and eavesdropping. Of all these threats, the botnet is the most dangerous and the most difficult to detect. A botnet can consist of malware, viruses, worms and Trojan horses, and can be used to spread any of the above-mentioned threats. A botnet is a collection of software bots that creates a huge number of different bots, which can be controlled centrally or each of which can act as a master [1]. The Computer Emergency Response Team (CERT) has identified botnets with more than 100,000 members, and almost 1 million bot-infected hosts have been reported [2]. The botnet is the latest evolution of malware; it integrates advanced malicious techniques such as viruses, trojans and worms. One of the key features of bot malware is that it communicates with an attacker through a specific Command and Control (C&C) communication channel [3–5].

C. Joshi (B) · V. Bharti · R. K. Ranjan
DIT University, Dehradun, India
e-mail: [email protected]
V. Bharti
e-mail: [email protected]
R. K. Ranjan
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
M. Dave et al. (eds.), Proceedings of the International Conference on Paradigms of Computing, Communication and Data Sciences, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-15-7533-4_56


When bot malware attacks a computer, it uses the Command and Control (C&C) communication channel to give a remote attacker access to the computer. The attacker is known as the botmaster or botherder, and the viruses, trojans and worms used are known as bots or zombies [6]. Some cyber security studies [7, 8] claim that over 16% of PCs have been infected by some kind of bot malware.

The rest of the paper is organized as follows. Section 2 briefly reviews related work on botnet detection. Section 3 describes the architecture and working of botnets. Section 4 details the dataset used in this paper. Section 5 presents the experimental results, and conclusions and future work are presented in Sect. 6.

2 Related Work

A large amount of useful and important work has been done on botnet detection using ML techniques; the majority of the literature addresses the detection of P2P botnets. Gu et al. presented a general botnet detection framework, referred to as BotMiner, that is independent of the botnet C&C protocol, structure and infection model [9]. This framework targets both centralized and P2P botnets. The method assumes that bots are coordinated malware which exhibit similar communication patterns, so clusters are formed using the A-plane (activity traffic) and the C-plane (C&C communication traffic) for botnets and similarly behaving malware. The only limitation of BotMiner is that it targets only groups within a monitored network. Giroire et al. [10] worked on detecting botnet C&C communication on an end host. Irregularities are captured by a detector which monitors outgoing traffic and identifies malicious destinations; evaluation of this approach yields a low false rate. Strayer et al. [11] detected botnets using network behavior, for which they built a detection system divided into four stages. In the first stage, all collected network flows are filtered to eliminate flows which do not contain C&C bot traffic. In the second stage, the remaining flows are clustered and analyzed using different ML techniques. In the next stage, flows are again grouped on the basis of shared characteristics. In the last stage, the flows which share a common controller are investigated by an analyst; this is the limitation of the approach, as the last stage is done offline. Liu et al. proposed a method to detect P2P botnets using data-mining techniques [12]. The analysis is based on the behavior of bot traffic, using bot traffic, normal P2P traffic, gaming traffic and general Internet traffic. A limitation of this approach is that it is not clear whether it works on other botnets. Dewas et al. [13] proposed a method for identifying chat traffic using the service port number, packet size distribution and packet content. Roughan et al. [14] use a traffic classification scheme to identify the class of service (CoS) for different levels of Quality of Service (QoS).


The authors investigate packet size, RMS packet size and average flow duration to establish the effectiveness of the method; on the basis of these characteristics, simple classification methods give very accurate results. Moore and Zuev [15] apply a similar approach, using a variant of the naive Bayesian classification scheme to classify flows into 10 distinct groups. Sen et al. [16] use a signature-based method to discriminate the traffic produced by several P2P applications.

3 Botnet Architecture

Botnet communication architecture is quite distinctive. All bots use communication channels for interaction: a bot first infects a system and then tries to connect to an IRC server. The person who initiates and controls all the bots is called the botmaster [17]. The botnet life cycle has been defined by different researchers, such as Silva et al. [18], Feily [19] and Z. Zhu [20]. These researchers divide the life cycle of a botnet into three phases: (i) the infection phase, (ii) the communication phase and (iii) the attack phase. The botmaster controls the bots in one of three ways: (i) centralized network, (ii) peer-to-peer (P2P) network or (iii) hybrid network.

3.1 Centralized Network

A centralized architecture has a central point from which all the other bots are controlled. The botmaster is placed at this point and manages all the other bot members. The advantage of this architecture is that it is very easy to implement; on the other hand, the entire botnet fails if the central point (the botmaster) fails or is removed. Figure 1a shows a centralized network.

Fig. 1 a Centralized network. b P2P network. c Hybrid network


3.2 Peer-to-Peer (P2P) Network

Peer-to-peer communication has several different points from which messages are sent to bot members; there is no central point controlling the other bot members. A P2P architecture is very hard to disrupt, as it continues to work if a bot member is removed. Designing this architecture is complex, however, and there is no guarantee of message delivery. Figure 1b shows a peer-to-peer network.

3.3 Hybrid Network

This architecture combines the functionality of both the centralized and the P2P network. Figure 1c shows the hybrid network. A large number of botnets work on this technology [21].

4 Dataset

For this paper the dataset had to be selected with great care so that it contains both labeled malicious and non-malicious traffic; our main aim was a dataset containing both kinds of traffic. The selected dataset is the MCFP (Malware Capture Facility Project) dataset [22]. From it we used the CTU-13 dataset, which contains traffic from different botnets; of these, we used the Neris botnet traffic for our experiments.

4.1 Feature Selection

Before processing the data with ML techniques, the dataset is first normalized and then features helpful for analysis are extracted. Wireshark was first used to open the PCAP files, and the data was then exported into xls format. After that, the dataset was normalized and features were selected. We selected the following features for the detection of botnets using ML techniques; in the data pre-processing step, we label-encoded the categorical features before using them for implementation. Table 1 shows the features used from this dataset. The dataset contains 11,526 tuples, of which 2693 are botnet traffic and 8833 are normal traffic. Two classes were used for prediction: one is Botnet and the other is Normal.

Table 1 Selected features

Feature               Description
Protocol              Protocol which is used during communication
Source address        IP address of source
Destination address   IP address of destination
Total packets         Total number of packets
Total bytes           Total number of bytes
Source bytes          Total number of bytes at source
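A minimal sketch of this pre-processing step follows, assuming the flows were exported to a file named neris_flows.xls with the columns of Table 1 plus a Label column; the file name, column names and the use of pandas/scikit-learn are our assumptions, as the paper does not name its libraries.

```python
# Illustrative pre-processing sketch (assumed libraries: pandas, scikit-learn).
# 'neris_flows.xls' and the column names are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

df = pd.read_excel("neris_flows.xls")  # flows exported from Wireshark

categorical = ["Protocol", "SourceAddress", "DestinationAddress"]
numeric = ["TotalPackets", "TotalBytes", "SourceBytes"]

# Label-encode categorical features
for col in categorical:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))

# Normalize numeric features to [0, 1]
df[numeric] = MinMaxScaler().fit_transform(df[numeric])

X = df[categorical + numeric]
y = (df["Label"] == "Botnet").astype(int)  # 1 = Botnet, 0 = Normal
```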

5 Experimental Results

After pre-processing the dataset and selecting the features, we applied different machine learning techniques to it: K-Nearest Neighbor (KNN), Logistic Regression, Support Vector Machine (SVM) and the Decision Tree classifier.
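As a sketch of this step, the four classifiers can be trained as below, under the assumption that scikit-learn is used (the paper does not name its ML library); X and y are the feature matrix and labels from the pre-processing sketch, and the hyperparameters and split ratio are illustrative.

```python
# Illustrative training sketch for the four classifiers (assumed: scikit-learn).
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)  # assumed split ratio

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),            # probabilities needed for ROC
    "Decision Tree": DecisionTreeClassifier(max_depth=10),  # pre-pruning
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```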

5.1 Performance Evaluation and Results

We used different measures for evaluating the performance of the classification algorithms; Table 2 details all the metrics. The methodology we used for classification has two variants. First, we apply all the algorithms (KNN, Logistic Regression, SVM and Decision Tree) to the dataset and obtain results for each algorithm. In our model we also improve the accuracy of each algorithm by applying different measures such as feature scaling, pre-pruning, etc. We first apply the algorithms without K-fold cross-validation; in this case we compute the different evaluation metrics and plot the ROC and CAP curves. Table 3 shows the evaluation metrics obtained after applying the algorithms. After applying the algorithms without K-fold cross-validation, we apply K-fold cross-validation (K = 10) to all the algorithms. K-fold cross-validation is a re-sampling procedure used for evaluating a machine learning model on new data. The procedure has a single parameter K, and the dataset is divided into K partitions. If we take K = 10, the complete dataset is divided into 10 partitions; of these, one partition is randomly selected as test data and the remaining 9 form the training data. Each partition gets a chance to be in the test and training data. After K iterations (K = 10), the output contains K accuracies, and their mean gives the accuracy of the algorithm. Table 4 shows the performance of the algorithms with (K = 10)-fold cross-validation.
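A compact sketch of the K = 10 procedure just described, again assuming scikit-learn; the models dict is the one from the previous sketch.

```python
# Illustrative 10-fold cross-validation (assumed: scikit-learn).
from sklearn.model_selection import cross_val_score

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # K = 10 partitions
    # The mean of the K accuracies is reported as the algorithm's accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```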


Table 2 Performance metrics

Metric               Description
True positive (TP)   The number of observations correctly detected as Botnet
True negative (TN)   The number of observations correctly detected as Normal
False positive (FP)  The number of normal observations detected as Botnet
False negative (FN)  The number of botnet observations detected as Normal
Precision            The percentage of observations classified as positive that are correct: Precision = TP/(TP + FP)
Recall               The percentage of botnet observations classified correctly as botnet, also called the detection rate: Recall = TP/(TP + FN)
F-measure            A single value representing both precision and recall: F-measure = (2 * Recall * Precision)/(Precision + Recall)
Accuracy             The percentage of all observations that are correctly predicted: Accuracy = (TP + TN)/(TP + TN + FP + FN)
ROC curve            The Receiver Operating Characteristic (ROC) curve is a probability curve, plotted as True Positive Rate (TPR, i.e., Recall) against False Positive Rate (FPR)
CAP curve            The Cumulative Accuracy Profile (CAP) curve shows the discriminative power of a model; it plots the cumulative number of positive outcomes (y-axis) against the corresponding cumulative number of observations (x-axis)
AUC                  Area Under the Curve; it measures the area under the ROC curve and ranges from 0 to 1. An AUC near 1 indicates a better model, whereas a value near 0 indicates the model should be corrected or is not giving accurate predictions
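To make the formulas in Table 2 concrete, here is a small illustrative computation of the metrics from a confusion matrix, using scikit-learn's helper (an assumption; y_test and the trained models come from the earlier sketches).

```python
# Illustrative metric computation matching the formulas of Table 2
# (assumed: scikit-learn; models/X_test/y_test from earlier sketches).
from sklearn.metrics import confusion_matrix

y_pred = models["KNN"].predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

precision = tp / (tp + fp)
recall = tp / (tp + fn)                               # detection rate
f_measure = 2 * recall * precision / (precision + recall)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(precision, recall, f_measure, accuracy)
```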

5.2 ROC Curve

We also plotted the ROC curve for each algorithm. Figure 2a–d shows the ROC curves of KNN, Logistic Regression, SVM and Decision Tree classification, respectively. From the ROC curves we can see that KNN and SVM perform best relative to the other two algorithms; of all four algorithms, KNN has the highest AUC (0.99).
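For reference, such a curve can be produced as sketched below; the use of scikit-learn and matplotlib is our assumption, and only the KNN model is shown.

```python
# Illustrative ROC/AUC plotting (assumed: scikit-learn and matplotlib).
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

scores = models["KNN"].predict_proba(X_test)[:, 1]   # P(Botnet)
fpr, tpr, _ = roc_curve(y_test, scores)
plt.plot(fpr, tpr, label=f"KNN (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], "k--")                      # chance line
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```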

5.3 CAP Curve

We also plotted the CAP curve for each algorithm. Figure 3a–d shows the CAP curves of KNN, Logistic Regression, SVM and Decision Tree classification, respectively.
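Since CAP plotting is less standardized than ROC, here is one hedged sketch of how such a curve can be built from the Table 2 definition: rank observations by predicted probability and accumulate the true positives (scores and plt come from the ROC sketch above).

```python
# Illustrative CAP curve construction (assumed: numpy, matplotlib).
import numpy as np

order = np.argsort(scores)[::-1]              # rank by predicted P(Botnet)
cum_positives = np.cumsum(np.asarray(y_test)[order])
x = np.arange(1, len(y_test) + 1) / len(y_test)
y_cap = cum_positives / cum_positives[-1]

plt.plot(x, y_cap, label="KNN model")
plt.plot([0, 1], [0, 1], "k--", label="Random model")
plt.xlabel("Fraction of observations")
plt.ylabel("Fraction of botnet flows captured")
plt.legend()
plt.show()
```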


Table 3 Results of related detection methods (data from [23])

Name of detection method   Different performance metrics evaluated
Livadas et al. [23]        FPR (10–20%), FNR (30–40%)
Strayer et al. [11]        FPR (2.17%)
Gu et al. [9]              TPR (99%), FPR (1%)
Husna et al. [24]          Precision (88%)
Noh et al. [25]            TPR (>95%), FPR (87.56%)
Liu et al. [12]            TPR (53–100%)
Liao et al. [27]           Accuracy (>92%)
Yu et al. [28]             TPR (100%), FPR (