New Trends in Computational Vision and Bio-inspired Computing: Selected Works Presented at the ICCVBIC 2018, Coimbatore, India. ISBN: 3030418618, 9783030418618

This volume gathers selected, peer-reviewed original contributions presented at the International Conference on Computational Vision and Bio-Inspired Computing (ICCVBIC 2018), Coimbatore, India.


English | Pages: 1752 [1663] | Year: 2020


Table of contents:
Foreword
Preface
Acknowledgments
Contents
3-Dimensional Multi-Linear Transformation Based Multimedia Cryptosystem
1 Introduction
2 Three Dimensional Multi Linear Transformation (3D-MLT)
2.1 Inverse 3D-MLT
3 Proposed Scheme
3.1 Transformation (Mapping) Phase
3.2 Substitution (Saturation) Phase
3.3 Decryption Phase
4 Experimental Results
5 Conclusion
References
A Computer Vision Based Approach for Object Recognition in Smart Buildings
1 Introduction
1.1 Overview
1.2 Feature Detectors
2 Related Work
2.1 Gradient Feature Descriptors
2.2 Binary String Descriptor
2.3 Color Feature Descriptors
3 Proposed Framework
4 Dataset Details
5 Experimental Results
6 Conclusion
References
A Cascade Color Image Retrieval Framework
1 Introduction
2 Dataset
3 A Cascade Color Image Retrieval Framework
4 Performance Evaluation
5 Conclusion
References
Enhanced Geographical Information System Architecture for Geospatial Data
1 Introduction
2 State of Art
3 Problem Statement
4 Proposed Architecture Model
5 Conclusion
6 Future Work
References
IoT Based Power Management and Condition Monitoring in Microgrid
1 Introduction
2 Proposed Methodology
3 Block Diagram of Proposed System
4 Procedure for IoT Based Condition Monitoring System in Micro Grid
5 Simulation of Proposed System
6 Development of Fuzzy Controller
7 Simulation Results
8 Hardware Implementation
9 Conclusion and Future Work
Bibliography
A Comparative Performance Study of Cloud Resource Scheduling Techniques
1 Introduction
2 Proposed Work
2.1 System Overview
2.2 Methodology
2.3 Simulation Scenario
2.4 Simulation Setup
3 Result Analysis
3.1 CPU Utilization
3.2 Average Processing Time
3.3 Average Waiting Time
3.4 Average Processing Cost
4 Conclusion
4.1 Conclusion
4.2 Future Work
Bibliography
Image Context Based Similarity Retrieval System
1 Introduction
2 Related Works
2.1 WordNet Distance (WD)
2.2 Flickr Distance (FD)
2.3 Context-Based Image Semantic Similarity (CISS)
2.4 Fuzzy String Matching
2.5 Proposed Distance Method
3 Proposed Context Retrieval Method
4 Results and Discussion
5 Conclusion
References
Emotions Recognition from Spoken Marathi Speech Using LPC and PCA Technique
1 Introduction
2 Design and Development of Artificial Emotional Speech Database for Marathi Language
3 Feature Extraction
4 Linear Predictive Coding (LPC)
5 Speech Recognition Based on PCA
6 Experimental Analysis and Results
7 Conclusion
References
Implementation of Point of Care System Using Bio-medical Signal Steganography
1 Introduction
2 Related Work
3 Materials and Methods
3.1 Fast Walsh Hadamard Transform (FWHT)
3.2 AES Encryption and Decryption
3.3 Haar DWT
3.4 Hardware Involved
4 Proposed Methodology
4.1 Embedding Process
4.1.1 AES Encryption
4.1.2 Applying FWHT on the Bio-medical Signal
4.1.3 Embedding Using Haar DWT
4.2 Extraction Process
4.2.1 Extraction Using Haar DWT
4.2.2 Applying IFWHT
4.2.3 AES Decryption
5 Performance Analysis
5.1 ECG Signal
5.2 EEG Signal
5.3 PPG Signal
6 Conclusion
References
Privacy Assurance with Content Based Access Protocol to Secure Cloud Storage
1 Introduction
2 Literature Survey
3 Existing System
4 System Architecture
4.1 Owner Side Functionality
4.2 User Side Functionality
4.3 Download Module
4.3.1 Decryption
4.4 Storage Allocation Module
5 Experimental Results
6 Conclusion
References
Leaf Recognition Using Artificial Neural Network
1 Introduction
2 Literature Survey
3 Methodology
3.1 Data Collection
3.2 Pre-processing of Captured Image
3.3 Feature Extraction of Processed Image
3.4 Training Neural Pattern Recognition
3.5 Testing Neural Pattern Recognition
3.6 Displaying and Comparing Results
4 Results and Discussion
5 Conclusions
References
Data Security in Cloud Using RSA and GNFS Algorithms: An Integrated Approach
1 Introduction
2 Proposed Design
2.1 RSA Algorithm
2.2 NFS Algorithm
2.3 GNFS Algorithm
3 Results and Discussion
4 Conclusions
References
Machine Learning Supported Statistical Analysis of IoT Enabled Physical Location Monitoring Data
1 Introduction
2 Related Work
3 Methodology
4 Lessons Learned in Data Collections
5 IoT and Cloud Interfacing
6 Conclusion
References
A Genetic Algorithm Based System with Different Crossover Operators for Solving the Course Allocation Problem of Universities
1 Introduction
2 Genetic Algorithm
3 The Course Allocation Problem
4 Related Works
4.1 Part I: GA Based Systems for Classical and Real World Problems
4.2 Part II: Course Allocation Systems with Non-GA and GA Techniques
5 Design of Experiments
5.1 Deriving Information from Course Allocation Problem
5.2 Formulation of Optimization Problem
5.3 Designing GA Based System
6 Algorithm, Results and Discussion
6.1 The Proposed Algorithm
6.2 Results and Discussion
7 Conclusions
References
Detecting Anomalies in Credit Card Transaction Using Efficient Techniques
1 Introduction
2 Related Work
3 Proposed System
3.1 Data Description
3.2 K-means Clustering
3.3 SMOTE (Synthetic Minority Over-Sampling Technique)
3.4 Support Vector Machine
4 Results and Discussion
4.1 K-means Clustering
4.2 SMOTE (Synthetic Minority Over-Sampling Technique)
4.3 Support Vector Machine
5 Conclusion
References
Mohammed Ehsan Ur Rahman and Md. Sharfuddin Waseem
1 Introduction
2 State of Art
3 Problem Statement
4 Proposed Model
5 Conclusion
References
A Novel Framework for Detection of Morphed Images Using Deep Learning Techniques
1 Introduction
2 Related Work and Study
2.1 The Magic Passport: A Detailed Description of Facial Morphing Attack
3 Proposed System Framework and Methodology
3.1 Use of Algorithms Which Work on Low-Level Features of a Face Image
3.2 Algorithms Involving Deep Learning-Based Methods
3.3 Features Analysis
3.4 Mathematical Models and Intuitions
3.5 Deep Learning as a Modern Day Efficient Tool in the Context of Visual Recognition
4 Datasets and Facial Morphing Generation
4.1 Facial Morphing Techniques and Pipeline
4.2 Datasets Analysed and Evaluation Metrics Best Suited to the Datasets Used in This Research Work
5 Future Scope
References
A Novel Non-invasive Framework for Predicting Bilirubin Levels
1 Introduction
2 Related Work
3 Materials and Method
3.1 Camera Module
3.2 Image Processing
3.3 Algorithm
4 Results and Discussion
5 Conclusion
References
A Comprehensive Study on the Load Assessment Techniques in Cloud Data Center
1 Introduction
2 Scheduling
2.1 Features of a Good Scheduling Algorithm
3 Load Balancing
3.1 Process Origination
3.2 Current System State
3.3 Spatial Distribution of Nodes [4]
4 Literature Survey
5 Comparison of Various Algorithms
6 Simulation Tools
7 Conclusion
References
Multimodal Biometric System Using Ear and Palm Vein Recognition Based on GwPeSOA: Multi-SVNN for Security Applications
1 Introduction
2 Motivation
2.1 Related Works
3 Proposed Method
3.1 Pre-processing
3.2 Feature Extraction Using the Proposed BiComp Mask on the Input Ear Image
3.3 Feature Extraction Using Local Binary Pattern (LBP) on the Input Palm Vein Image
3.4 Multi-SVNN Classifier for Score Level Computation
3.4.1 Architecture of SVNN Classifier
3.4.2 Training the SVNN Using Genetic Algorithm (GA)
3.5 Optimal Score Level Fusion Using the Scores Obtained Using the Multi-SVNN Classifier
3.5.1 Testing Phase to Identify the Person
3.6 Optimal Fusion Scores Using the Proposed GwPeSOA
3.6.1 Solution Encoding
3.6.2 Objective Function
3.6.3 Algorithmic Steps of the Proposed GwPeSOA
4 Results and Discussion
4.1 Experimental Setup
4.2 Dataset Description
4.3 Competing Methods
4.4 Performance Metrics
4.5 Comparative Analysis and Discussion
5 Conclusion
References
Madhu Bhan, P. N. Anil, and D. T. Chaitra
1 Introduction
2 Functionality
2.1 Cultivation Methods
2.2 Crop Selection
2.3 Soil Analysis
2.4 Fertilizers
2.5 Selling Crops
2.6 Pre-production Management
2.7 Buying Agricultural Products
2.8 Bidding Activity/Market Price/Weather Forecast
3 Methodology
3.1 Tools and Technologies
3.2 Databases
3.3 Architecture
4 Conclusion
References
Ultra Wide Band Monopole Antenna Design by Using Split Ring Resonator
1 Introduction
2 Proposed Antenna Configuration
3 Result and Discussion
3.1 Return Loss Characteristics
3.2 Gain of Antenna
3.3 Radiation Efficiency
3.4 E-Plane and H-Plane Radiation Patterns
4 Conclusion
References
Green Supply Chain Management of Chemical Industrial Development for Warehouse and its Impact on the Environment Using Artificial Bee Colony Algorithm: A Review Article
1 Introduction
2 Literature Review and Survey of Green Supply Chain Management
3 Related Works
3.1 Green Chemical Supply Chain
3.2 Green Inventory Policy
4 Model Design
4.1 Green Chemical Supply Chain
4.2 Green Chemical Model
4.3 Green Inventory Policy
5 Industrial Development and Its Impact on Environment
6 Simulation
6.1 Simulation Result
7 Conclusion
References
A Novel Dyno-Quick Reduct Algorithm for Heart Disease Prediction Using Supervised Learning Algorithm
1 Introduction
2 Literature Review
3 Proposed Work
4 Feature Selection Method
4.1 Filter Method
4.2 Embedded Method
4.3 Wrapper Method
5 Rough Set Theory
6 Review on Dynamic Quick Reduct
7 Novel Dyno-Quick Reduct Algorithm
8 Results and Discussion
9 Conclusion
References
Impact of Meltdown and Spectre Threats in Parallel Processing
1 Introduction
2 State of Art
3 Problem Statement
4 Security Analysis Model
5 Conclusion
References
Minimum Dominating Set Using Sticker-Based Model
1 Introduction
2 Sticker-Based Model
3 Minimum Dominating Set Computation
3.1 Basic Definitions
3.2 Design of Solution Space
3.3 Algorithm
4 Conclusion
References
Assistive Technology Evolving as Intelligent System
1 Introduction
2 Artificial Intelligence
3 Fuzzy Logic
4 Genetic Algorithm
5 Expert Systems
6 Artificial Neural Network (ANN)
7 Machine Learning
8 Important Developments in Intelligent Assistive Systems
9 Discussion and Conclusion
References
A Bio Potential Sensor Circuit of AFE Design with CT ΔM-Modulator
1 Introduction
2 A Novel Structural Design 4-Channel AFE for Bio Signal ΔM
3 Clamor Involvement in AFE Circuit
4 A Tuneable Bandwidth Low Noise Amplifier in AFE
5 Modulator Circuit
6 Results
7 Conclusion
References
Image Encryption Based on Transformation and Chaotic Substitution
1 Introduction
2 Proposed Scheme
2.1 Encryption Algorithm
2.2 Decryption Algorithm
3 Experimental Results
4 Conclusion
References
An Efficient Geographical Opportunistic Routing Algorithm Using Diffusion and Sparse Approximation Models for Cognitive Radio Ad Hoc Networks
1 Introduction
2 SMOR Routing and Modified SMOR Routing Models
2.1 SMOR Routing Model
2.2 Modified SMOR Routing
3 Opportunistic Routing Using Modified SMOR Models
3.1 Modified SMOR-1 for Regular CRAHNs
3.1.1 Diffusion Approximation Based Markov Chain Modeling
3.2 Modified SMOR-2 for Large-scale CRAHNs
3.2.1 Sparse Approximation Based Stochastic Geometry Analysis
4 Performance Evaluation
5 Conclusion
References
Traffic Violation Tracker and Controller
1 Introduction
2 Literature Survey
3 Proposed Methodology
3.1 Detecting and Monitoring the Road Activities
3.2 Sending and Processing of the Information Gathered
3.3 Controller Interface
4 Conclusion
5 Future Work
References
PTCWA: Performance Testing of Cloud Based Web Applications
1 Introduction
1.1 Pitfalls in Performance Testing of Web Applications
1.2 Cloud Testing and Its Benefits
1.3 Challenges in Cloud Testing
2 Related Work
3 A Performance Testing Tool for Cloud Infrastructure
3.1 Private Cloud Setup
3.2 Fetching Performance Parameters
3.3 Analysis of the Fetched Values
3.4 Report Generation
4 System Implementation
4.1 Recording User Scenarios
4.2 Parameterization of Test Scripts
4.3 Grouping User Scenarios
4.4 Load Generation
4.5 Performance Analysis Process
4.6 Report Generation
4.6.1 Local Private Cloud Server
4.6.2 Public Cloud Server
4.7 Performance Analysis from a Mobile Platform
5 Results and Discussion
5.1 Performance Testing in Public Cloud
5.2 Performance Testing in Private Cloud
6 Conclusion and Future Enhancements
References
Analysis of Regularized Echo State Networks on the Impact of Air Pollutants on Human Health
1 Introduction
2 Echo State Networks (ESN)
3 Case Study
4 Conclusion
References
Detection of Cancer by Biosensor Through Optical Lithography
1 Introduction
1.1 Figure 1
1.2 Formulae [i]
1.3 Table 1
1.4 Governing Equations [References—h, g]
2 Simulation Results
3 Conclusion
References
Paradigms in Computer Vision: Biology Based Carbon Domain Postulates Nano Electronic Devices for Generation Next
1 Introduction
2 Methodology
2.1 Three Dimensional CARd Analysis
2.2 Bond of All Analysis
3 Results
3.1 CARd Analysis
4 Discussion
4.1 Semiconductor and Carbon Value
4.2 Nanotechnology for World of Signaling
5 Conclusion
References
A Secure Authenticated Bio-cryptosystem Using Face Attribute Based on Fuzzy Extractor
1 Introduction
1.1 Biometrics
1.2 Cryptography
1.3 Biometric Cryptosystem
2 Related Works
3 Existing System
4 Proposed System
5 Conclusion
References
Implementation of Scan Logic and Pattern Generation for RTL Design
1 Introduction
2 Existing Work
3 Proposed Method
4 DFT Flow for Scan Insertion and Compression
5 Results and Discussion
5.1 Design 1 Has 32 S1 Violations and 32 D5 Violations
5.2 Design 2 Has 4 S1 Violations and 4 D5 Violations
5.3 Scan Compression for Design 2
5.4 Result for Obtaining Coverage Report for Design 1 and 2
5.5 Results for Scan Insertion for Design 3
6 Conclusion
References
Optimization Load Balancing over Imbalance Datacenter Topology
1 Introduction
2 Related Work
3 System and Algorithm Design
3.1 The Cost of the Link Module
3.2 Shortest Path Module
3.3 The Sub Topology Discovery Module
4 Open Flow Network Implementation
4.1 System Design
4.2 OpenFlow Flow Dispatch
4.3 Emulation Framework in Load Balancing
4.4 Design of Datacenter
4.5 Comparison of Algorithms
5 Result of Analysis
6 Conclusion and Future Work
References
Text Attentional Character Detection Using Morphological Operations: A Survey
1 Introduction
2 Literature Survey
2.1 Proposed System
3 Modules
3.1 Image Pre-processing
3.2 Text Area Detection
3.3 Non-text Area Removal
3.4 Text Recognition
4 Conclusions
References
IoT Based Environment Monitoring System
1 Introduction
2 Proposed System
2.1 System Overview
2.2 Flow Chart
2.3 System Architecture
2.4 System Requirements
3 Implementation
4 Result and Conclusion
4.1 Advantages
References
Design and Development of Algorithms for Detection of Glaucoma Using Water Shed Algorithm
1 Introduction
1.1 Fundus Image Database
1.2 Convert Image to Grayscale and Resize
1.3 Sending the Image as a Pixel Stream
1.4 Preprocessing the Image
1.5 Grayscale Erosion
1.6 Grayscale Dilation
2 Watershed Algorithm
2.1 Details of the Algorithm
2.1.1 Step 1
2.1.2 Step 2
2.1.3 Step 3
3 Simulations Results for Watershed Algorithm
3.1 Normal Eyes
3.1.1 Case 1: For Normal eye
3.1.2 Case 2: For Moderate Glaucoma
3.1.3 Case 3: Glaucoma Eye
3.1.4 Input and Output Images
4 Schematic Design
4.1 ModelSim Signals
5 Conclusions
References
A Novel Development of Glaucoma Detection Technique Using the Water Shed Algorithm
1 Introductory Part
1.1 Fundus Image Database
1.2 Convert Image to Grayscale and Resize
1.3 Sending the Image as a Pixel Stream
1.4 Preprocessing the Image
1.5 Grayscale Erosion
1.6 Grayscale Dilation
2 Watershed Algorithm
2.1 Details of the Algorithm
2.1.1 Step 1
2.1.2 Step 2
2.1.3 Step 3
3 Simulations Results for Watershed Algorithm
3.1 Normal Eyes
3.1.1 Case 1: For Normal Eye
3.1.2 Case 2: For Moderate Glaucoma
3.1.3 Case 3: Glaucoma Eye
3.1.4 Input and Output Images
4 Schematic Design
4.1 ModelSim Signals
5 Conclusions
References
Solutions of Viral Dynamics in Hepatitis B Virus Infection Using HPM
1 Introduction
2 Mathematical Modelling
2.1 Basic Concepts of the Homotopy Perturbation Method
2.2 Analytical Solution of HBV
3 Results and Discussion
4 Conclusions
References
A Mathematical Modeling of Dengue Fever for the Dynamics System Using HAM
1 Introduction
2 Mathematical Modeling
2.1 Basic Concept of HAM
2.2 Analytic Solution of the Dengue Fever Model
3 Result and Discussion
4 Conclusion
References
Vision-Based Robot for Boiler Tube Inspection
1 Introduction
2 Past Work
3 Design and Development
4 Preliminary Inspection Results and Discussion
5 Conclusions
References
Qualitative Study on Data Mining Algorithms for Classification of Mammogram Images
1 Introduction
2 Method
2.1 Data Mining Algorithms
2.2 Classification Algorithm
2.2.1 K-Nearest Neighbor (KNN)
2.2.2 The Naive Bayesian (NB)
2.2.3 Linear Discriminant Analysis (LDA)
2.2.4 Boosting
2.2.5 Support Vector Machine (SVM)
2.2.6 Bagging
3 Efficiency of SVM with Bagging
4 Findings
5 Conclusion
References
Designing a Framework for Data Migration of Odoo ERP PostgreSQL Database into NoSQL Database
1 Introduction
2 Related Work
3 NoSQL Databases
4 Proposed Framework
5 Results
5.1 Experimental Setup
5.2 Result and Description
6 Conclusion
References
Juxtaposition on Classifiers in Modeling Hepatitis Diagnosis Data
1 Introduction
2 Related Works
3 Methodology
3.1 Dataset Description
3.2 Process Flow
3.3 Classification Algorithms
3.4 Performance Measures
4 Results and Discussion
5 Conclusion
References
Voltage Stabilization by Using Buck Converters in the Integration of Renewable Energy into the Grid
1 Introduction
2 System Configuration
2.1 Solar Panel
2.2 Buck Inverter (Fig. 3)
2.3 Single Phase Inverter
2.4 PI Controller
3 System Design and Results
4 Advantage
5 Application
6 Conclusion
References
OCR System For Recognition of Used Printed Components For Recycling
1 Introduction
2 Related Work
3 Methodology
3.1 Proposed Algorithm
4 Result and Conclusion
5 Limitations of the System
References
Modern WordNet: An Affective Extension of WordNet
1 Introduction
2 Related Work
3 Gaps and Scope
4 Ontology Development
5 Proposed Modern Wordnet
5.1 Working as Thesaurus
5.2 Working as Dictionary
5.3 Working as Sentence Classifier
6 Conclusions and Future Scope
References
Analysis of Computational Intelligence Techniques for Path Planning
1 Introduction
2 Related Work
3 Research Analysis and Discussion
4 Conclusion and Future Scope
References
Techniques for Analysis of the Effectiveness of Yoga Through EEG Signals: A Review
1 Introduction
2 Related Work
3 Comparison
4 Conclusion
References
Multiobjective Integrated Stochastic and Deterministic Search Method for Economic Emission Dispatch Problem
1 Introduction
2 Multiobjective Constrained Economic Emission Dispatch Problem
2.1 Economic Dispatch Objective
2.2 Emission Objective
2.3 Constraints Handling
2.4 Multiobjective Constrained Optimization Problem
3 Integrated PSO with Simplex Method
3.1 Conventional Stochastic Particle Swarm Optimization (PSO)
3.2 Conventional Deterministic Simplex Method (DSM)
3.3 Proposed Integrated PSO with DSM
4 Test Systems: Results and Discussion
4.1 Validation of Proposed Algorithm Using Benchmark Functions
4.2 Validation on Practical MCEED Problem
5 Conclusion
References
Enhanced Webpage Prediction Using Rank Based Feedback Process
1 Introduction
1.1 Effective Use of Caching Techniques
2 Objectives
3 Related Work
4 Enhanced Monte Carlo Prediction
5 Implementation
5.1 Modified Graph Construction
5.2 Ranking the Pages Through a Feedback Process
6 Results and Discussion
6.1 Accuracy
7 Conclusion
References
A Study on Distance Based Representation of Molecules for Statistical Learning
1 Introduction
2 Methodology
2.1 Regressors
2.2 Data Set
3 Results
3.1 Regression Results for Sin Clusters
3.2 Regression Results for Water Molecules
3.3 Regression Results for Methane and Ethane Molecules
4 Conclusion
References
Comparative Analysis of Evolutionary Approaches and Computational Methods for Optimization in Data Clustering
1 Introduction
2 Related Research
2.1 Research in Data Clustering Using Genetic Algorithm
3 Evaluation of Related Research
4 Conclusion
References
Bringing Digital Transformation from a Traditional RDBMS Centric Solution to a Big Data Platform with Azure Data Lake Store
1 Introduction
2 Need of Data Lake
3 An Overview of Azure Data Lake
3.1 Data Lake Structure
3.1.1 Raw Zone
3.1.2 Stage Zone
3.1.3 Curated Zone
3.2 Data Lake Operations
3.3 Data Lake Discovery
4 Conclusion
References
Smart Assist for Alzheimer's Patients and Elderly People
1 Introduction
2 Prototype Design
3 Result and Analysis
3.1 Triggered Event
3.2 Received Message
3.3 The Obtained Location
4 Conclusions
References
An Unconstrained Rotation Invariant Approach for Document Skew Estimation and Correction
1 Introduction
2 Methodology
2.1 Image Acquisition
2.2 Initial Deskew
2.3 Detection of Vertical Text
2.4 Deskew Based on Tesseract Feedback
3 Experimental Results and Discussion
4 Conclusion and Scope for Future Work
References
Smart Assistive Shoes for Blind
1 Introduction
2 Prototype Design
2.1 Hardware Description
2.2 System Design
3 Result and Discussion
4 Conclusion
References
Comparative Study on Various Techniques Involved in Designing a Computer Aided Diagnosis (CAD) System for Mammogram Classification
1 Introduction
2 Materials and Methods
2.1 Pre-Processing Techniques
2.1.1 Mean Filter
2.1.2 Median Filter
2.1.3 Adaptive Median Filter
2.1.4 Wiener Filter
2.1.5 Gaussian Filter
2.1.6 Partial Differential Equations
2.1.7 Adaptive Fuzzy Logic Based bi-Histogram Equalization (AFBHE)
2.1.8 Unsharp Masking (UM) Based Enhancement Techniques
2.1.9 Contrast Stretching
3 Segmentation
3.1 FCM Segmentation
3.2 Segmentation with Prior Knowledge Learning
3.3 Chan-Vese Level Set Segmentation
3.4 Active Contour-Based Method
3.5 Segmentation Based on Pixel-Wise Clustering
3.6 Segmentation Based on Morphological Operators
4 Feature Learning
4.1 Feature Extraction Using Phylogenetic Tree
4.2 Sequential Forward Feature Selection Approach (SFFS)
4.3 Dual Tree M-Band Wavelet Transform (DTMBWT)
5 Classification
5.1 Support Vector Machine (SVM)
5.2 Random Forests
5.3 K-Nearest Neighbour
6 Results
References
Traffic Flow Prediction Using Regression and Deep Learning Approach
1 Introduction
2 Related Work
3 Methodology
4 Simulation Results
4.1 Data Description
4.1.1 Experiment Result
4.2 Performance Using Different Algorithms
5 Conclusion
References
A Comparative Study on Assessment of Carotid Artery Using Various Techniques
1 Introduction
2 Image Acquisition
2.1 Carotid Endarterectomy
2.2 Doppler Ultrasound
2.3 B Mode Ultrasound
2.4 Computed Tomography
2.5 Reflection-Mode all-Optical Laser-Ultrasound (LUS)
2.6 3D Ultrasound Imaging
2.7 MRI
3 Pre-Processing
3.1 Pre-Processing Techniques
3.1.1 Mean Filter
3.1.2 Median Filter
3.1.3 Adaptive Median Filter
3.1.4 Wiener Filter
3.1.5 Gaussian Filter
3.1.6 Salt and Pepper Noise
3.1.7 Speckle Noise
4 Segmentation
4.1 Active Contour
4.2 Deep Learning
4.3 Snake Algorithm
4.4 Level Set Segmentation Algorithm
5 Performance Analysis
5.1 Pulse Wave Velocity Assessment (PWV)
5.2 Subject-Specific Mathematical Modeling
5.3 Nonlinear State-Space Approach: (State of the Art Approach)
5.4 Intima-Media Thickness
5.5 Logistic and Linear Regression Models
6 Results
References
Evaluation of Fingerprint Minutiae on Ridge Structure Using Gabor and Closed Hull Filters
1 Introduction
2 Pre-Processing
3 Fingerprint Image Enhancement
4 Minutiae Extraction
5 Post-Processing
5.1 Construction of Polygon Using Border Line Minutiae Points
5.2 Algorithm to Form the Closed Hull
5.3 Closed Hull-Based Filtering
5.4 Graham Scan Algorithm
5.5 Evaluation of Goodness of Index
6 Results and Discussion
7 Conclusions
References
A Perspective View on Sybil Attack Defense Mechanisms in Online Social Networks
1 Introduction
2 Graph Based Methods
3 Machine-Learning Based Methods
4 Conclusion
References
Minor Finger Knuckle Print Image Enhancement Using CLAHE Technique
1 Introduction
2 Overview of Histogram Equalization Methods
2.1 Histogram Equalization
2.2 Dynamic Histogram Equalization
2.3 Contrast Limited Adaptive Histogram Equalization
2.4 Adaptive Histogram Equalization
2.5 Brightness Preserving Dynamic Fuzzy Histogram Equalization
3 Performance Metrics
3.1 Mean Square Error
3.2 Peak Signal to Noise Ratio (PSNR)
3.3 The Root Mean Square Error (RMSE)
4 Result and Discussion
5 Conclusion
References
Learning Path Construction Based on Ant Colony Optimization and Genetic Algorithm
1 Introduction
2 Literature Survey
3 Methodology for Proposed Work
3.1 Ant Colony Optimization (ACO) Algorithms
3.2 Genetic Algorithm
4 Proposed Architecture
5 Experimental Result
6 Conclusions and Future Work
References
Pneumonia Detection and Classification Using Chest X-Ray Images with Convolutional Neural Network
1 Introduction
2 Related Works
3 Software Tools Used
4 System Architecture
4.1 Residual Neural Network
5 Methodology
5.1 Dataset Description
5.2 Image Preprocessing
5.3 Classification Using CNN
5.3.1 K-Fold Cross Validation
5.3.2 Training
5.3.3 Testing
6 Results
7 Future Scope
8 Conclusion
Compliance with Ethical Standards
References
An Optimized Approach of Outlier Detection Algorithm for Outlier Attributes on Data Streams
1 Introduction
2 Data Streams Challenges and Characteristics
3 Existing Distance-Based Outlier Detection Method
3.1 Comparison of Outlier Detection Algorithms [1]
3.1.1 Explanation
3.1.2 Working of the Proposed Algorithm
4 Results and Discussions
4.1 Comparison of Existing Outlier Detection Methods
4.2 Segregation of Outlier and Inliers for User Defined Bound Value: (n == 4)
4.3 Segregation of Outlier and Inliers for User Defined Bound Values: (Bound Values 6 & 7)
4.3.1 Comparison of Outliers Found in MCOD and Proposed Work (Figs. 10 and 11)
4.3.2 Time Analysis for MCOD and Proposed Work (Fig. 12)
5 Conclusion
References
Indo-Pak Sign Language Translator Using Kinect
1 Introduction
1.1 A Brief: Indo-Pakistani Sign Language
2 Existing Systems [1]
2.1 Disadvantage
2.2 Disadvantage
3 Proposed System
3.1 Advantages
4 Modules
4.1 Tracing and Detection Module
4.2 Normalization of User's Height
4.3 Training Phase
4.4 Text to Speech Conversion Module
5 System Architecture
6 Mathematical Modelling
6.1 Normalisation of User's Height
7 Technologies
7.1 Kinect Xbox 360
8 Conclusion and Future Work
References
Semantic Interoperability for a Defining Query
1 Introduction
2 Query Ontology Matching Schema (QOMS)
3 Assessment Measures
4 Conclusion
References
Gestational Diabetes Prediction Using Logistic Regression in R
1 Introduction
1.1 Machine Learning in GDM Research
2 Literature Survey
3 GDM Prediction Using Logistic Regression
4 Prediction Algorithm Using Logistic Regression in R
5 Results and Discussions
6 Conclusion
References
IoT Based Gas Pressure Detection for LPG with Real Time NoSQL Database
1 Introduction
1.1 Mongo DB [11]
1.2 MQTT [1]
1.3 Arduino Uno-Wi-Fi [2]
2 Related Works
3 System Methodology
4 User Interface
5 Conclusion and Results
References
Hybrid Steerable Pyramid with DWT for Multiple Medical Image Watermarking and Extraction Using ICA
1 Introduction
2 Steerable Pyramid Transform
3 Modified Arnold Transform
4 Watermark Embedding
5 Independent Component Analysis
5.1 Pearson ICA
6 Results and Discussion
7 Conclusion
References
Dimensional & Spatial Analysis of Ultrasound Imaging Through Image Processing: A Review
1 Introduction
2 Related Work
3 Comparison
4 Conclusion
References
A Review on Methods to Handle Uncertainty
1 Introduction
1.1 Uncertainty Categorization
1.2 Sources of Uncertainty
1.3 Uncertainty Analysis
2 Probability Analysis Technique
3 Bayesian Analysis
3.1 Bayesian Network
3.2 Multi Entity Bayesian Network (MEBN)
4 Fuzzy Analysis
5 Soft Computing Techniques
6 Conclusion
References
Identity-Based Hashing and Light Weight Signature Scheme for IoT
1 Introduction
1.1 IoT System Architecture
2 Preliminaries and Related Works
3 Proposed Intrusion Detection and Authentication Scheme
3.1 Hash Algorithm
4 Security of the Proposed Hashing Scheme
5 Conclusion
References
Adaptive Particle Swarm Optimization Based Wire-length Minimization for Placement in FPGA
1 Introduction
2 FPGA Placement Problem
3 PSO Algorithm-Overview
4 Related Works
5 Proposed PSO Algorithm for FPGA Placement Problem
6 Results and Discussion
6.1 Experimental Setup
6.2 Convergence Behavior
6.3 Performance Evaluation for Placement in FPGA
7 Conclusion
References
Clustering of Various Diseases by Collagen Gene Using the Positional Factor
1 Introduction
2 Medical Importance
3 Proposed Methodology
4 Algorithm
5 Results and Discussions
6 Conclusion
References
Prediction of Water Demand for Domestic Purpose Using Multiple Linear Regression
1 Introduction
2 Methodology
2.1 Data Acquisition
2.2 Prediction of the Water Demand
3 Results and Discussion
4 Conclusion and Future Scope
References
Implementation of Regression Analysis Using Regression Algorithms for Decision Making in Business Domains
1 Introduction
2 Main Applications of Regression Analysis in Business
2.1 Predictive Analytics
2.2 Operation Efficiency
2.3 Supporting Decisions
2.4 New Insights
2.5 Correcting Errors
3 The Application of Top 6 Regression Algorithms Used in Mining Industrial Data
3.1 Simple Linear Regression
3.2 Lasso Regression (Least Absolute Selection Shrinkage Operator)
3.3 Logistic Regression
3.4 Support Vector Machines
3.5 Multivariate Regression Algorithm
3.6 Multiple Regression Algorithms
4 Conclusion
References
Blockchain Based System for Human Organ Transplantation Management
1 Introduction
2 Participants in an e-Transplantation Blockchain Network
3 Benefits of Blockchain Based Organ Transplantation System
3.1 Secure and Reliable Data Sharing Across Various Entities in a Blockchain
3.2 Distributed Ledger and Database [8]
3.3 Peer-to-Peer Transmission Between Various Stakeholders in the Network [8]
3.4 Transparent Communication for All Entities [8]
3.5 Irreversibility and Immutability of Records [8]
3.6 Traceability of All Transactions
3.7 Transactions and Exchange of Digital Patient Data Direct from EHR
4 Envisaged Results of the Proposed System
5 Conclusion and Future Work
6 Compliance with Ethical Standards
References
Identification of Melanoma Using Convolutional Neural Networks for Non Dermoscopic Images
1 Introduction
1.1 Related Works
1.2 Dataset
2 Proposed Methodology
3 Results and Discussion
4 Conclusion and Future Works
References
Exploitation of Data Mining to Analyse Realistic Facts from Road Traffic Accident Data
1 Introduction
2 Data Mining
3 Literature Review
4 Data Mining Methods for Road Traffic Analysis
4.1 Hierarchical Clustering (HC)
4.2 Random Forest (RF)
4.3 Classification and Regression Tree (CART)
5 Methodology for Accident Analysis
6 Conclusion
References
A Deep Learning Approach for Segmenting Time-Lapse Phase Contrast Images of NIH 3T3 Fibroblast Cells
1 Introduction
2 Materials and Methods
2.1 Reference Data
2.2 Methodology
3 Results
4 Conclusion
References
Flow Distribution-Aware Load Balancing for the Data Centre over Cloud Services with Virtualization
1 Introduction
2 Virtualization
3 VMMB: Virtual Machine Memory Balancing for Unmodified Operating Systems
4 Energy-Efficient Thermal-Aware Autonomic Management of Virtualized HPC Cloud Infrastructure
5 Virtualization Migrant over the Cloud with Load Balancing with Threshold (VMOVLBWT) System
5.1 Requirements
6 Organizational Flow
7 Discussion
8 Experimental Phase
9 Conclusion
References
Disease Severity Diagnosis for Rice Using Fuzzy Verdict Method
1 Introduction
2 Development of Fuzzy Expert System
2.1 Rice Dataset
2.2 Elements of Fuzzy Expert System
2.2.1 Fuzzification Phase
2.2.2 Rice Fuzzy Verdict Method
3 MATLAB Results
4 Assessment of System Performance
5 Conclusions and Future Research
References
HELPI VIZ: A Semantic Image Annotation and Visualization Platform for the Visually Impaired
1 Introduction
2 Existing Tools for Image Annotation
3 HELPI VIZ: A Semantic Image Annotation and Visualization Platform
3.1 Helpi Viz: Annotation
3.2 Helpi Viz: Search and Visualization
4 Conclusion
References
A Survey of Multi-Abnormalities Disease Detection and Classification in WCE
1 Introduction
2 Review of Literature
2.1 Abnormality Detection of Bleeding
2.2 Abnormality Detection of Polyp
2.3 Abnormality Detection of Ulcer
2.4 Abnormality Detection of Hookworm
2.5 Abnormality Detection of Tumor
3 Conclusion
References
Detection of Alzheimer's Disease in Brain MR Images Using Hybrid Local Graph Structure
1 Introduction
2 Proposed Method
2.1 Local Graph Structure
2.2 Extended Local Graph Structure
2.3 Hybrid Local Graph Structure
3 Result and Analysis
4 Conclusion
References
A Review on Object Tracking Wireless Sensor Network an Approach for Smart Surveillance
1 Introduction
2 Object Tracking Wireless Sensor Network [OTWSN] Structure and Terminologies
2.1 Object Tracking Wireless Sensor Network [OTWSN]
2.2 Discrete Object
2.3 Continuous Object
3 Research Challenges and its Solution in OTWSN
3.1 Computation and Communication Cost
3.2 Energy Constraint
3.3 Data Aggregation
3.4 Sensor Technology and Localization Techniques
3.4.1 Object Detection
3.4.2 Object Speed and Sampling Frequency
3.4.3 Object Position Prediction, Tracking Accuracy and Reporting Frequency
3.5 Localization
3.6 Sensor Node Deployment
3.7 Sensor Node Collaboration
4 Analysis and Comparison
5 Future Directions
5.1 Fault Scenarios
5.2 Cross Layer Integration
5.3 Object Prediction Delay and Accuracy
5.4 Hybrid Network Type
6 Summary
7 Conclusion
References
A Mini Review on Electrooculogram Based Rehabilitation Methods Using Bioengineering Technique for Neural Disorder Persons
1 Introduction
2 Human Computer Interaction
3 Rehabilitation Device
4 EOG Recording
5 Literature Survey
6 Conclusion
References
Applications Using Machine Learning Algorithms for Developing Smart Systems
1 Introduction
2 Literature Survey
3 Various Types of Machine Learning
3.1 Supervised Learning (SL)
3.2 Unsupervised Learning (USL)
3.3 Reinforcement Learning (RL)
4 Role of Machine Learning in Smart Systems
4.1 What Is a Smart System?
4.2 What Are the Smart Systems Available Today?
4.3 How Machine Learning Techniques Are Used in Smart Systems
4.4 Major Algorithms Used in Machine Learning
5 Challenges and Issues
6 Conclusion
References
Benchmarking of Digital Forensic Tools
1 Introduction
2 Literature Review
3 Methodology
4 Results
5 Conclusion
References
An Item Based Collaborative Filtering for Similar Movie Search
1 Introduction
2 Proposed Work
2.1 Preprocessing
2.2 Pivot Table Creation
2.3 Cosine Similarity Generation
2.4 Recommendation Engine
3 Conclusion
References
Identification of Musical Instruments Using MFCC Features
1 Introduction
2 System Overview
3 System Design Methodology
3.1 Mel-Frequency Cepstral Coefficients (MFCC)
3.1.1 Database
3.1.2 Support Vector Machine (SVM)
3.1.3 K-Nearest Neighbor Algorithm
4 Results
5 Conclusion
References
An Instance Identification Using Randomized Ring Matching Via Score Generation
1 Introduction
1.1 Video Examination, Surveillance and Object Instance
1.2 Video Motion Detection
2 Proposed Object Instance Searching Technique
2.1 Feature Extraction
2.2 Frame Conversion
2.3 Template Matching
2.4 Feature Matching Algorithm
2.5 Background Subtraction
2.5.1 Ring Matching Algorithm
3 Performance Evaluation
3.1 Dataset
3.2 Confusion Matrix
4 Conclusion
References
Performance Improvement of Multi-Channel Speech Enhancement Using Modified Intelligent Kalman Filtering Algorithm
1 Introduction
2 Proposed Algorithm
3 Simulation Results
4 Conclusion
References
A Collaborative Method for Minimizing Tampering of Image with Commuted Concept of Fragile Watermarking
1 Introduction
1.1 Equations
2 Classification of Watermarking Authentication Schemes
2.1 Fragile Watermarking Methodology
2.1.1 Principle
2.1.2 Embed Registration in LSB
2.1.3 Embedding Process
2.1.4 Self-Embedding
2.2 Semi-fragile Watermarking Methodology
2.2.1 Compression
2.2.2 Pseudo Code (Creation of Validation Bits)
2.2.3 Pseudo Code (Embedding Process)
3 Comparison and Problem Formulation with Experimental Result
4 Conclusion
References
Interval Type-2 Fuzzy Logic Based Decision Support System for Cardiac Risk Assessment
1 Introduction
2 Conceptual Understanding
2.1 Type-1 Fuzzy Set
2.2 Type-2 Fuzzy Set
2.3 Type-2 Fuzzy Logic system
3 Methodology
3.1 Steps to Design Type-2 Fuzzy Inference System
3.1.1 Selection of Antecedent and Consequent Part
3.1.2 Selection of Linguistic Variables
3.1.3 Domain Expert's Knowledge
3.1.4 Fuzzification: Formulation of Type-2 Fuzzy Sets
3.1.5 Properties of Fuzzy Relation
3.1.6 Tolerance to Equivalence Relation
3.1.7 Defuzzification
3.1.8 Formalism of Type-2 Fuzzy Set
3.1.9 Fuzzy Inference Mechanism
4 Results and Discussions
5 Conclusion
References
Classification of Multi-retinal Disease Based on Retinal Fundus Image Using Convolutional Neural Network
1 Introduction
2 Related Work
3 Dataset
4 Proposed Methodology
5 Algorithm
5.1 Alexnet Architecture
6 Result
7 Conclusion
8 Future Directions
References
Accurate Techniques of Thickness and Volume Measurement of Cartilage from Knee Joint MRI Using Semiautomatic Segmentation Methods
1 Introduction
2 Methodology
3 Results and Discussion
4 Conclusion
References
A Hybrid Approach Using Machine Learning Algorithm for Prediction of Stock Arcade Price Index
1 Introduction
2 Literature Survey
3 Experiments and Results
4 Conclusion
References
Bio-inspired Fuzzy Model for Energy Efficient Cloud Computing Through Firefly Search Behaviour Methods
1 Introduction
2 Literature Survey
2.1 Load Balancing in Cloud Computing
3 Proposed Work
3.1 Biological Model: Firefly Behaviour
3.2 Design of Algorithms
4 Results
5 Conclusion and Future Work
References
Neural Association with Multi Access Forensic Dashboard as Service (NAMAFDS)
1 Introduction
2 Forensics Over the Cloud
3 Evidence Segregation
4 Neural Association with Multi Access Forensic Dashboard and Feature Extraction
5 Gabor Filter
6 Training the Neural Network
7 Lapla Image Encryption Description and Pseudo Code
8 Discussion
9 Conclusion and Future Work
References
Exploration of the Possible Benefits for the Complementary Perfect Matching Models with Applications
1 Introduction
2 Real Life Application
3 Restrained Step Domination Number for Cartesian Product of Graphs
4 Conclusion
Annexure 1
References
Cloud Robotics in Agriculture Automation
1 Introduction
2 A Brief History
2.1 Cloud Computing
2.2 Cloud Robotics
3 Research and Development in Agricultural Robots
3.1 Cotton Picking
3.2 Nursery Planting
3.3 Crop Seeding
3.4 Crop Monitoring and Analysis
3.5 Fertilizing and Irrigation
3.6 Crop Weeding and Spraying
3.7 Thinning and Pruning
3.8 Picking and Harvesting
4 Cloud Enabled Agri Robots in Smart Farming
5 Conclusion and Future Scope
References
Comparative Analysis of EMG Bio Signal Based on Empirical Wavelet Transform for Medical Diagnosis
1 Introduction
2 Materials and Methods
2.1 Empirical Wavelet Transform
3 Results and Discussion
3.1 EMG Signal Analysis with Synthetic and Real Data Set
4 Conclusion
References
Efficient Prevention Mechanism Against Spam Attacks for Social Networking Sites
1 Introduction
2 Literature Survey
3 Problem Identification
4 Research Methodology
4.1 Adaboost Logit Boost Algorithm
4.2 Chaos Genetic Algorithm
4.3 Proposed Hybrid Optimization
5 Experimental Results
6 Conclusion
References
PPG Signal Analysis for Cardiovascular Patient Using Correlation Dimension and Hilbert Transform Based Classification
1 Introduction
2 Materials and Methods
2.1 Correlation Dimension
2.2 Detrend Fluctuation Analysis
2.3 Hilbert Transform
3 Post Classifiers for Classification
3.1 KNN Classifier
3.2 Firefly Algorithm
3.3 TR-LDA
4 Results and Discussion
5 Conclusion
References
A Robust and Fast Fundus Image Enhancement by Dehazing
1 Introduction
2 Related Works
3 Proposed Approach
3.1 Luminosity Correction
3.2 Reducing Contrast Variations
3.3 Filtering
3.4 Phase A
3.5 Phase B
4 Experimental Results
5 Conclusion
References
A Generalized Study on Data Mining and Clustering Algorithms
1 Introduction
1.1 The Need for Clustering
2 Clustering and Its Types
2.1 Partitioned Clustering
2.1.1 k-Means Algorithm
2.1.2 k-Medoids Algorithm
2.2 Hierarchical Based Clustering
2.2.1 Agglomerative Clustering Algorithm
2.2.2 Divisive Based Clustering
2.3 Density Based Clustering
2.3.1 DBSCAN (Density Based Spatial Clustering of Applications with Noise)
3 Conclusion
References
Multimodal Medical Image Fusion Using Discrete Fractional Wavelet Transform (DFRWT) with Non-subsampled Contourlet Transform (NSCT) Hybrid Fusion Algorithm
1 Introduction
2 Background Works
3 Proposed Method
3.1 Discrete Fractional Wavelet Transform [18–20]
3.2 Non-subsampled Contourlet Transform (NSCT) [9, 18, 19, 21]
3.3 Hybrid Algorithm
3.3.1 Steps for Hybrid Algorithm (NSCT-DFRWT)
4 Implementation Results
4.1 Evaluation Metrics
5 Conclusion
References
Emotion Recognition on Multi View Static Action Videos Using Multi Blocks Maximum Intensity Code (MBMIC)
1 Introduction
2 Related Works
3 Proposed Work
3.1 Frame Subtraction
3.2 Feature Extraction
3.3 Random Forest (RF)
4 Results and Discussions
4.1 Quantitative Evaluation
4.2 GEMEP Dataset
4.3 Random Forest Classifiers Based Experimental Results
5 Conclusion and Future Work
References
Survey on Security Systems in Underwater Communications Systems
1 Introduction
2 Security Attacks in Underwater Communication Network
2.1 Jamming
2.2 Wormhole Attack
2.3 Sink Hole Attack
2.4 Hello Flood Attack
2.5 Sybil Attack
2.6 Selective Forwarding
3 Security Requirements for Underwater Acoustic Communication
3.1 Authentication
3.2 Confidentiality
3.3 Integrity
3.4 Availability
3.5 Source Localization
3.6 Self-Organization
3.7 Data Freshness
4 The Security Issues in UWSN Layer, Its Vulnerabilities and Defenses Against Threats
4.1 Data Link Layer
4.2 Network Layer
4.3 Transport Layer
4.4 Application Layer
5 Layer Attacks and Suggested Counter Measures
5.1 Physical Layer
5.2 MAC Layer
5.3 Network Layer
5.4 Transport Layer
6 Analysis and Evaluation of Educational Benefits
7 Future Scope
8 Conclusion
References
A Review on Meta-heuristic Independent Task Scheduling Algorithms in Cloud Computing
1 Introduction
2 Escalation (Optimization) Parameters
2.1 User's View
2.2 Service Provider's View
3 Genetic Algorithm (GA) for Task Scheduling
4 Ant Colony Optimization (ACO) for Task Scheduling
5 Particle Swarm Optimization (PSO) for Task Scheduling
6 League Championship and Pareto Optimization for Task Scheduling
7 Observations
8 Conclusion
References
Facial Expression Recognition for Human Computer Interaction
1 Introduction
2 Proposed Facial Emotion Detection Methodology
2.1 Face Detection and Pre-Processing
2.2 Facial Features Extraction
2.3 Facial Expressions Classification
3 Experiment and Results
3.1 The JAFFE and the CK+ Database
3.2 Results
4 Conclusion
References
Evolutionary Motion Model Transitions for Tracking Unmanned Air Vehicles
1 Introduction
2 Positioning the Drone
3 Tracking Filter
4 Fuzzy Motion Models
5 Rule Base
6 Evolutionary Approach
7 Analysis and Findings
8 Conclusion
References
A Survey on Major Classification Algorithms and Comparative Analysis of Few Classification Algorithms on Contact Lenses Data Set Using Data Mining Tool
1 Introduction
1.1 Classification and Prediction
1.2 Clustering
1.3 Association Rule Mining
1.4 Outlier Analysis
2 Classification Algorithms
2.1 Decision Tree Induction
2.2 Random Forest
2.3 CART
2.4 REP Tree
2.5 Random Trees
3 Metrics for Classification
3.1 Confusion Matrix
3.2 Mean Absolute Error
3.3 Root Mean Square Error
4 Results and Comparisons
4.1 Results
5 Overall Comparison of J48, REP, Random Forest and Tree with Training Dataset
6 Conclusion
References
Segmentation of Blood Vessels in Retinal Fundus Images for Early Detection of Retinal Disorders: Issues and Challenges
1 Introduction
2 Related Works
2.1 Survey on Preprocessing Techniques
2.2 Survey on Image Enhancement
2.3 Survey on Segmentation Techniques
2.4 Survey on Optimization Techniques
2.5 Evaluation Metrics
3 Findings of the Survey
References
Interrogation for Modernistic Conceptualization of Complementary Perfect Hop Domination Number with Various Grid Models
1 Introduction
2 CPHD Number for Some Standard Graphs
3 CPHD Number for Some Mirror Graphs
4 CPHD Number for Some Special Type of Graphs
5 Conclusion
References
Temporal Change Detection in Water Body of Puzhal Lake Using Satellite Images
1 Introduction
2 Study Area and Data Collection
2.1 Computation of Spectral Indices
2.2 Reference Maps
3 Methodology
3.1 Flow Chart
4 Results and Discussion
4.1 Change Detection using Iso Clustering
4.2 Change Detection using NDWI and MNDWI Indices
5 Conclusion
References
Content Based Image Retrieval: Using Edge Detection Method
1 Introduction
2 Edge Detection Techniques
3 Proposed Method for Combined Edge Detection Method
3.1 Roberts Edge Model
3.2 Sobel Edge Model
3.3 Prewitt Operator Edge Model
3.4 Laplacian of Gaussian (LoG) Model
3.5 Canny Edge Model
3.6 Combined Edge Detection Algorithm
4 Performance Evaluation
4.1 Performance Analysis
5 Conclusion
References
Sentiment Analysis to Quantify Healthcare Data
1 Introduction
2 Related Work
3 Proposed Model
3.1 Datasets or Data Files
4 Working of System Components
4.1 Text Score Calculator
4.2 Clustering and Labelling Algorithm
4.3 Classification Algorithm
5 Performance Analysis and Comparison
6 Conclusion and Future Scope
References
Convolutional Neural Network with Fourier Transform for Road Classification from Satellite Images
1 Introduction
2 Literature Survey
3 Proposed Method
3.1 Tools Employed
3.2 Overview of the Solution
3.3 Convolution Neural Network Configuration
3.4 Training
4 Results
5 Conclusion
References
Prediction of Gender Using Machine Learning
1 Introduction
2 Proposed Solution
2.1 Online Dataset
2.2 Attributes of the Dataset
3 Block Diagram
4 Algorithms on the Dataset
5 Class Mapping
6 Data Split
7 Transformation
7.1 Logistic Regression
7.2 Decision Tree Algorithm
7.3 Random Forest Classifier
8 Conclusion
8.1 Acoustics Values
9 Future Work
References
Camera Feature Ranking for Person Re-Identification Using Deep Learning
1 Introduction
2 Related Works
3 Proposed System
3.1 Data Set
3.2 Algorithm Used
3.2.1 Convolution Neural Network
3.2.2 Siamese Network
3.2.3 Feature Ranking
3.2.4 Steps for Feature Ranking
3.2.5 Pseudocode
4 Results
5 Conclusion and Future Works
References
Definite Design Creation for Cellular Components Present in Blood and Detection of Cancerous Cells by Using Optical Based Biosensor
1 Introduction
2 Different Types of Existing Sensors
2.1 Multilocus Sequence Typing Sensors
2.2 Biosensors in the Field of Agriculture
2.3 Biosensors in the Field of Medicine
2.4 Pollution Control Biosensors
2.5 Optical Based Biosensors
2.6 Microbial Biosensors
3 Proposed Technology
3.1 Design
3.2 MEEP Tool
3.3 Results Using R-Soft Tool (Fig. 8, Table 4)
4 Conclusion
5 Results
6 Future Scope
References
Virtual Screening of Anticancer Drugs Using Deep Learning
1 Introduction
2 Literature Review
3 Background
3.1 Decision Tree Algorithm
3.2 Support Vector Machine Algorithm
3.3 Random Forest Algorithm
3.4 Convolutional Neural Network
3.5 Long Short Term Memory
4 Proposed Method
5 Implementation and Results
5.1 Datasets
5.2 Results
6 Conclusion
References
Identification and Detection of Glaucoma Using Image Segmentation Techniques
1 Introduction
2 Flowchart for Glaucoma Detection (Fig. 1)
3 Segmentation Techniques
4 Experimental Results
5 Measuring Parameter
6 Results and Discussions
7 Conclusions
References
Tile Pasting P System Constructing Homeogonal Tiling
1 Introduction
2 Tile Pasting P System (TPPS)
3 Homeogonal Tiling
4 Conclusion
References
Analysis of Breast Cancer Images/Data Set Based on Procedure Codes and Exam Reasons
1 Introduction
2 Characteristics of a Malignant Tumor
3 Breast Cancer Symptoms
4 Related Works
5 Diagnoses
6 Data Collection and Representation
7 Data Analysis (Table 1)
7.1 Rule Coverage and Accuracy
8 Conclusion
References
Empirical Analysis of Machine Learning Algorithms in Fault Diagnosis of Coolant Tower in Nuclear Power Plants
1 Introduction
2 Literature Review
3 Empirical Analysis
3.1 Metrics and Its Significance
3.2 Critical Parameters to Be Monitored in Coolant Tower
4 Results and Discussions
5 Conclusion
References
Marker Controlled Watershed Segmented Features Based Facial Expression Recognition Using Neuro-Fuzzy Architecture
1 Introduction
2 Literature Survey
3 Methodology
3.1 Data Collection
3.2 Preprocessing
3.3 Feature Extraction
3.4 Classification
3.5 Adaptive Neuro-Fuzzy Inference System (ANFIS)
4 Experimental Results and Discussion
5 Conclusion
References
A Review on Sequential and Non-Overlapping Patterns for Classification
1 Introduction
2 Literature Survey
3 Conclusion
References
An Analytical Review on Machine Learning Techniques to Predict Diseases
1 Introduction
1.1 Supervised
1.2 Unsupervised
1.3 Deep Learning
1.4 Semi-Supervised
1.5 Reinforcement
2 Literature Survey
3 Conclusion
References
Driver's Behaviour Analytics in the Traffic Accident Risk Evaluation
1 Introduction
1.1 Traffic Safety
1.2 Driver's Perception Error
2 Literature Survey
2.1 Earlier System
2.2 Existing System
2.3 Disadvantage
3 Proposed Methodology
3.1 Advantages
3.2 Software Requirements
4 Implementation
4.1 Field Experiment and Analysis
4.2 Data Collection
4.3 Attributes of the Dataset
4.4 Data Analytics
4.5 Model
5 Discussion on Results
6 Conclusion
References
Emotion Speech Recognition Through Deep Learning
1 Introduction
2 Problem
2.1 Problem Statement
3 Proposed Solution
4 Experimental Result
5 Experiment on Original Data
6 Main Parameters of the Experiment
7 Confusion Matrix on the Augmented Testing Dataset
8 Conclusion
8.1 Future Enhancement
References
Segmentation Techniques Using Soft Computing Approach
1 Introduction
2 Review of Literature
3 Rationale Study
4 Objectives
4.1 K-means Clustering
4.2 The Fuzzy C-Means (FCM) Clustering Algorithm
4.3 Thresholding
5 Methodology
5.1 Histogram Thresholding
5.2 For K-means Clustering
5.3 For FCM Clustering
5.4 Steps in the Proposed Approach: Input Image, Noise Filter, Feature Selection, Feature Extraction and Classification
5.5 Result Analysis
6 Expected Outcomes
7 Conclusion
References
Detection of Tumor in Brain MR Images Using Hybrid IKProCM and SVM
1 Introduction
2 Related Work
3 Proposed Work
3.1 Acquisition of Data
3.2 Data Enhancement
3.3 Skull Removal Phase
3.4 Clustering
3.5 Extraction of Attributes Using SZM
3.6 Support Vector Machine (SVM)
3.7 Evaluation metrics
4 Result and Discussion
5 Conclusion
References
A Novel Methodology for Converting English Text into Objects
1 Introduction
2 Materials and Methodology
2.1 System Process Flow
2.2 KBIR Algorithm
2.3 Inverse KBIR Algorithm
3 Implementation and Results
3.1 System Implementation
4 Justification and Conclusion
4.1 Conclusion
References
Framework for Adaptive Testing Strategy to Improve Software Reliability
1 Introduction
2 Discussion
3 Problem Statement
4 Proposed Work
4.1 Adaptive Testing with Regression Testing
4.2 Hill Climbing Algorithm
4.3 Genetic Algorithm
5 Conclusion
References
Detection of Primary Glaucoma Using Fuzzy C Mean Clustering and Morphological Operators Algorithm
1 Introduction
2 Proposed Block Diagram
3 Database Collection
4 Image Pre-processing
5 Edge Detection and Background Removal
6 Segmentation of OD and OC Using Fuzzy Clustering Algorithm
7 Feature Extraction of Optic Disc and Optic Cup
8 Computation of CDR
9 Simulation Results
9.1 Case 1: Normal (CDR < 0.4)-Image_1.jpg
9.2 Case 2: Moderate (0.4 < CDR < 0.6): Image_4.jpg
9.3 Case 3: Severe (CDR > 0.6): Image_5.jpg
10 Conclusions
References
An Analytical Framework for Indian Medicinal Plants and Their Disease Curing Properties
1 Introduction
2 Related Work
3 Analytical Framework for Mining Medicinal Plant Properties from Biomedical Literature
3.1 General Architecture
3.2 Candidate MeSH Term Identification from Biomedical Literature
3.3 Probable MeSH term Selection (PMS)
3.4 Informative Sentence Selection (ISS) from Biomedical literature
3.4.1 Description of Data Set for Feature Selection
3.4.2 Feature Extraction
3.4.3 Classification of Candidate Sentences
4 Results and Discussion
5 Conclusion
References
Plant Leaf Recognition Using Machine Learning Techniques
1 Introduction
1.1 Edge Detection
1.1.1 Canny Edge Detection
1.1.2 Prewitt Operator
1.1.3 Sobel Edge Detection
2 Literature Review
2.1 Classification Using Multi Class SVM
2.2 Convolutional Neural Network (CNN)
3 Architecture Diagram
4 Empirical Results
4.1 Edge Detection (Table 1)
4.2 Classification Using SVM (Fig. 4)
4.3 Deep Learning Approach for Leaf Recognition
5 Conclusion
References
Conceptualization of Indian Biodiversity by Using Semantic Web Technologies
1 Introduction
2 Background and Related Work
3 Ontology Engineering Process
4 Results
4.1 The Indian Biodiversity Ontology (InBiOn)
5 Ontology Evaluation
6 Conclusion
References
A New Ensemble Clustering Approach for Effective Information Retrieval
1 Introduction
2 Literature Survey
3 Proposed Work
3.1 Sequential Clustering
3.2 Cluster Based Ensembles Approach (A New Experiment On Distributed Systems)
4 Results and Discussion
5 Conclusion
References
Detection of Cancer Cell Growth in Lung Image Using Artificial Neural Network
1 Introduction
2 Wavelet Transform
3 Encoding
4 Feature Extraction
5 Result and Discussion
6 Conclusion
References
Single Image Dehazing Using Deep Belief Neural Networks to Reduce Computational Complexity
1 Introduction
1.1 Haze: Physical Phenomena
1.2 Haze: Conceptual Model
1.3 Problem Formulation
2 Related Works
3 Proposed Work
4 Results and Discussion
5 Conclusion
References
Measuring Social Sarcasm on GST
1 Introduction
2 Related Works
3 Methodology
3.1 Dataset and Preprocessing
3.2 Sentiment Analyzer and Topic Extractor
3.3 Sarcasm Detection Meter
3.4 Prediction
3.5 Visualization
4 Result and Discussion
5 Conclusion and Futurework
References
A Review on False Data Injection in Smart Grids and the Techniques to Resolve Them
1 Introduction
2 False Data Injection Attacks
2.1 Kalman Filter
2.1.1 Techniques used in Kalman Filter
2.2 Bad Data Detection
2.2.1 Cyber Side
2.2.2 Physical Side
2.3 Phasor Measurement Unit (PMU)
3 Results and Discussion
3.1 Defending Methods
3.1.1 Strict Analysis
3.1.2 Wireless Network in Smart Grids
3.1.3 Introducing Modules
4 Conclusion
References
A Novel Methodology for Identifying the Tamil Character Recognition from Palm Leaf
1 Introduction
2 Materials and Methodology
2.1 Proposed System Work Flow
2.2 Proposed Algorithm for the Palm Leaf Character Recognition
3 Implementation and Results
3.1 Segmentation and Feature Extraction
4 Conclusion and Future Enhancement
4.1 Conclusion
4.2 Future Enhancement
References
Leaf Recognition Using Prewitt Edge Detection and K-NN Classification
1 Introduction
2 Related Work
3 Automatic Identification of Leaf Species
3.1 Prewitt Edge Detection
3.2 K-Nearest Neighbor Classification
4 Conclusion
References
Learning Deep Topics of Interest
1 Introduction
2 Review on Topic Models
3 Deep Topic Models
4 Evaluation with Topic Coherence
5 Proposed Metric: Structured Topic Coherence
6 Discussion
7 Conclusion
References
A Study on Various Bio-Inspired Algorithms for Intelligent Computational System
1 Introduction
2 Types of Bio-Inspired Algorithms
2.1 Ant Colony Algorithm
2.2 Firefly Algorithm
2.3 Bee Colony Algorithm
2.4 Bat Algorithm
2.5 Cuckoo Search Algorithm
2.6 Firefly Algorithm
3 Conclusion
References
Credit Card Fraud Detection in Retail Shopping Using Reinforcement Learning
1 Introduction
2 Related Work
3 System Design
3.1 Reinforcement Learning
3.2 Random Forest
3.3 Evaluation
4 Implementation
4.1
4.2
4.3 Validation Results
4.4 Performance Measures
5 Conclusion
References
Deseasonalization Methods in Seasonal Streamflow Series Forecasting
1 Introduction
2 Monthly Seasonal Streamflow Series
3 Deseasonalization Models
3.1 Padronization
3.2 Moving Average
3.3 Seasonal Difference
4 Predictors
4.1 Periodic Autoregressive
4.2 Extreme Learning Machines
5 Case Study
6 Conclusion
References
Local Painted Texture Pattern for Quality of Content Based Image Retrieval
1 Introduction
2 Related Work
3 Proposed Work
3.1 Algorithm: Local Painted Texture Pattern
3.2 Formation of Pattern Extraction
4 Performance Evaluation
5 Conclusion
References
Deep Learning Architectures for Medical Diagnosis
1 Introduction
2 Related Work
3 Methodology
3.1 K-Means Clustering
3.2 Softmax Regression
3.3 Energy Normalization
3.4 Generalized Linear Neural Network (GLNN) (Fig. 4)
4 Results
5 Conclusion
References
Improved Blog Classification Using Multi Stage Dimensionality Reduction Technique
1 Introduction
2 Methodology
2.1 Blog Representation and Preprocessing
2.2 Feature Reduction
2.3 Pattern Reduction
2.4 Compact Pattern Reduction and Classification
3 Blog Classification
4 Experiments and Results
5 Conclusion
References
Knowledge—Guru System Using Content Management for an Education Domain
1 Introduction
2 Literature Review
3 Proposed Methodology
4 Conclusion and Future Work
References
Deep Learning for Voice Control Home Automation with Auto Mode
1 Introduction
2 System Overview
3 Hardware Development
4 Software Development
5 Working Process
6 Results & Analysis
7 Deep Learning for Home Automation
8 Conclusion
References
Review on Spectrum Sharing Approaches Based on Fuzzy and Machine Learning Techniques in Cognitive Radio Networks
1 Introduction
2 Approaches for Sharing the Spectrum
2.1 Based on Network Architecture
2.2 Based on Spectrum Allocation
2.3 Based on Spectrum Access
3 Comparison of Spectrum Sharing
3.1 Spectrum Management
3.2 Complexity in Spectrum Sharing
3.3 Spectrum Scalability
3.4 Spectrum Allocation Duration
4 Conclusion
References
Artificial Intelligence Based Technique for Base Station Sleeping
1 Introduction
2 System Model
2.1 Model Representation for Power Consumption
2.2 Signal to Interference and Noise Ratio (SINR)
2.3 Traffic Model
2.3.1 Little's Law
3 Broadcast Model
4 Blocking Probability
5 Small Base Station Deployment and Sleeping
6 Results
7 Conclusion
References
A Novel Region Based Thresholding for Dental Cyst Extraction in Digital Dental X-Ray Images
1 Introduction
2 Dental Cyst Segmentation
3 Maximally Stable Extremal Regions (MSER)
4 The Proposed MSER Based Dental Cyst Segmentation
4.1 Steps in MSER Based Segmentation
5 Results and Discussion
5.1 Parameter Evaluation
5.2 Cyst Extraction Using MSER
5.3 Comparison of Dental Cyst Extraction Methods (Binary form)
6 Conclusion
References
Salient Object Detection Using DenseNet Features
1 Introduction
2 Related Work
2.1 CNN Architectures
2.2 Salient Object Detection
3 Proposed System
3.1 DenseNet-121
3.2 Deconvolution Model
4 Implementation
5 Experimental Results
6 Conclusion
7 Future Work
References
Attentive Natural Language Generation from Abstract Meaning Representation
1 Introduction
2 Related Work
2.1 Modular and Planning Based Approaches
2.2 Sequential Stochastic Model
2.3 Encoder-Decoder Architecture
3 Learning Model
3.1 Sequential Model
3.2 Long-Short Term Memory
3.3 Attention Mechanism
4 Realization Model
4.1 Preprocessing
4.2 Encoding & Decoding
4.3 Attention Mechanism
5 Results
6 Conclusion
References
Euclidean Distance Based Region Selection for Fundus Images
1 Introduction
2 Proposed Methodology
2.1 Euclidean Distance Based Region Selection
3 Conclusion
References
Big Data Oriented Fuzzy Based Continuous Reputation Systems for VANET
1 Introduction
2 Problem Statement
3 Related Work
4 Multinomial Bayesian Model
4.1 The Dirichlet Distribution
4.2 Dirichlet Distribution with Prior
4.3 Dirichlet Reputation System
4.4 Collecting Ratings
4.5 Aggregating Rating on Time
4.6 Convergence Values for Reputation Scores
4.6.1 Representing Reputation
4.7 Point Estimate Representation
5 Continuous Ratings
5.1 Multinomial Reputation System
6 Network Model for Continuous Rating in VANET
7 Big Data Oriented Vehicle Reputation System
8 Conclusion
References
Multi-faceted and Multi-algorithmic Framework (MFMA) for Finger Knuckle Biometrics
1 Introduction
2 Background
3 MFMA Framework
3.1 Implementation Module A
3.2 Implementation Module B
3.3 Implementation Module C
3.4 Decision Level Fusion Using Bayesian Approach
4 Experimental Analysis and Results Discussion
4.1 Experiment-1: Performance Evaluation of Integration Module-A
4.2 Experiment-2: Performance Evaluation of Integration Module-B
4.3 Experiment-3: Performance Evaluation of Integration Module-C
4.4 Experiment-4: Performance Evaluation of MFMA Framework
4.5 Experiment-5: Robustness of MFMA Framework Towards Distortions and Deformations in Finger Knuckle Images
5 Conclusions
References
Implementation of SSFCM in Cross Sectional Views of Paediatric Male and Female Brain MR Images for the Diagnosis of ADHD
1 Introduction
2 ADHD and Gender
3 Literature Review
4 Proposed Method
5 Results and Discussion
5.1 Accuracy (%)
5.2 Specificity (%)
5.3 Sensitivity (%)
6 Conclusion
References
Hand Gesture Recognition Using OpenCV and Python
1 Introduction
2 Methods
2.1 Input Capturing
2.2 Background Subtractor
2.3 Running Average
2.4 Histogram of our Hand
2.5 Thresholding & Motion Detection
2.6 Contour Extraction
2.7 Counting Fingers
3 Experimental Results
4 Future Enhancements
5 Conclusion
References
Real Time Facial Recognition System
1 Introduction
2 Basic Recommendations of Facial Recognition
2.1 Existing System
3 Problem Statement
4 Proposed System
5 Conclusion
References

S. Smys Abdullah M. Iliyasu Robert Bestak Fuqian Shi   Editors

New Trends in Computational Vision and Bio-inspired Computing Selected works presented at the ICCVBIC 2018, Coimbatore, India

New Trends in Computational Vision and Bio-inspired Computing

S. Smys • Abdullah M. Iliyasu • Robert Bestak • Fuqian Shi Editors

New Trends in Computational Vision and Bio-inspired Computing Selected works presented at the ICCVBIC 2018, Coimbatore, India

Editors S. Smys Department of CSE RVS Technical Campus Coimbatore, TN, India

Abdullah M. Iliyasu Tokyo Institute of Technology School of Computing Tokyo, Japan

Robert Bestak Department of Telecommunication Engineering Czech Technical University in Prague Prague, Czech Republic

Fuqian Shi College of Information Science & Engineering Wenzhou Medical University Wenzhou, China

ISBN 978-3-030-41861-8    ISBN 978-3-030-41862-5 (eBook)
https://doi.org/10.1007/978-3-030-41862-5

Mathematics Subject Classification (2020): 65D19, 68Uxx, 68T05, 92-08, 92Bxx

© Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

We are honored to dedicate the proceedings of ICCVBIC 2018 to all the participants and editors of ICCVBIC 2018.

Foreword

It is with deep satisfaction that I write this Foreword to the Proceedings of ICCVBIC 2018, held in Coimbatore, Tamil Nadu, on 29–30 November 2018. The conference brought together researchers, academics, and professionals from all over the world, experts in Computational Vision and Bio-inspired Computing. It particularly encouraged research students and early-career academics to interact with the more established academic community in an informal setting, and to present and discuss new and current work. The papers contributed the most recent scientific knowledge in the fields of Computational Vision, Fuzzy Systems, Image Processing, and Bio-inspired Computing, and their contributions helped make the conference as outstanding as it has been. The Local Organizing Committee members and their helpers put much effort into ensuring the success of the day-to-day operation of the meeting. We hope that this program will further stimulate research in Computational Vision, Fuzzy Systems, Image Processing, and Bio-inspired Computing, and provide practitioners with better techniques, algorithms, and tools for deployment. We feel honored and privileged to bring the best recent developments to you through this exciting program. We thank all authors and participants for their contributions. Coimbatore, India

S. Smys


Preface

This Conference Proceedings volume contains the written versions of most of the contributions presented at ICCVBIC 2018. The conference provided a setting for discussing recent developments in a wide variety of topics, including Computational Vision, Fuzzy Systems, Image Processing, and Bio-inspired Computing, and was a good opportunity for participants coming from various destinations to present and discuss topics in their respective research areas. ICCVBIC 2018 aimed to collect the latest research results and applications in Computational Vision and Bio-inspired Computing. This volume includes a selection of 179 papers from the 505 submitted to the conference from universities and industries all over the world. All accepted papers were subjected to strict peer review by 2–4 expert referees, and the papers have been selected for this volume on the basis of their quality and relevance to the conference. We would like to express our sincere appreciation to all authors for their contributions to this book. We extend our thanks to all the referees for their constructive comments on the papers, and we especially thank the organizing committee for their hard work. Finally, we thank Springer for producing this volume. Coimbatore, India

Abdullah M. Iliyasu


Acknowledgments

ICCVBIC 2018 would like to acknowledge the excellent work of our conference organizing committee and the keynote speakers for their presentations on 29–30 November 2018. The organizers also wish to acknowledge publicly the valuable services provided by the reviewers. On behalf of the editors, organizers, authors, and readers of this conference, we thank the keynote speakers and the reviewers for their time, hard work, and dedication to this conference. The organizers wish to acknowledge Dr. Smys, Dr. Joy Chen, Dr. R. Harikumar, and Dr. Jude Hemanth for their discussions, suggestions, and cooperation in organizing the keynote sessions of this conference. The organizers also wish to acknowledge the speakers and participants who attended this conference, and many thanks are given to all who helped and supported it. ICCVBIC would like to acknowledge the contributions made to the organization by its many volunteers and members, who have contributed their time, energy, and knowledge at the local, regional, and international levels. We also thank all the Chairpersons and conference committee members for their support.


Contents

3-Dimensional Multi-Linear Transformation Based Multimedia Cryptosystem . . . S. N. Prajwalasimha

A Computer Vision Based Approach for Object Recognition in Smart Buildings . . . D. Kavin Kumar, Latha Parameswaran, and Senthil Kumar Thangavel

A Cascade Color Image Retrieval Framework . . . K. S. Gautam, Latha Parameswaran, and Senthil Kumar Thangavel

Enhanced Geographical Information System Architecture for Geospatial Data . . . Madhavendra Singh, Samridh Agarwal, Y. Ajay Prasanna, N. Jayapandian, and P. Kanmani

IoT Based Power Management and Condition Monitoring in Microgrid . . . N. Sivankumar, V. Agnes Idhaya Selvi, M. Karuppasamypandiyan, and A. Sheela

A Comparative Performance Study of Cloud Resource Scheduling Techniques . . . Ved Kumar Gupta and Khushboo Maheshwari

Image Context Based Similarity Retrieval System . . . Arpana D. Mahajan and Sanjay Chaudhary

Emotions Recognition from Spoken Marathi Speech Using LPC and PCA Technique . . . V. B. Waghmare, R. R. Deshmukh, and G. B. Janvale

Implementation of Point of Care System Using Bio-medical Signal Steganography . . . S. Thenmozhi, Ramgopal Segu, Shahla Sohail, and P. Sureka

Privacy Assurance with Content Based Access Protocol to Secure Cloud Storage . . . Vitthal Sadashiv Gutte and Kamatchi Iyer

Leaf Recognition Using Artificial Neural Network . . . B. Shabari Shedthi, M. Siddappa, and Surendra Shetty

Data Security in Cloud Using RSA and GNFs Algorithms an Integrated Approach . . . Siju John, D. Dhanya, and Lenin Fred

Machine Learning Supported Statistical Analysis of IoT Enabled Physical Location Monitoring Data . . . Ajitkumar Shitole and Manoj Devare

A Genetic Algorithm Based System with Different Crossover Operators for Solving the Course Allocation Problem of Universities . . . S. Abhishek, Sunil Coreya Emmanuel, G. Rajeshwar, and G. Jeyakumar

Detecting Anomalies in Credit Card Transaction Using Efficient Techniques . . . Divya Jennifer DSouza and Venisha Maria Tellis

T. M. Nived, Juhi Jyotsna Tiru, N. Jayapandian, and K. Balachandran

A Novel Framework for Detection of Morphed Images Using Deep Learning Techniques . . . Mohammed Ehsan Ur Rahman and Md. Sharfuddin Waseem

A Novel Non-invasive Framework for Predicting Bilirubin Levels . . . Aditya Arora, Diksha Chawla, and Jolly Parikh

A Comprehensive Study on the Load Assessment Techniques in Cloud Data Center . . . B. Priya and T. Gnanasekaran

Multimodal Biometric System Using Ear and Palm Vein Recognition Based on GwPeSOA: Multi-SVNN for Security Applications . . . M. Vijay and G. Indumathi

Madhu Bhan, P. N. Anil, and D. T. Chaitra

Ultra Wide Band Monopole Antenna Design by Using Split Ring Resonator . . . Ritesh Kumar Saraswat, Antriksh Raizada, and Himanshu Garg

Green Supply Chain Management of Chemical Industrial Development for Warehouse and its Impact on the Environment Using Artificial Bee Colony Algorithm: A Review Articles . . . Ajay Singh Yadav, Anupam Swami, Navin Ahlawat, and Sharat Sharma

A Novel Dyno-Quick Reduct Algorithm for Heart Disease Prediction Using Supervised Learning Algorithm . . . T. Marikani and K. Shyamala

Impact of Meltdown and Spectre Threats in Parallel Processing . . . Sneha B. Antony, M. Ragul, and N. Jayapandian

Minimum Dominating Set Using Sticker-Based Model . . . V. Sudha and K. S. Easwarakumar

Assistive Technology Evolving as Intelligent System . . . Amlan Basu, Lykourgos Petropoulakis, Gaetano Di Caterina, and John Soraghan

A Bio Potential Sensor Circuit of AFE Design with CT - Modulator . . . M. A. Raheem and K. Manjunathachari

Image Encryption Based on Transformation and Chaotic Substitution . . . S. N. Prajwalasimha and L. Basavaraj

An Efficient Geographical Opportunistic Routing Algorithm Using Diffusion and Sparse Approximation Models for Cognitive Radio Ad Hoc Networks . . . A. V. Senthil Kumar, Hesham Mohammed Ali Abdullah, and P. Hemashree

Traffic Violation Tracker and Controller . . . S. P. Maniraj, Tadepalli Sarada Kiranmayee, Aakanksha Thakur, M. Bhagyashree, and Richa Gupta

PTCWA: Performance Testing of Cloud Based Web Applications . . . M. S. Geetha Devasena, R. Kingsy Grace, S. Manju, and V. Krishna Kumar

Analysis of Regularized Echo State Networks on the Impact of Air Pollutants on Human Health . . . Lilian N. Araujo, Jônatas T. Belotti, Thiago Antonini Alves, Yara de Souza Tadano, Flavio Trojan, and Hugo Siqueira

Detection of Cancer by Biosensor Through Optical Lithography . . . K. Kalyan Babu

Paradigms in Computer Vision: Biology Based Carbon Domain Postulates Nano Electronic Devices for Generation Next . . . Rajasekaran Ekambaram, Meenal Rajasekaran, and Indupriya Rajasekaran

A Secure Authenticated Bio-cryptosystem Using Face Attribute Based on Fuzzy Extractor . . . S. Aanjanadevi, V. Palanisamy, S. Aanjankumar, and S. Poonkuntran

Implementation of Scan Logic and Pattern Generation for RTL Design . . . R. Madhura and M. J. Shantiprasad

Optimization Load Balancing over Imbalance Datacenter Topology . . . K. Siva Tharun and K. Kottilingam

Text Attentional Character Detection Using Morphological Operations: A Survey . . . S. Arun Kumar, A. Divya, P. Jeeva Dharshni, M. Vedharsh Kishan, and Varun Hariharan

IoT Based Environment Monitoring System . . . A. Vidhyavani, S. Guruprasad, M. K. Praveen Keshav, B. Pranay Keremore, and A. Koushik Gupta

Design and Development of Algorithms for Detection of Glaucoma Using Water Shed Algorithm . . . Fazlulla Khan, Ashok Kusagur, and T. C. Manjunath

A Novel Development of Glaucoma Detection Technique Using the Water Shed Algorithm . . . Fazlulla Khan and Ashok Kusagur

Solutions of Viral Dynamics in Hepatitis B Virus Infection Using HPM . . . S. Balamuralitharan and S. Vigneshwari

A Mathematical Modeling of Dengue Fever for the Dynamics System Using HAM . . . S. Balamuralitharan and Manjusree Gopal

Vision-Based Robot for Boiler Tube Inspection . . . Md. Hazrat Ali, Shaheidula Batai, and Anuar Akynov

Qualitative Study on Data Mining Algorithms for Classification of Mammogram Images . . . N. Arivazhagan and S. Govindarajan

Designing a Framework for Data Migration of Odoo ERP PostgreSQL Database into NoSQL Database . . . Krina Shah and Hetal Bhavsar

Juxtaposition on Classifiers in Modeling Hepatitis Diagnosis Data . . . Preetham Ganesh, Harsha Vardhini Vasu, Keerthanna Govindarajan Santhakumar, Raakheshsubhash Arumuga Rajan, and K. R. Bindu

Voltage Stabilization by Using Buck Converters in the Integration of Renewable Energy into the Grid . . . J. Suganya, R. Karthikeyan, and J. Ramprabhakar

OCR System For Recognition of Used Printed Components For Recycling . . . Shubhangi Katti and Nitin Kulkarni

Modern WordNet: An Affective Extension of WordNet . . . Dikshit Kumar, Agam Kumar, Man Singh, Archana Patel, and Sarika Jain

Analysis of Computational Intelligence Techniques for Path Planning . . . Monica Sood, Sahil Verma, Vinod Kumar Panchal, and Kavita

Techniques for Analysis of the Effectiveness of Yoga Through EEG Signals: A Review . . . Rahul Goyat, Anil Khatak, and Seema Sindhu

Multiobjective Integrated Stochastic and Deterministic Search Method for Economic Emission Dispatch Problem . . . Namarta Chopra, Yadwinder Singh Brar, and Jaspreet Singh Dhillon

Enhanced Webpage Prediction Using Rank Based Feedback Process . . . K. Shyamala and S. Kalaivani

A Study on Distance Based Representation of Molecules for Statistical Learning . . . Abdul Wasee, Rajib Ghosh Chaudhuri, Prakash Kumar, and Eldhose Iype

Comparative Analysis of Evolutionary Approaches and Computational Methods for Optimization in Data Clustering . . . Anuradha D. Thak

Bringing Digital Transformation from a Traditional RDBMS Centric Solution to a Big Data Platform with Azure Data Lake Store . . . Ekta Maini, Bondu Venkateswarlu, and Arbind Gupta

Smart Assist for Alzheimer's Patients and Elderly People . . . B. Swasthik, H. N. Srihari, M. K. Vinay Kumar, and R. Shashidhar

An Unconstrained Rotation Invariant Approach for Document Skew Estimation and Correction . . . H. N. Balachandra, K. Sanjay Nayak, C. Chakradhar Reddy, T. Shreekanth, and Shankaraiah

Smart Assistive Shoes for Blind . . . N. Sohan, S. Urs Ruthuja, H. S. Sai Rishab, and R. Shashidhar

Comparative Study on Various Techniques Involved in Designing a Computer Aided Diagnosis (CAD) System for Mammogram Classification . . . A. R. Mrunalini, A. R. NareshKumar, and J. Premaladha

Traffic Flow Prediction Using Regression and Deep Learning Approach . . . Savita Lonare and R. Bhramaramba

A Comparative Study on Assessment of Carotid Artery Using Various Techniques . . . S. Mounica, B. Thamotharan, and S. Ramakrishnan

Evaluation of Fingerprint Minutiae on Ridge Structure Using Gabor and Closed Hull Filters . . . R. Anandha Jothi, J. Nithyapriya, V. Palanisamy, and S. Aanjanadevi

A Perspective View on Sybil Attack Defense Mechanisms in Online Social Networks . . . Blessy Antony and S. Revathy

Minor Finger Knuckle Print Image Enhancement Using CLAHE Technique . . . L. Sathiya and V. Palanisamy

Learning Path Construction Based on Ant Colony Optimization and Genetic Algorithm . . . V. Vanitha and P. Krishnan

Pneumonia Detection and Classification Using Chest X-Ray Images with Convolutional Neural Network . . . R. Angeline, Munukoti Mrithika, Atmaja Raman, and Prathibha Warrier

An Optimized Approach of Outlier Detection Algorithm for Outlier Attributes on Data Streams . . . Madhu Shukla and Y. P. Kosta

Indo-Pak Sign Language Translator Using Kinect . . . M. S. Antony Vigil, Nikhilan Velumani, Harsh Varddhan Singh, Abhishek Jaiswal, and Abhinav K

Semantic Interoperability for a Defining Query . . . Mamta Sharma and Vijay Rana

Gestational Diabetics Prediction Using Logisitic Regression in R . . . S. Revathy, M. Ramesh, S. Gowri, and B. Bharathi

IOT Based Gas Pressure Detection for LPG with Real Time No SQL Database . . . Danish Saikia, Abdul Waris, Bhumika Baruah, and Bhabesh Nath

Hybrid Steerable Pyramid with DWT for Multiple Medical Image Watermarking and Extraction Using ICA . . . R. Nanmaran and G. Thirugnanam

Dimensional & Spatial Analysis of Ultrasound Imaging Through Image Processing: A Review . . . Kajal Rana, Anju Gupta, and Anil Khatak

A Review on Methods to Handle Uncertainty . . . Sonika Malik and Sarika Jain

Identity-Based Hashing and Light Weight Signature Scheme for IoT . . . K. A. Rafidha Rehiman and S. Veni

Adaptive Particle Swarm Optimization Based Wire-length Minimization for Placement in FPGA . . . P. Sudhanya and S. P. Joy Vasantha Rani

Clustering of Various Diseases by Collagen Gene Using the Positional Factor . . . S. Gowri, S. Revathy, S. Vigneshwari, J. Jabez, Yovan Felix, and Senduru Srinivasulu

Prediction of Water Demand for Domestic Purpose Using Multiple Linear Regression . . . B. N. Chandrashekar Murthy, H. N. Balachandra, K. Sanjay Nayak, and C. Chakradhar Reddy

Implementation of Regression Analysis Using Regression Algorithms for Decision Making in Business Domains . . . K. Bhargavi and Ananthi Sheshasaayee

Blockchain Based System for Human Organ Transplantation Management . . . Benita Jose Chalissery and V. Asha

Identification of Melanoma Using Convolutional Neural Networks for Non Dermoscopic Images . . . R. Rangarajan, V. Sesha Gopal, R. Rengasri, J. Premaladha, and K. S. Ravichandran

Exploitation of Data Mining to Analyse Realistic Facts from Road Traffic Accident Data . . . Namita Gupta and Dinesh Kumar Saini

A Deep Learning Approach for Segmenting Time-Lapse Phase Contrast Images of NIH 3T3 Fibroblast Cells . . . Aruna Kumari Kakumani and L. Padma Sree

Flow Distribution-Aware Load Balancing for the Data Centre over Cloud Services with Virtualization . . . J. Srinivasulu Reddy and P. Supraja

Disease Severity Diagnosis for Rice Using Fuzzy Verdict Method . . . M. Kalpana, L. Karthiba, and A. V. Senthil Kumar

HELPI VIZ: A Semantic image Annotation and Visualization Platform for Visually Impaired . . . Siddharth Prasad, Akhilesh Kumar Lodhi, and Sarika Jain

A Survey of Multi-Abnormalities Disease Detection and Classification in WCE . . . R. Ponnusamy, S. Sathiamoorthy, and R. Visalakshi

Detection of Alzheimer's Disease in Brain MR Images Using Hybrid Local Graph Structure . . . A. Srinivasan, I. Ananda Prasad, V. Mounya, P. Bhattacharjee, and G. Sanyal

A Review on Object Tracking Wireless Sensor Network an Approach for Smart Surveillance . . . Nilima D. Zade, Shubhada Deshpande, and R. Kamatchi Iyer

A Mini Review on Electrooculogram Based Rehabilitation Methods Using Bioengineering Technique for Neural Disorder Persons . . . S. Ramkumar, M. Muthu Kumar, G. Venkata Subramani, K. P. Karuppaiah, and C. Anandharaj

Applications Using Machine Learning Algorithms for Developing Smart Systems . . . M. Nagakannan, S. Ramkumar, S. Chandra Priyadharshini, S. Nithya, and A. Maheswari

Benchmarking of Digital Forensic Tools . . . Mayank Lovanshi and Pratosh Bansal

An Item Based Collaborative Filtering for Similar Movie Search . . . V. Arulalan, Dhananjay Kumar, and V. Premanand

Identification of Musical Instruments Using MFCC Features . . . Sushen R. Gulhane, D. Shirbahadurkar Suresh, and S. Badhe Sanjay

An Instance Identification Using Randomized Ring Matching Via Score Generation . . . Premanand V, Dhananjay Kumar, and Arulalan V

Performance Improvement of Multi-Channel Speech Enhancement Using Modified Intelligent Kalman Filtering Algorithm . . . Tusar Kanti Dash and Sandeep Singh Solanki

A Collaborative Method for Minimizing Tampering of Image with Commuted Concept of Frazile Watermarking . . . Abhishek Kumar, Jyotir Moy Chatterjee, Avishek Choudhuri, and Pramod Singh Rathore

Interval Type-2 Fuzzy Logic Based Decision Support System for Cardiac Risk Assessment . . . G. Trupti and B. Kalyani

Classification of Multi-retinal Disease Based on Retinal Fundus Image Using Convolutional Neural Network . . . A. Vanita Sharon and G. Saranya

Accurate Techniques of Thickness and Volume Measurement of Cartilage from Knee Joint MRI Using Semiautomatic Segmentation Methods . . . Mallikarjunaswamy M. S., Mallikarjun S. Holi, Rajesh Raman, and J. S. Sujana Theja

A Hybrid Approach Using Machine Learning Algorithm for Prediction of Stock Arcade Price Index . . . Shubham Khedkar and K. Meenakshi

Disease Severity Diagnosis for Rice Using Fuzzy Verdict Method . . . M. Kalpana, L. Karthiba, and A. V. Senthil Kumar

Bio-inspired Fuzzy Model for Energy Efficient Cloud Computing Through Firefly Search Behaviour Methods . . . Kaushik Sekaran, P. Venkata Krishna, Yenugula Swapna, P. Lavanya Kumari, and M. P. Divya

Neural Association with Multi Access Forensic Dashboard as Service (NAMAFDS) . . . T. Manikanta Vital, V. Lavanya, and P. Savaridassan

Exploration of the Possible Benifits for the Complementary Perfect Matching Models with Applications . . . G. Mahadevan, M. Vimala Suganthi, and Selvam Avadayappan

Cloud Robotics in Agriculture Automation . . . Vahini Siruvoru and Nampally Vijay Kumar

Comparative Analysis of EMG Bio Signal Based on Empirical Wavelet Transform for Medical Diagnosis . . . M. Karthick, C. Jeyalakshmi, and B. Murugeshwari

Efficient Prevention Mechanism Against Spam Attacks for Social Networking Sites . . . A. Praveena and S. Smys

PPG Signal Analysis for Cardiovascular Patient Using Correlation Dimension and Hilbert Transform Based Classification . . . Harikumar Rajaguru and Sunil Kumar Prabhakar

A Robust and Fast Fundus Image Enhancement by Dehazing . . . C. Aruna Vinodhini, S. Sabena, and L. Sai Ramesh

A Generalized Study on Data Mining and Clustering Algorithms . . . Syed Thouheed Ahmed, S. Sreedhar Kumar, B. Anusha, P. Bhumika, M. Gunashree, and B. Ishwarya

Multimodal Medical Image Fusion Using Discrete Fractional Wavelet Transform (DFRWT) with Non-subsampled Contourlet Transform (NSCT) Hybrid Fusion Algorithm . . . B. Rajalingam, R. Priya, and R. Bhavani

Emotion Recognition on Multi View Static Action Videos Using Multi Blocks Maximum Intensity Code (MBMIC) . . . R. Santhoshkumar and M. Kalaiselvi Geetha

Survey on Security Systems in Underwater Communications Systems . . . S. Prem Kumar Deepak and M. B. Mukesh Krishnan

A Review on Meta-heuristic Independent Task Scheduling Algorithms in Cloud Computing . . . Anup Gade, M. Nirupama Bhat, and Nita Thakare

Facial Expression Recognition for Human Computer Interaction . . . Joyati Chattopadhyay, Souvik Kundu, Arpita Chakraborty, and Jyoti Sekhar Banerjee

Evolutionary Motion Model Transitions for Tracking Unmanned Air Vehicles . . . Metehan Unal, Erkan Bostanci, Mehmet Serdar Guzel, Fatima Zehra Unal, and Nadia Kanwal

A Survey on Major Classification Algorithms and Comparative Analysis of Few Classification Algorithms on Contact Lenses Data Set Using Data Mining Tool . . . Syed Nawaz Pasha, D. Ramesh, and Mohammad Sallauddin

Segmentation of Blood Vessels in Retinal Fundus Images for Early Detection of Retinal Disorders: Issues and Challenges . . . D. Devarajan and S. M. Ramesh

Interrogation for Modernistic Conceptualization of Complementary Perfect Hop Domination Number with Various Grid Models . . . G. Mahadevan, V. Vijayalakshmi, and Selvam Aavadayappan

Temporal Change Detection in Water Body of Puzhal Lake Using Satellite Images . . . Nikhitha, Laxmi Divya, R. Karthi, and P. Geetha

Content Based Image Retrieval: Using Edge Detection Method . . . P. John Bosco and S. K. V. Jayakumar

Sentiment Analysis to Quantify Healthcare Data . . . John Britto, Kamya Desai, Huzaifa Kothari, and Sunil Ghane

Convolutional Neural Network with Fourier Transform for Road Classification from Satellite Images . . . Jose Hormese and Chandran Saravanan

Prediction of Gender Using Machine Learning . . . K. Ramcharan and K. Sornalakshmi

Camera Feature Ranking for Person Re-Identification Using Deep Learning . . . S. Akshaya and S. Lavanya

Definite Design Creation for Cellular Components Present in Blood and Detection of Cancerous Cells by Using Optical Based Biosensor . . . G. Sowmya Padukone and H. Uma Devi

Virtual Screening of Anticancer Drugs Using Deep Learning . . . S. Leya and P. N. Kumar

Identification and Detection of Glaucoma Using Image Segmentation Techniques . . . Neetu Mittal and Sweta Raj

Tile Pasting P System Constructing Homeogonal Tiling . . . S. Jebasingh, T. Robinson, and Atulya K. Nagar

Analysis of Breast Cancer Images/Data Set Based on Procedure Codes and Exam Reasons . . . D. Prabha and M. G. Dinesh

Empirical Analysis of Machine Learning Algorithms in Fault Diagnosis of Coolant Tower in Nuclear Power Plants . . . S. Sharanya and Revathi Venkataraman

Marker Controlled Watershed Segmented Features Based Facial Expression Recognition Using Neuro-Fuzzy Architecture . . . K. Sujatha, V. Balaji, P. Vijaibabu, V. Karthikeyan, N. P. G. Bhavani, V. Srividhya, P. SaiKrishna, A. Kannan, N. Jayachitra, and Safia

A Review on Sequential and Non-Overlapping Patterns for Classification . . . Gajanan Patle, Sonal S. Mohurle, and Kiran Gotmare

An Analytical Review on Machine Learning Techniques to Predict Diseases . . . Dhiraj Dahiwade, Gajanan Patle, and Kiran Gotmare

1349

Driver’s Behaviour Analytics in the Traffic Accident Risk Evaluation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sai Sambasiva Rao Bairaboina and D. Hemavathi

1355

Emotion Speech Recognition Through Deep Learning . . . . . . . . . . . . . . . . . . . . Mohammad Mohsin and D. Hemavathi

1363

Segmentation Techniques Using Soft Computing Approach . . . . . . . . . . . . . . Sudha Tiwari and S. M. Ghosh

1371

˙ Detection of Tumor in Brain MR Images Using Hybrid IKProCM and SVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radha R, Sasikala E, and Prakash M A Novel Methodology for Converting English Text into Objects . . . . . . . . . I. Infant Raj and B. Kiran Bala

1383 1391

Framework for Adaptıve Testıng Strategy to Improve Software Relıabılıty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Prem Jacob and Pravin

1399

Detection of Primary Glaucoma Using Fuzzy C Mean Clustering and Morphological Operators Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Pavithra, T. C. Manjunath, and Dharmanna Lamani

1407

An Analytical Framework for Indian Medicinal Plants and Their Disease Curing Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Niyati Kumari Behera and G. S. Mahalakshmi

1421

Plant Leaf Recognition Using Machine Learning Techniques . . . . . . . . . . . . R. Sujee and Senthil Kumar Thangavel

1433

Conceptualization of Indian Biodiversity by Using Semantic Web Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shama and Sarika Jain

1445

A New Ensemble Clustering Approach for Effective Information Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Archana Maruthavanan and Ayyasamy Ayyanar

1455

Detection of Cancer Cell Growth in Lung Image Using Artificial Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Pandian, S. LalithaKumari, and R. Raja Kumar

1465

Contents

Single Image Dehazing Using Deep Belief Neural Networks to Reduce Computational Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Samuel Manoharan and G. Jayaseelan Measuring Social Sarcasm on GST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. S. Smitha, S. Sendhilkumar, and G. S. Mahalaksmi

xxv

1471 1479

A Review on False Data Injection in Smart Grids and the Techniques to Resolve Them . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Asha, K. Deepika, J. Keerthana, and B. Ankayarkanni

1487

A Novel Methodology for Identifying the Tamil Character Recognition from Palm Leaf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B. Kiran Bala and I. Infant Raj

1499

Leaf Recognition Using Prewitt Edge Detection and K-NN Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Vilasini and P. Ramamoorthy

1507

Learning Deep Topics of Interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. S. Mahalakshmi, S. Hemadharsana, G. Muthuselvi, and S. Sendhilkumar

1517

A Study on Varıous Bıo-Inspıred Algorıthms for Intellıgent Computatıonal System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. S. Mrutyunjaya, R. Arulmurugan, and H. Anandakumar

1533

Credit Card Fraud Detectıon in Retaıl Shopping Using Reinforcement Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L. SaiRamesh, E. Ashok, S. Sabena, and A. Ayyasamy

1541

Deseasonalization Methods in Seasonal Streamflow Series Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hugo Siqueira, Yara de Souza Tadano, Thiago Antonini Alves, Romis Attux, and Christiano Lyra Filho Local Painted Texture Pattern for Quality of Content Based Image Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Sivaprakasam and A. Ayyasamy Deep Learning Architectures for Medical Diagnosis . . . . . . . . . . . . . . . . . . . . . . . Vishakha Malik and S. Maheswari

1551

1561 1569

Improved Blog Classification Using Multi Stage Dimensionality Reduction Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Aruna Devi and T. Kathirvalavakumar

1579

Knowledge—Guru System Using Content Management for an Education Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. Jayashri, K. Kalaiselvi, and V. Aravind

1591

xxvi

Contents

Deep Learning for Voice Control Home Automation with Auto Mode. . . Indranil Saha and S. Maheswari Review on Spectrum Sharing Approaches Based on Fuzzy and Machine Learning Techniques in Cognitive Radio Networks . . . . . . . . Abdul Sikkandhar Rahamathullah, Merline Arulraj, and Guruprakash Baskaran Artificial Intelligence Based Technique for Base Station Sleeping. . . . . . . . Deepa Palani and Merline Arulraj A Novel Region Based Thresholding for Dental Cyst Extraction in Digital Dental X-Ray Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Karthika Devi, A. Banumathi, G. Sangavi, and M. Sheik Dawood Salient Object Detection Using DenseNet Features . . . . . . . . . . . . . . . . . . . . . . . . P. Kola Sujatha, N. Nivethan, R. Vignesh, and G. Akila Attentive Natural Language Generation from Abstract Meaning Representation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radha Senthilkumar and S. Afrish Khan Euclidean Distance Based Region Selection for Fundus Images . . . . . . . . . . Ramakrishnan Sundaram, K. S. Ravichandran, and Premaladha Jayaraman

1605

1615

1623

1633 1641

1649 1659

Big Data Oriented Fuzzy Based Continuous Reputation Systems for VANET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Thenmozhi and R. M. Somasundaram

1665

Multi-faceted and Multi-algorithmic Framework (MFMA) for Finger Knuckle Biometrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Usha, T. Thenmozhi, and M. Ezhilalarasan

1681

Implementation of SSFCM in Cross Sectional Views of Paediatric Male and Female Brain MR Images for the Diagnosis of ADHD . . . . . . . . . K. Uma Maheswary and S. Manju Priya

1701

Hand Gesture Recognition Using OpenCv and Python . . . . . . . . . . . . . . . . . . . . V. Harini, V. Prahelika, I. Sneka, and P. Adlene Ebenezer

1711

Real Time Facial Recognition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ashwini, Vijay Balaji, Srivarshini Srinivasan, and Kavya Monisha

1721

3-Dimensional Multi-Linear Transformation Based Multimedia Cryptosystem

S. N. Prajwalasimha

1 Introduction

As technology matures, new cryptanalysis techniques keep emerging to crack cryptographic algorithms [1-3]. Most existing systems are composed of different permutation and substitution methods [4, 5]. Even when the key is long, presenting a very large number of combinations, a brute-force attacker equipped with high-speed supercomputers may still attempt to find the secret key and cryptanalyze the algorithm [6, 7]. Modern communication systems therefore require a high level of security and authentication [8-13]. Many cryptographic algorithms have adopted the chaos theory of randomness: chaotic maps are used to generate random numbers for substitution. During cryptanalysis, an attacker exercises all plausible chaotic generators with different key combinations to decrypt the information. Algorithmic complexity can be increased further by adding rounds, but it remains only a matter of time before the attacker cryptanalyzes the algorithm. Sensitivity to initial conditions is a defining property of chaotic systems. On this basis, popular chaotic maps such as the 3D Baker map, the 3D Arnold cat map and the logistic map have been adopted for cryptography [14, 15]. Inter-pixel redundancy in images is high, so the correlation between adjacent pixels is strong and should be reduced as much as possible in the cipher image [16-18]. Based on these observations, a new chaotic transformation technique is introduced here; it is 3-dimensional, discrete and real-valued. Moreover, the transformation is reversible, so lossless decryption of the information is possible.

S. N. Prajwalasimha () Department of Electronics and Communication, ATME Research Centre, Mysore, Karnataka, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_1


2 Three Dimensional Multi Linear Transformation (3D-MLT)

Let f : S -> S be a 3-dimensional chaotic map, where S is the phase space (a 3-dimensional cube or 3-dimensional torus). Let F_M be a chaotic cryptographic primitive such that

F_M : {0, 1, ..., M - 1}^3 -> {0, 1, ..., M - 1}^3

For large values of M, F_M approximates f:

f(x', y', z') = ( (x + (δ - 1)y + (ρ - 1)z) mod 2^n,
                  (x + δy + (ρ - 1)z) mod 2^n,
                  (x + (δ - 1)y + ρz) mod 2^n )        (1)

with x, y, z < 2^n and δ, ρ < 2^n, where

x' is the first dimensional equation
y' is the second dimensional equation
z' is the third dimensional equation
n is the information size (bits)
x, y, z are the first, second and third initial conditions
δ and ρ are the primary constants

From the plot in Fig. 1 it can be clearly observed that the randomness of the derived samples is not linear and changes greatly with very small variations in the initial conditions. It can therefore be concluded that the proposed transformation technique is very sensitive to its initial conditions, making the derived samples very difficult to predict.
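The forward map of Eq. (1) can be sketched in a few lines; the constants δ = 4, ρ = 7 below are illustrative choices for demonstration, not values prescribed by the paper:

```python
def mlt3d(x, y, z, delta, rho, n):
    """Forward 3D multi-linear transformation of Eq. (1), modulo 2**n."""
    m = 2 ** n
    xp = (x + (delta - 1) * y + (rho - 1) * z) % m
    yp = (x + delta * y + (rho - 1) * z) % m
    zp = (x + (delta - 1) * y + rho * z) % m
    return xp, yp, zp

# Sensitivity to initial conditions: two seeds differing only in z by 1
a, b = (1, 2, 3), (1, 2, 4)
for _ in range(5):
    a = mlt3d(*a, 4, 7, 8)
    b = mlt3d(*b, 4, 7, 8)
# a and b diverge after only a few iterations
```

Iterating the map on two nearly identical seeds is enough to see the divergence the plot in Fig. 1 illustrates.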

2.1 Inverse 3D-MLT

Consider Eq. (1):

f(x', y', z') = ( (x + (δ - 1)y + (ρ - 1)z) mod 2^n,
                  (x + δy + (ρ - 1)z) mod 2^n,
                  (x + (δ - 1)y + ρz) mod 2^n )

where


Fig. 1 Plot of 3D-MLT with initial conditions

x' = x + (δ - 1)y + (ρ - 1)z        (2)

y' = x + δy + (ρ - 1)z        (3)

z' = x + (δ - 1)y + ρz        (4)

Subtracting Eq. (2) from Eq. (3) gives

y = y' - x'        (5)

Subtracting Eq. (2) from Eq. (4) gives

z = z' - x'        (6)

Substituting Eqs. (5) and (6) into Eq. (2) gives

x = (δ + ρ - 1)x' + (1 - δ)y' + (1 - ρ)z'        (7)

Equations (5), (6) and (7) give the inverse 3D-MLT:

y = (y' - x') mod 2^n        (8)

z = (z' - x') mod 2^n        (9)




x = ((δ + ρ - 1)x' + (1 - δ)y' + (1 - ρ)z') mod 2^n        (10)

Equations (8), (9) and (10) represent the inverse 3D-MLT for a bounded state space.
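The derivation of Eqs. (8)-(10) can be checked mechanically with a round trip through the forward map of Eq. (1); the constants δ = 4, ρ = 7 are again illustrative:

```python
def mlt3d(x, y, z, delta, rho, n):
    """Forward 3D-MLT of Eq. (1)."""
    m = 2 ** n
    return ((x + (delta - 1) * y + (rho - 1) * z) % m,
            (x + delta * y + (rho - 1) * z) % m,
            (x + (delta - 1) * y + rho * z) % m)

def inv_mlt3d(xp, yp, zp, delta, rho, n):
    """Inverse 3D-MLT from Eqs. (8)-(10)."""
    m = 2 ** n
    y = (yp - xp) % m
    z = (zp - xp) % m
    x = ((delta + rho - 1) * xp + (1 - delta) * yp + (1 - rho) * zp) % m
    return x, y, z

# Sample points of the bounded state space round-trip exactly
for point in [(0, 0, 0), (1, 2, 3), (12, 200, 255)]:
    assert inv_mlt3d(*mlt3d(*point, 4, 7, 8), 4, 7, 8) == point
```

The exactness of the round trip is what makes lossless decryption possible.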

3 Proposed Scheme

In the proposed algorithm, encryption is done in two phases: a Transformation (Mapping) phase and a Substitution (Saturation) phase.

3.1 Transformation (Mapping) Phase

The transformation phase maps each pixel position in the host image to obtain the first-stage cipher image, also termed the 3D-MLT image of the host. The same process is carried out for the secret image, and the transformed images of the host and the secret are combined with a logical XOR operation to obtain the second-stage cipher image. The transformation phase performs the following steps:

Step 1: The host image is subjected to the 3D-MLT:

h'(p', q') = h( (x + (δ - 1)y + (ρ - 1)z) mod 2^n, (x + δy + (ρ - 1)z) mod 2^n )        (11)

The initial values considered here are δ = 4, ρ = 1 and n = 8, so that

h'(p', q') = h( (x + 3y) mod 256, (x + 4y) mod 256 )        (12)

where h is the host image and h' is the 3D-MLT image of the host.

Step 2: The secret image is also subjected to the 3D-ML transformation:

s'(p', q') = s( (x + (δ - 1)y + (ρ - 1)z) mod 2^n, (x + (δ - 1)y + ρz) mod 2^n )        (13)

The values considered here are δ = 1, ρ = 5 and n = 8, so that

s'(p', q') = s( (x + 4z) mod 256, (x + 5z) mod 256 )        (14)

where s is the secret image and s' is the 3D-ML transformed image of the secret.

Step 3: The transformed images of the host and the secret are subjected to a pixel-wise logical XOR operation:

r(p, q) = h'(p', q') ⊕ s'(p', q')        (15)

where r is the cipher image of the second stage.
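The steps above can be sketched on synthetic 8-bit images; the maps of Eqs. (12) and (14) are bijections on the 256 x 256 grid because the matrix [[1, a], [1, a+1]] has determinant 1 mod 256, and the random arrays below merely stand in for real host and secret images:

```python
import numpy as np

N = 256  # image side length; arithmetic is mod 256 as in Eqs. (12) and (14)

def transform(img, a, b):
    """Scatter pixel (x, y) to position ((x + a*y) % N, (x + b*y) % N)."""
    out = np.zeros_like(img)
    for x in range(N):
        for y in range(N):
            out[(x + a * y) % N, (x + b * y) % N] = img[x, y]
    return out

rng = np.random.default_rng(0)  # stand-ins for real 256x256 grayscale images
host = rng.integers(0, 256, (N, N), dtype=np.uint8)
secret = rng.integers(0, 256, (N, N), dtype=np.uint8)

# Eq. (15): the second-stage cipher is the XOR of the two transformed images
r = transform(host, 3, 4) ^ transform(secret, 4, 5)
```

Because the mapping is a permutation, every host pixel value survives into the transformed image, only its position changes.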

3.2 Substitution (Saturation) Phase

The substitution phase comprises an S-box of size 2^n x 1 that holds a 256-bit secret key. The secret key undergoes a first set of initial permutations and is then inserted into the S-box at specified locations; the remaining values in the S-box are pre-defined and pre-specified. The second-stage cipher image is then subjected to a pixel-wise logical XOR operation with the S-box in a row-wise manner:

r'(p, q) = r(p, q) ⊕ S-box        (16)

where r' is the cipher image of the third stage, i.e. the cipher image of the first round (Fig. 2).
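The paper does not publish its S-box layout, so the sketch below substitutes a keyed shuffle of the values 0-255 for the real construction; it only illustrates that the row-wise XOR of Eq. (16) is its own inverse:

```python
import numpy as np

key = bytes(range(32))  # placeholder 256-bit key, not a real secret
rng = np.random.default_rng(int.from_bytes(key, "big"))
sbox = rng.permutation(256).astype(np.uint8)  # hypothetical S-box contents

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
cipher = img ^ sbox[:16]       # XOR each row with the same S-box slice
restored = cipher ^ sbox[:16]  # XORing again with the same values undoes Eq. (16)
assert np.array_equal(restored, img)
```

This self-inverse property is exactly what the decryption phase relies on in Eq. (17).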


Fig. 2 Flow diagram of the proposed encryption algorithm: the host and secret images (256 x 256) pass through the 3D-MLT; the transformed images h'(p', q') and s'(p', q') are XORed to give the resultant cipher image r(p, q); and the 256-bit secret key, after initial permutation into the S-box (256 x 1), is XORed with r to give the cipher image r'(p, q)

3.3 Decryption Phase

Step 1: The cipher image obtained from the second stage is logically XORed with the elements of the S-box created from the secret key. The result is the decrypted image of the second stage:

r(p, q) = r'(p, q) ⊕ S-box        (17)

Step 2: The secret image is subjected to the 3D-ML transformation with the same set of initial values as in the encryption stage:

s'(p', q') = s( (x + (δ - 1)y + (ρ - 1)z) mod 2^n, (x + (δ - 1)y + ρz) mod 2^n )

The values considered here are again δ = 1, ρ = 5 and n = 8, so that

s'(p', q') = s( (x + 4z) mod 256, (x + 5z) mod 256 )

where s is the secret image and s' is its 3D-ML transformed image.

Step 3: The decrypted image from Step 1 is logically XORed with the transformed image from Step 2 to recover the host image in its transformed form:

h'(p', q') = r(p, q) ⊕ s'(p', q')        (18)

Step 4: The resultant image is subjected to the inverse 3D-ML transformation to obtain the desired original image:

h(x, y) = h'( ((δ + ρ - 1)p' + (1 - δ)q' + (1 - ρ)r') mod 2^n, (q' - p') mod 2^n )        (19)

The initial values considered here are δ = 4, ρ = 1 and n = 8, so that

h(x, y) = h'( (4p' - 3q') mod 256, (q' - p') mod 256 )        (20)

where h is the original host image and h' is its 3D-ML transformed image.

Table 1 Comparison of mean square error (MSE) and correlation between original and decrypted images under an LSB neutralization attack

Images  | MSE    | Correlation
Lena    | 4.4948 | 0.9763
Baboon  | 3.4061 | 0.9685
Peppers | 3.8869 | 0.9764

4 Experimental Results

Matlab is used for the implementation. Three standard images are considered for the analysis along with a substitution image. Table 1 reports the mean square error (MSE) and correlation between the original and decrypted images, showing very low MSE. The correlation is almost equal to one, indicating that the original and decrypted images remain similar to each other under a least significant bit (LSB) neutralization attack (Tables 2, 3, and 4).
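The MSE and correlation figures of Table 1 follow the standard definitions, sketched here on synthetic data with the LSB of every pixel flipped (a toy stand-in for the attack, not the paper's test images):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(((a.astype(np.float64) - b.astype(np.float64)) ** 2).mean())

def correlation(a, b):
    """Pearson correlation of the flattened pixel values."""
    return float(np.corrcoef(a.ravel().astype(np.float64),
                             b.ravel().astype(np.float64))[0, 1])

orig = np.arange(64, dtype=np.uint8).reshape(8, 8)
lsb_flipped = orig ^ 1  # LSB neutralization changes every pixel by exactly 1
print(mse(orig, lsb_flipped), correlation(orig, lsb_flipped))
```

Flipping only the LSB gives an MSE of exactly 1 and a correlation very close to 1, matching the qualitative pattern of Table 1.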

5 Conclusion

In the proposed cryptosystem, a new three-dimensional chaotic transformation technique is designed and implemented, showing effective random behavior for different initial conditions. This transformation effectively breaks the correlation between adjacent pixels, and the cipher image achieves higher entropy than existing cryptosystems built on other popular chaotic transformation techniques. The algorithm is designed with two levels of security. The key length is 256 bits, so the 2^256 possible combinations make a brute-force attack highly difficult. Even if the attacker finds the key combination, decrypting the information remains very difficult, since it requires the secret image used in the first stage of each round, which is unknown to the attacker; hence the algorithm is hard to cryptanalyze. The obtained cipher image is subjected to security tests such as unified average changing intensity (UACI), number of pixel change rate (NPCR) and mean square error (MSE), and the results obtained are better than those of existing algorithms. The security level can be increased further by enlarging the S-box and adding rounds to the algorithm.
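The UACI and NPCR tests cited above have standard definitions for 8-bit images; a sketch, with random arrays standing in for two cipher images:

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixel Change Rate, in percent."""
    return 100.0 * float(np.mean(c1 != c2))

def uaci(c1, c2):
    """Unified Average Changing Intensity, in percent, for 8-bit images."""
    return 100.0 * float(np.mean(np.abs(c1.astype(int) - c2.astype(int)) / 255.0))

rng = np.random.default_rng(1)
c1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# Independent uniform ciphers score near the ideal NPCR ~99.6% and UACI ~33.5%
```

A good cipher should approach these ideal values when a single plaintext pixel changes.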

Table 2 Comparison of the host images (Lena, Baboon, Peppers), the secret image (Concordaerial), the cipher images of the respective host images, and the histograms of the respective cipher images

Table 3 Entropy of cipher images (ideal ≈ 8) and correlation after substitution

Entropy:
Lena: 5.5407 (Blowfish) [17]; 5.5438 (Twofish) [17]; 5.5439 (AES-256) [17]; 5.5439 (RC4) [17]; 7.5220 [20]; 7.6427 [19]; 7.9950 [21]; 7.9958 [22]; 7.9970 [5]; 7.9971 [23]; 7.9973 (proposed scheme)
Baboon: 7.9947 [23]; 7.9950 [4]; 7.9969 (proposed scheme)
Peppers: 7.9954 [23]; 7.9960 [4]; 7.9973 (proposed scheme)

Correlation after substitution (cipher image): near-zero values for the proposed scheme (-0.0051, -0.023, -0.0042, -0.0030, 0.023, 5.9910e-04), compared with 0.1500 [19]; N/A where not reported

Pseudo code: Image retrieval based on histogram distance

HSV image -> x, the histogram of the query image (a single row vector [x_1, ..., x_288])
y_1, y_2, ..., y_n -> histograms of the database images (each a single row vector [x_1, ..., x_288])
For each y_i:
    compute the distance d_i between y_i and x
    If d_i >= threshold (0.5362):
        retrieve the image as similar to the query
    Else:
        do not retrieve the image
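The steps above can be sketched as follows; the 8 x 6 x 6 = 288 binning is an assumption consistent with the 288-element row vector the pseudocode mentions, and the random array stands in for a real query image:

```python
import numpy as np

def hsv_histogram(img_hsv, bins=(8, 6, 6)):
    """Flattened, normalized HSV histogram: a single row vector of 288 bins."""
    h, _ = np.histogramdd(img_hsv.reshape(-1, 3).astype(float), bins=bins,
                          range=((0, 180), (0, 256), (0, 256)))
    return h.ravel() / h.sum()

def correlation(a, b):
    """Correlation score used to rank database images against the query."""
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(2)
query = rng.integers(0, 180, (32, 32, 3))  # stand-in for an HSV query image
x = hsv_histogram(query)
# Database histograms y_1..y_n would each be scored against x; images whose
# score reaches the threshold (0.5362 in the text) are retrieved.
```

Comparing a histogram with itself yields a perfect score, which is why identical images are always retrieved.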

Since we iterate through all n patches in the image, the running-time complexity is O(n). As there is only one variable, R(x, y), which measures the match metric, the space complexity is O(1). The sliding operation moves the patch one pixel at a time from left to right and top to bottom. For each slide from (x_1, y_1) to (x_n, y_n), the best match is found by computing the correlation as shown in Eq. (3):

R(x, y) = Σ_{x', y'} T(x', y') · I(x + x', y + y')        (3)

Here I denotes the image, T the template and R the result. For retrieving chair images in the database, the correlation coefficient is observed to be greater than or equal to 0.5362. The challenge in this approach is that the patch must be moved one pixel at a time, both left to right and top to bottom, with the metric computed at each position to find the best match. The images retrieved by the shape-based image retrieval system are shown in Fig. 5. Edges are considered one of the strongest features for image retrieval. An analysis has been made with the Laplacian [7], Sobel [8] and Canny edge detection algorithms [9]. The Canny filter is chosen for further operation, since its edge components are comparatively prominent with minimal noise. The edges detected using the Canny, Sobel and Laplacian filters are shown in Figs. 6, 7 and 8 respectively.
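Equation (3) is plain cross-correlation template matching and can be sketched in a self-contained way; the toy image and template below are illustrative:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and evaluate R(x, y) of Eq. (3)."""
    H, W = image.shape
    h, w = template.shape
    R = np.zeros((H - h + 1, W - w + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            R[y, x] = float((template * image[y:y + h, x:x + w]).sum())
    return R

img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0          # a 2x2 bright patch hidden in the image
tmpl = np.ones((2, 2))
R = match_template(img, tmpl)
best = np.unravel_index(R.argmax(), R.shape)  # location of the best match
```

The argmax of R lands exactly where the patch sits, which is how the best match is located after each full sweep.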

A Cascade Color Image Retrieval Framework


Fig. 5 Retrieved images by the shape-based image retrieval system

Fig. 6 Edge detection using Canny Filter

In Canny edge detection, a Gaussian filter first smooths the input image and removes noise. The intensity gradient of the denoised image is then computed, and non-maximum suppression is applied to get rid of spurious responses. Double thresholding is performed to detect potential edges. The query image is compared with the dataset images using an XOR operation, as shown in Fig. 9 and Table 2, and the total number of white pixels in the XOR result is determined. This count represents the magnitude of dissimilarity. Although fixing a near-optimal threshold on this quantity is a challenging task, once it is set, image retrieval based on this strategy achieves state-of-the-art precision. The query image is compared with
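The gradient step of this pipeline can be illustrated with Sobel kernels, one building block of Canny; smoothing, non-maximum suppression and double thresholding are omitted for brevity, and the step image is synthetic:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from 3x3 Sobel kernels, edge-padded."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

step = np.zeros((8, 8))
step[:, 4:] = 255.0          # image with a single vertical edge
mag = sobel_magnitude(step)  # response peaks along the edge columns
```

The response is zero in flat regions and large only around the intensity step, which is what edge-based retrieval exploits.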


K. S. Gautam et al.

Fig. 7 Edge detection using Sobel Filter

Fig. 8 Edge detection using Laplacian Filter

260 chair images in the database, and the lower and upper limits for the threshold on pixel dissimilarity are fixed as 6500 and 7600 respectively. Figure 10 shows the analysis made on the chair images in the database to set the threshold.


Fig. 9 XOR operation between the query and database images

Pseudo code: Image retrieval based on pixel dissimilarity

Input: I -> query image; I_1, I_2, ..., I_N -> database images
For each image in database D = I_1, I_2, ..., I_N:
    // Measure the dissimilarity
    (XOR)_1 = I ⊕ I_1
    (XOR)_2 = I ⊕ I_2
    ...
    (XOR)_N = I ⊕ I_N
If dissimilarity (6500 >
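The dissimilarity measure above reduces to counting set pixels after an XOR of two binary edge maps; a sketch using synthetic edge maps and the paper's 6500-7600 retrieval window:

```python
import numpy as np

def xor_dissimilarity(edges_a, edges_b):
    """Count of white pixels in the XOR of two binary edge maps."""
    return int(np.count_nonzero(edges_a ^ edges_b))

rng = np.random.default_rng(3)        # synthetic 256x256 edge maps
a = rng.integers(0, 2, (256, 256), dtype=bool)
b = a.copy()
b[:27, :] ^= True                     # perturb 27 rows -> 27 * 256 differences
d = xor_dissimilarity(a, b)
retrieved = 6500 <= d <= 7600         # inside the paper's threshold window
```

A pair differing in 27 full rows lands inside the window and would be retrieved; larger perturbations fall outside it.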

2.2 Crop Selection

Under this feature, farmers provide the crop of their choice, the area of cultivation and the budget, as shown in Fig. 3. The system processes the given data and returns the result as yield per hectare. From these inputs the application calculates the costs of human labor, animal labor, machine power, seed, fertilizer and manure, plant protection, irrigation and working capital [1]. The application also calculates the yield per hectare, and the total cost and profit/loss values are displayed in the farmer's Android application, as shown in Fig. 4.

2.3 Soil Analysis

To use this feature, the farmer deposits a soil sample for testing at the nearest Soil Testing Laboratory, where the sample is tested with an NPK sensor [3, 4]. The sensor is attached to an Arduino board to fetch the contents of nitrogen, phosphorus and potassium. The tested values are sent to a Raspberry Pi board through serial

E-agriculture

Fig. 4 Total cost and profit (Crop-Analysis screen for Rice): Human Labour 72654; Animal Labour 1160; Machine Power 21742; Seed Cost 15680; Fertilizer Cost 16560; Plant Protection Cost 3576; Irrigation Cost 398; Working Capital 2296; Yield 108; Total Cost Yielded 13002; Total Cost 133882; Profit 66110

communication between the Arduino board and the Raspberry Pi. On the Raspberry Pi, the PPM (parts per million) value given by the sensor is converted to PPH (parts per hectare). Based on this value, the crop suitable for the soil is looked up in the database, and the result is sent to Firebase, from where it is fetched in the farmer's Android application [5, 6]. This process is shown in Figs. 5 and 6.
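The ppm-to-per-hectare step depends on assumptions about sampling depth and soil bulk density that the text does not state; the factor below assumes a 15 cm furrow slice weighing roughly 2.24 million kg per hectare, purely for illustration:

```python
FURROW_SLICE_KG_PER_HA = 2.24e6  # assumed soil mass in the sampled layer

def ppm_to_kg_per_ha(ppm):
    """Convert a sensor ppm reading to kilograms of nutrient per hectare."""
    return ppm * FURROW_SLICE_KG_PER_HA / 1e6

n_kg_per_ha = ppm_to_kg_per_ha(26)  # N reading of 26 ppm, as in Fig. 6
```

Any deployment would need the conversion factor calibrated to the laboratory's actual sampling protocol.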

2.4 Fertilizers

The application finds the most economical crop to cultivate and also determines how much fertilization is required for any crop of the farmer's choice. Whenever the user wants to check the suitable fertilizers for his soil, the soil is first tested with the NPK sensor, and the required fertilization and its estimated cost can be viewed. The application shows the fertilizers required for the most economical crop [1, 7]. The user can also check the fertilizer cost for a crop of his choice on a soil tested by the NPK sensor: the application calculates the deficiency of nitrogen, potassium and phosphorus for that crop and accordingly computes the cost of fertilizers for it. The application also displays the crop best suited to the soil without fertilizer, as shown in Fig. 7.
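The deficiency arithmetic behind Fig. 7 is simple bookkeeping; the per-crop requirements below are back-computed from the figure's numbers and are assumptions for illustration, not values published in the text:

```python
soil = {"N": 26, "P": 137, "K": 200}      # tested sample values (Fig. 6)
required = {"N": 91, "P": 230, "K": 257}  # assumed requirements for Almonds

# Positive values are deficiencies to fertilize; Fig. 7 reports the N and P
# entries as negative "excess" values (-65 and -93), the same numbers negated.
deficiency = {k: required[k] - soil[k] for k in soil}
```

The fertilizer cost would then be computed from the positive deficiencies and per-kilogram nutrient prices.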

Fig. 5 Confirmation of sample deposited (app screen: "Deposit Soil Sample" and "Soil Sample Received" confirmations, followed by RESULTS)

Fig. 6 NPK values for the sample soil (tested soil sample data: N: 26, P: 137, K: 200; soil suitable for: Tomato)

Fig. 7 Information for the Almonds crop (fertilizer data for the tested soil N: 26, P: 137, K: 200, suitable for Tomato; for Almonds: nitrogen in excess: -65, phosphorous in excess: -93, potassium deficiency: 57, fertilizer cost: 2611)

2.5 Selling Crops

Farmers can post information about the crops they want to sell. They need to enter fields such as crop name, quantity of crop for sale, expected value per kg, and harvest date, as shown in Fig. 8.

2.6 Pre-production Management

Most farmers grow the same kind of crop, which often leads to excess production and hence to a price drop for that crop. In such a situation the farmers are likely to suffer financial loss. With pre-production management the farmer can be assured that the crop is not over-grown, so there is little chance of a price drop [2]. Under this feature, a farmer can register the crop he wants to grow. The registration succeeds only if the area on which he wants to grow the crop is less than or equal to the area still available for that particular crop; otherwise an error message is displayed. The interface of this activity is shown in Fig. 9. Here the farmer enters the crop name and the total area on which he wants to

Fig. 8 Crop information for sale (form fields: crop name, quantity in kg, expected value, harvest date)

Fig. 9 Approval for crop cultivation (pre-production form: crop name and cultivation area)

Fig. 10 Area available for wheat cultivation (entry: Wheat, area 200000; total cultivated area: 1012 acres; available area: 2988 acres)

grow the crop. After the form is submitted, the data is stored and updated only if the specified area is less than or equal to the available area for that crop in the database; otherwise an error message is displayed. The remaining available area is displayed as shown in Fig. 10.
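The registration check amounts to a compare-and-decrement against a per-crop quota; the 4000-acre total below is inferred from Fig. 10 (1012 acres cultivated plus 2988 available) and is illustrative:

```python
available = {"Wheat": 4000 - 1012}  # acres still available, per Fig. 10

def register(crop, area):
    """Accept a registration only while it fits the remaining area."""
    remaining = available.get(crop, 0)
    if area <= remaining:
        available[crop] = remaining - area
        return True, available[crop]
    return False, remaining

ok, left = register("Wheat", 500)        # fits: remaining area shrinks to 2488
rejected, _ = register("Wheat", 200000)  # the oversized Fig. 10 request fails
```

An oversized request leaves the stored quota untouched, which matches the error-message behavior described above.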

2.7 Buying Agricultural Products

Whenever the farmer wants to buy agricultural products, he enters the products he wants to buy. He can specify the quantity of each item and increment or decrement the purchase quantity, as shown in Fig. 11. On clicking the Checkout button, the list of products is displayed along with quantities and prices. An OTP is sent to the registered mobile number only after the Confirm button in Fig. 12 is clicked.

Fig. 11 Item buying details (Buying Products screen: budget 2000; Fertilizers 1000 x 1; Seeds 500 x 2)

Fig. 12 Confirmation for purchase of items (Buying Products screen, total amount 2000: Fertilizers, quantity 1, price 1000; Seeds, quantity 2, price 1000)


2.8 Bidding Activity/Market Price/Weather Forecast

A farmer can post information about the crop he wants to sell, and a buyer can view it. If the farmer is satisfied with the bid amount offered by a buyer, he contacts the buyer. Market Price displays the market rates of crops and rental lands: the farmers and buyers can view the market rates of various crops and vegetables in the town. This feature also displays lands available for cultivation along with the owner's contact number; any farmer interested in such resources for cultivation can contact the owner about rentals. A weather forecast is also displayed, implemented with the help of a Google API that provides the weather information for a particular day.

3 Methodology

3.1 Tools and Technologies

The system consists of farmers who access this application through an Android smartphone. The application is developed using the Android Studio IDE [8, 9], a framework used to design the interface of the application and to write the Java code containing the application logic. IoT is used to get the values from the sensors.

3.2 Databases

Firebase is used for the database, authentication and storage. The database has a Crop-selection table with attributes crop name, cultivation area and budget. A table called Crop contains the crop name and its nitrogen, phosphorous and potassium requirements; other attributes of this table include human labor cost per hectare, seed cost per hectare, and so on. From these two tables a cost-benefit analysis can be produced for the farmer. Another table holds the nitrogen, phosphorous and potassium contents of a soil sample as well as of a fertilizer. The Buyer and Seller tables contain all details of buyers and farmers, including their contact details. Pre-production management uses data such as the available area for cultivating each crop in a particular town; comparing this value with the user-specified cultivation area, a farmer is informed whether he can grow the specified crop, and once confirmed the available area is updated. The Bidding table contains the bidding prices for each crop, and each record in this table has a farmer associated with it. For selling and

244

M. Bhan et al.

buying activity we have an item table. All the above tables are in a 3NF state and join operations are performed to support all above mentioned features [9, 10].
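As a sketch of the cost–benefit join described above, the following uses SQLite with a hypothetical schema and invented figures; the column names and costs are illustrative only, not the app's actual Firebase structure:

```python
import sqlite3

# Hypothetical schema mirroring the Crop and Cropselection tables described
# above; all column names and cost figures are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Crop (
    crop_name TEXT PRIMARY KEY,
    nitrogen REAL, phosphorous REAL, potassium REAL,
    labor_cost_per_ha REAL, seed_cost_per_ha REAL
);
CREATE TABLE Cropselection (
    crop_name TEXT REFERENCES Crop(crop_name),
    cultivation_area_ha REAL,
    budget REAL
);
INSERT INTO Crop VALUES ('paddy', 120, 60, 40, 15000, 3000);
INSERT INTO Cropselection VALUES ('paddy', 2.0, 50000);
""")

# Join the two 3NF tables to estimate the total cost for the selected
# area and compare it against the farmer's stated budget.
crop, estimated_cost, budget = conn.execute("""
    SELECT s.crop_name,
           (c.labor_cost_per_ha + c.seed_cost_per_ha) * s.cultivation_area_ha,
           s.budget
    FROM Cropselection s JOIN Crop c ON s.crop_name = c.crop_name
""").fetchone()
within_budget = estimated_cost <= budget
```

Here the estimated cost is (15000 + 3000) × 2.0 = 36,000, which is within the 50,000 budget, so the app would report the crop as affordable.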

3.3 Architecture

The sensors collect data from the soil sample: in the first step, readings are taken from the NPK sensor. The data is then sent to the cloud, where software processes it; this processing can be as simple as extracting the NPK values. Next, the information is made useful to the end user [11, 12], for example via an alert (email, text, notification, etc.). Communication interfaces exist between the IoT sensor and the Raspberry Pi, between the Raspberry Pi and Firebase, and between Firebase and the Android application. Communication between the Android smartphone and the Firebase database is established over an Internet connection.
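The pipeline above (sense, push to cloud, process, notify) can be sketched with in-memory stand-ins for the NPK sensor, the Firebase store, and the app notification; the threshold values and function names are invented for illustration and do not reflect the real deployment:

```python
# Minimal sketch of the sensor-to-app pipeline described above, with the
# NPK sensor, cloud store, and notification replaced by in-memory stubs
# (the real system uses a Raspberry Pi, Firebase, and an Android app).

cloud_db = {}          # stands in for the Firebase realtime database
notifications = []     # stands in for alerts pushed to the Android app

def read_npk_sensor():
    # A real deployment would read these values over GPIO/serial from
    # the NPK sensor attached to the Raspberry Pi.
    return {"N": 35.0, "P": 12.0, "K": 20.0}

def push_to_cloud(sample):
    # Step 2: the reading is stored under a path in the cloud database.
    cloud_db["soil/latest"] = sample

def process_and_notify(thresholds):
    # Step 3: cloud-side software compares the stored reading against
    # per-nutrient thresholds and raises alerts for any deficits.
    sample = cloud_db["soil/latest"]
    for nutrient, minimum in thresholds.items():
        if sample[nutrient] < minimum:
            notifications.append(f"{nutrient} low: {sample[nutrient]} < {minimum}")

push_to_cloud(read_npk_sensor())
process_and_notify({"N": 40.0, "P": 10.0, "K": 15.0})
```

With the sample values above, only nitrogen falls below its threshold, so a single alert would reach the farmer's phone.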

4 Conclusion

The percentage of the population using mobile phones and apps is increasing, which makes the adoption of these technologies more favorable in developing countries. Reduction in manpower, shorter production time, and a smaller impact on the ecosystem are major benefits of mobile and Internet technology in agriculture. Every technology, however, comes with its own disadvantages: in agriculture, technology leads to a dependency on machines that makes farmers less self-reliant.

References

1. P. Chandra Shekara et al., "Farmer's Handbook on Basic Agriculture", Desai Fruits & Vegetables Pvt. Ltd, 2nd edition, 2016.
2. Ed Keturakis, Tulika Narayan, "India's Potential Best Practices for Food and Nutrition Security", 2011.
3. R. Sindhuja and B. Krithiga, "Soil Nutrient Identification Using Arduino", Asian Journal of Applied Science and Technology (AJAST), Volume 1, Issue 4, pp. 40–42, May 2017.
4. C. G. Kowalenko, "Relationship between extraction methods for soil nutrient testing in British Columbia", Report for Ministry of Agriculture and Lands, 2010.
5. M. L. Stablen, J. W. Ellsworth, D. M. Sullivan, D. Horneck, B. D. Brown, R. G. Stevens, "Monitoring Soil Nutrients Using a Management Unit Approach", PNW 570-E, Oct. 2012.
6. C. Jones and J. Jacobsen, "Plant Nutrition and Soil Fertility", Nutrient Management Modules, No. 2, 2014.
7. R. Sujatha and R. Anitha Nithya, "A Survey on Soil Monitoring and Testing in Smart Farming Using IoT and Cloud Platform", Int. Journal of Engineering Research and Application, Vol. 7, Issue 11, pp. 55–59, 2017.
8. Laird Dornin, G. Blakeike, Masumi Nakamura, "Programming Android", 2nd Edition.
9. Reto Meier, "Professional Android 4 Application Development", 4th Edition, John Wiley and Sons, 2011.
10. Michael Blaha and James Rumbaugh, "Object-Oriented Modeling and Design with UML", 2007.
11. "Real Time Embedded Based Soil Analyzer", International Research Journal of Engineering and Technology (IRJET), Volume 3, Issue 3, March 2014.
12. J. Dhivya, R. SivaSundari, S. Sudha, R. Thenmozhi, "Smart Sensor Based Soil Monitoring System", IJAREEIE, Vol. 5, Issue 4, April 2016.

Ultra Wide Band Monopole Antenna Design by Using Split Ring Resonator

Ritesh Kumar Saraswat, Antriksh Raizada, and Himanshu Garg

1 Introduction

In the fast-growing field of communication, the need for secure, loss-free, and distortionless transmission is increasing day by day. To satisfy this need, communication systems are moving toward higher frequency bands. In this development, the ultra wide band (UWB), ranging from 3.1 to 10.6 GHz, plays an important role and has become an attractive and challenging area of system-design research [1, 2]. A UWB communication system provides very high bandwidth, reduced multipath fading, and low power requirements; such systems are also becoming popular for their reliability, security, and high-speed data transmission over short distances. The main concept behind UWB communication is the accurate and effective transmission of short-interval pulses. For these applications it is essential to have small printed antennas for transmitting and receiving signals [3–5]. However, designing small printed antennas with the desired performance is a challenging task: in general, a UWB antenna must operate over a broad bandwidth with good impedance matching and high-gain radiation in the desired direction. UWB is currently used in imaging radar, communication, and localization applications [6, 7]. This article proposes a UWB antenna design based on the principle of the split ring resonator (SRR) that covers the entire frequency band of interest (Fig. 1). The design has four eight-sided polygon rings of decreasing radius, each with slits, to obtain better results over the required operating band. The advantage of employing a polygon ring is that it covers more area with a smaller amount of electrical contact or material, and hence

R. K. Saraswat · A. Raizada · H. Garg
M.L.V. Govt. Textile and Engineering College, Bhilwara, Rajasthan, India
© Springer Nature Switzerland AG 2020
S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_22



Fig. 1 Evolution of the proposed antenna; (a) Octagonal shape of antenna with feed line; (b) Octagonal multiband antenna; (c) Octagonal multiband antenna with slots

it facilitates a small, compact, and cost-effective antenna design. The design considerations and the most significant antenna characteristics are presented in the subsequent sections [8].

2 Proposed Antenna Configuration

The UWB antenna has dimensions of 22 × 32 mm2 and has been fabricated on an FR-4 (flame-retardant grade 4) substrate with permittivity 4.3 and thickness h = 1.6 mm. A trapezoidal microstrip line with dimensions L8 = 3.16 mm, L7 = 2.2 mm, and L11 = 11 mm feeds the antenna and provides a better impedance match to the coaxial cable connector. The trapezoidal ground plane, with dimensions L9 = 22 mm, L10 = 10 mm, and L12 = 9.5 mm, provides good radiation characteristics in the desired direction as well as better gain and directivity. First, an eight-sided polygon ring with a radius of 10.5 mm and an edge length of 8 mm is fabricated as the radiating patch. The patch is then modified by etching a cylinder of radius 7.7 mm and drawing another polygon ring with a radius of 7.8 mm and edges of length 6 mm. This etching and drawing is continued for the next three iterations with different radii and edge lengths. Slots are then etched, each with length equal to its polygon ring and a common width of 0.5 mm. The radiating patch thus consists of four polygon rings with radii R1 = 10.5 mm, R8 = 7.8 mm, R7 = 5.2 mm, and R6 = 3.02 mm and edge lengths L1 = 8 mm, L2 = 6 mm, L3 = 4 mm, and L4 = 2.5 mm, respectively. The final antenna dimensions after a series of parametric studies are L5 = 0.5 mm, L6 = 32 mm, G1 = 2.1 mm, G2 = 2.2 mm, G3 = 2 mm, G4 = 1.4 mm, R2 = 7.7 mm, R3 = 5.1 mm, R4 = 3 mm, and R5 = 1.5 mm. The proposed antenna is shown in Fig. 2. The structure of the radiating patch and its characteristics are modified to provide better performance in the operating UWB band [8–10].
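As a rough check on the feed-line dimensions, the standard Hammerstad closed-form microstrip formulas (a textbook estimate, not taken from the paper) suggest why the taper helps: for this substrate the wide end of the feed comes out near the 50 Ω of a typical coaxial connector.

```python
from math import log, pi, sqrt

def microstrip_z0(w_mm, h_mm=1.6, er=4.3):
    """Hammerstad closed-form estimate of microstrip characteristic
    impedance, valid here since W/h >= 1 for both feed widths."""
    u = w_mm / h_mm
    # Effective permittivity of the quasi-TEM microstrip mode.
    eps_eff = (er + 1) / 2 + (er - 1) / 2 / sqrt(1 + 12 / u)
    return 120 * pi / (sqrt(eps_eff) * (u + 1.393 + 0.667 * log(u + 1.444)))

z0_wide = microstrip_z0(3.16)   # L8, the wide end of the trapezoidal feed
z0_narrow = microstrip_z0(2.2)  # L7, the narrow end
```

With εr = 4.3 and h = 1.6 mm, the 3.16 mm end evaluates to roughly 50 Ω, consistent with matching a standard coaxial connector, while the 2.2 mm end is somewhat higher.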


Fig. 2 Geometry of the proposed monopole antenna (Top view with parameters)

3 Result and Discussion

The results for the proposed octagonal ultra wide band monopole antenna were obtained by simulation in an appropriate electromagnetic simulator. The width (L7, L8) and trapezoidal shape of the feed line play an important role in determining the antenna's frequency range of operation.

3.1 Return Loss Characteristics

As shown in Fig. 3, the simulated impedance bandwidth for |S11| < −10 dB extends from 3.0 to 14.1 GHz. This range covers the UWB frequency band of 3.1 to 10.6 GHz, and the entire UWB band lies below the −10 dB mark in Fig. 3.
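From the simulated band edges quoted above, the fractional bandwidth works out to roughly 130%, comfortably beyond the 20% (or 500 MHz) threshold commonly used to classify an antenna as ultra wide band:

```python
f_low, f_high = 3.0, 14.1  # GHz, simulated |S11| < -10 dB band from Fig. 3

# Standard fractional-bandwidth definition about the centre frequency:
# BW = 2 * (f_high - f_low) / (f_high + f_low)
fractional_bw = 2 * (f_high - f_low) / (f_high + f_low)
```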


Fig. 3 Simulated return loss characteristics of the proposed antenna

3.2 Gain of Antenna

Figure 4 shows the simulated 3D gain plots of the proposed antenna at several resonant frequencies (3.0, 5.0, 7.0, and 9.5 GHz), for distinct values of phi and theta in the direction of maximum radiation. It can be observed that the gain is lower at lower frequencies (below about 6 GHz); as the frequency increases the directivity increases, and the gain improves accordingly [11, 12]. The gain of the UWB antenna design is shown in Fig. 5; the antenna achieves an acceptable gain of 3.7 dB in UWB mode.
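For interpreting the gain plot, recall the decibel-to-linear conversion: the reported 3.7 dB peak corresponds to a linear power gain of about 2.34.

```python
gain_db = 3.7  # peak UWB-mode gain reported above

# Convert a power gain in decibels to its linear ratio: G = 10^(dB/10).
gain_linear = 10 ** (gain_db / 10)
```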

3.3 Radiation Efficiency

As shown in Fig. 6, the simulated radiation efficiency of the proposed antenna varies from 98.9% down to 85.8% in UWB mode. The antenna therefore maintains an efficiency above 70% in all operating bands, with the efficiency decreasing as the frequency increases.

3.4 E-Plane and H-Plane Radiation Patterns

Figure 7 shows the E-plane radiation patterns of the proposed UWB antenna, and Fig. 8 the H-plane patterns. The patterns are plotted at selected frequencies of 2.4, 5.4, 7.5, and 10 GHz. It can be


Fig. 4 Simulated gain (dB) of the proposed UWB antenna for different frequencies

observed from Fig. 7 that the E-plane radiation patterns are quasi-omnidirectional in nature, whereas Fig. 8 shows that the H-plane patterns resemble a dumbbell shape. The radiation patterns are stable with respect to frequency [13, 14].


Fig. 5 Simulated gain of the proposed UWB antenna

Fig. 6 Simulated radiation efficiency of the proposed UWB antenna



Fig. 7 E-plane radiation pattern at different frequencies for the proposed antenna

4 Conclusion

A UWB octagonal monopole antenna covering the UWB band from 3.1 to 10.6 GHz, with high gain and stable radiation patterns, has been presented and evaluated in this paper. The antenna geometry is simple and compact, making mass production easy. The radiating element, a single-port antenna consisting of four concentric octagonal split rings, can find wide application in the UWB band, such as imaging radar, communication, and localization.


Fig. 8 H-plane radiation pattern at different frequencies for the proposed antenna

References

1. G. Roberto Aiello and Gerald D. Rogerson, "Ultra-wideband Wireless Systems", IEEE Microwave Magazine, June 2003, pp. 36–47.
2. G. Breed, "A summary of FCC rules for ultra wideband communications", High Frequency Electronics, pp. 42–44, Jan. 2005.
3. J. X. Liang, C. C. Choo, C. X. Dong, and C. G. Parini, "Study of a printed circular disc monopole antenna for UWB systems", IEEE Trans. Antennas Propag., vol. 53, no. 11, pp. 3500–3504, 2005.
4. C. K. Aanandan, "Square monopole antenna for ultra wide band communication applications", Journal of Electromagnetic Waves and Applications, Vol. 21, No. 11, pp. 1525–1537, 2007.
5. F. Geran, G. Dadashzadeh, M. Fardis, N. Hojjat, and A. Ahmadi, "Rectangular slot with a novel triangle ring microstrip feed for UWB applications", Journal of Electromagnetic Waves and Applications, Vol. 21, No. 3, pp. 387–396, 2007.


6. J. X. Xiao, X. X. Yang, G. P. Gao, and J. S. Zhang, "Double printed U-shape ultra-wideband dipole antenna", Journal of Electromagnetic Waves and Applications, Vol. 22, No. 8–9, pp. 1148–1154, 2008.
7. L. Liu, J. P. Xiong, Y. Z. Yin, and Y. L. Zhao, "A novel dual F-shaped planar monopole antenna for ultra wideband communications", Journal of Electromagnetic Waves and Applications, Vol. 22, No. 8–9, pp. 1106–1114, 2008.
8. Ritesh Kumar Saraswat, Anand Kishore Chaturvedi, Vinita Sharma, Jagmohan, "Slotted ground miniaturized UWB antenna metamaterial inspired for WLAN and WiMAX applications", IEEE International Conference on Computational Intelligence and Communication Networks (CICN), 23–25 Dec. 2016.
9. Ritesh K. Saraswat and Mithilesh Kumar, "A frequency band reconfigurable UWB antenna for high gain applications", Progress in Electromagnetics Research B, Vol. 64, pp. 29–45, 2015.
10. Ritesh K. Saraswat and Mithilesh Kumar, "Miniaturized slotted ground UWB antenna loaded with metamaterial for WLAN and WiMAX applications", Progress in Electromagnetics Research B, Vol. 65, pp. 65–80, 2016.
11. Ritesh Kumar Saraswat and Mithilesh Kumar, "A reconfigurable patch antenna using switchable slotted structure for polarization diversity", IEEE International Conference on Communication Systems and Network Technologies (CSNT-2015), 4–6 April 2015.
12. Ritesh Kumar Saraswat, Mithilesh Kumar, Sitaram Gurjar, Chandra Prakash Singh, "A reconfigurable polarized antenna using switchable slotted ground structure", IEEE International Conference on Communication Systems and Network Technologies (CSNT-2015), 4–6 April 2015.
13. Ritesh Kumar Saraswat and Mithilesh Kumar, "Planar frequency-band reconfigurable switchable slotted ground UWB antenna", IEEE International Conference on Communication Systems and Network Technologies (CSNT-2016), 5–7 March 2016.
14. Ritesh Kumar Saraswat, Mithilesh Kumar, Ghamanda Ram, Abhishek Singh, "A reconfigurable microstrip bowtie patch antenna with pattern diversity", IEEE International Conference on Communication Systems and Network Technologies (CSNT-2016), 5–7 March 2016.

Green Supply Chain Management of Chemical Industrial Development for Warehouse and its Impact on the Environment Using Artificial Bee Colony Algorithm: A Review

Ajay Singh Yadav, Anupam Swami, Navin Ahlawat, and Sharat Sharma

1 Introduction

Today the environment is a pressing issue, yet public awareness of it remains low: not only in rural society but even in cosmopolitan life there is little real concern. As a result, environmental protection has become a mere government agenda, although it is a matter that concerns the whole of society; unless people feel a natural attachment to nature, environmental protection will remain a distant dream. The environment is directly connected to nature, and all kinds of animals, plants, and other living things together make it up. The various branches of science such as physics, chemistry, and biology study the fundamental principles of their subjects and the related experiments, but today's requirement is also to emphasize practical knowledge of the environment. Modern society should be educated about environmental problems at a broad level, together with information about preventive measures. In today's mechanical era we are going through just such a situation: pollution stands before us as a curse threatening to destroy the entire environment, and the whole world faces a serious challenge. Although texts devoted to the environment are scarce, there is no shortage of reference material; what is needed is to present environmental knowledge in a form the public can easily understand. Society must realize its duty and obligation in this grave situation, and in this way environmental awareness can be created.

Both living and non-living entities together form nature. Air, water, and land belong to the non-living sphere, while living creatures comprise animals and plants, and these components depend on one another for their survival. Although man is the most conscious and sensitive member of the living world, he depends on other animals, plants, air, water, and land to meet his needs. Education is a powerful tool for the multifaceted development of human life; its main objective is to bring physical, mental, social, cultural, and spiritual wisdom and maturity to the individual, and knowledge of the natural environment is essential to fulfilling that objective. The tradition of knowledge about the natural environment has existed from the beginning in Indian culture, but in today's materialistic era circumstances are different: on the one hand there are new inventions in various areas of science and technology, while on the other the human environment is being affected at the same speed. It is necessary that the next generation be made aware, through education, of the changes happening in the environment.

A. S. Yadav
Department of Mathematics, SRM Institute of Science and Technology (Formerly Known as SRM University), Ghaziabad, UP, India
A. Swami
Department of Mathematics, Govt. P.G. College Sambhal, Sambhal, UP, India
N. Ahlawat
Department of Computer Science, SRM Institute of Science and Technology (Formerly Known as SRM University), Ghaziabad, UP, India
S. Sharma
Department of MBA, SRM Institute of Science and Technology (Formerly Known as SRM University), Ghaziabad, UP, India
© Springer Nature Switzerland AG 2020
S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_23
By acquiring knowledge of the interconnections between environment and education, many important functions can be performed in this direction. The environment is deeply related to science, but its education need not involve scientific intricacies: learners should be taught nature and ecological knowledge in simple language, initially in introductory form and only later in its more technical aspects. Knowledge of the environment in the field of education is essential for human security.

In essence, to do all of this correctly and quickly, especially with a large number of products, one needs a system that performs a series of tasks: prepare forecasts, calculate the appropriate safety stock level, set the economic order quantity, set the best discount volume, make adjustments for variation, and provide full visibility of changes in the green supply chain so that one can respond promptly. A green supply chain is only as strong as the relationships that bind vendors, buyers, and other participants together; it is important to see these companies and suppliers as partners in the success of the green supply chain, and this should be a top priority within the organization.


2 Literature Review and Survey of Green Supply Chain Management

Weraikat et al. [1] examine the pharmaceutical reverse supply chain. In this industry the reverse supply chain is not usually owned by a single company, so a decentralized negotiation process is presented to coordinate the collection of unwanted drugs in the customer area; using a Lagrangian relaxation method, the model is solved for a genuine generic pharmaceutical company. Zailani et al. [2] note that green innovation is currently receiving a great deal of international attention due to the growing concern of consumers, governments, and the community about the degradation of natural resources and environmental pollution; the automotive sector is one of the leading generators of industrial waste affecting the quality of the natural environment. Zhang et al. [3] examine the pricing and coordination issues of a single-period green supply chain in which green and non-green products co-exist in the market, and derive the equilibrium results for the two production modes under cooperative and non-cooperative games; their theoretical analysis indicates that different production costs influence the manufacturer's choice of production mode when consumers evaluate the products differently. Shen et al. [4] observe that today's international business environment has compelled many companies to focus on supply chain management to gain competitive advantage; in recent years the supplier selection process has become an important strategic consideration, and with increasing global awareness of environmental protection and the corresponding growth in legislation and regulation, green procurement has become an important issue for companies seeking environmental sustainability.
Giovanni and Vinzi [5] examine the relationship between environmental management (EM) and performance: whether the implementation of an effective internal environmental program is a precondition for a firm's participation in a green supply chain; which environmental practices (internal or external) contribute most to the firm's performance; and whether environmental performance translates into high economic performance. Wu et al. [6] note that green supply chain management has emerged as an important organizational approach to reducing environmental risk, and that choosing the appropriate supplier is an important strategic decision for production and logistics management in many companies. Their study examines the important green supply chain management (GSCM) capability dimensions and firm performance based on electronics-related manufacturing firms in Taiwan; on the basis of a factor analysis, six GSCM dimensions were identified: green manufacturing and packaging, environmental partnership, green marketing, green suppliers, green stock, and green environmental design.


3 Related Works

3.1 Green Chemical Supply Chain

Green raw material delivery is broadly similar to other logistics. Green chemicals are exported to Green warehouses through a production and packaging process at a Green Manufacture, with the shipping method determined by the characteristics of each raw material. Figure 1 shows the process of the green raw material supply chain. Green chemicals are divided into prescription and general types. A Green Product packaging disposal can buy green chemicals from Green Distribution Centers; for prescription chemicals, however, a prescription is required. The Green warehouses deliver raw materials to the Green Distribution Center or the Green Retailer's according to the distribution policy, and the subsequent process differs by raw material type. In particular, the distributor must report to the government the supply, purchase, and use history of therapeutic raw materials. The Green Distribution Center administers chemical raw materials such as injections or mixtures, while raw material stores sell prescription or generic raw materials according to prescriptions. Through this process, green chemicals reach the final Green Product packaging disposal.

Fig. 1 Green chemical supply chain: Raw material → Green Manufacture → Green warehouses → Green Distribution Center → Green Retailer's → Green Product packaging disposal


3.2 Green Inventory Policy

The inventory policy prepares for uncertainty at every phase of the chain, from production to sales; to this end, strategic stocks of raw materials and items are maintained. The ideal inventory level is close to zero, but the Green warehouses hold some reserve stock because of uncertainty: a shortage can occur when the stock is set at the lowest point, and vice versa. According to the strategy used to maintain the optimum storage level, inventory policies can be classified into variable order quantity (VOQ), economic order quantity (EOQ), time order quantity (TOQ), disposal order quantity (DOQ), and environment order quantity (EOQ) policies. The variable order quantity (VOQ) policy is also called the Q system: when the inventory falls below a threshold, the agent automatically orders a specified quantity of the product. VOQ is easy to manage and is appropriate for products whose demand is difficult to manage and predict; however, the growing inventory under VOQ can itself cause stock problems. The TOQ policy is also called the P system: the agent orders the deficit by checking the stock at regular intervals. TOQ must keep a larger safety stock than other methods, because it does not consider stock variability until the next review. Disposal order quantity (DOQ) is a combination of VOQ and TOQ: the manager orders an amount "R" when the inventory falls below the threshold, where "R" is obtained by subtracting the current amount from the appropriate level. Environment order quantity (EOQ) is a periodic policy: the manager examines the stock level at a regular order interval U and replenishes a pre-determined amount "R". This policy is helpful when demand is stable; otherwise it is difficult to determine the order quantity.
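The threshold-triggered (VOQ) and periodic-review (TOQ) behaviour described above can be sketched as follows; the demand series, thresholds, and quantities are invented for illustration and are not the paper's data:

```python
# VOQ: reorder a fixed quantity whenever stock falls below a threshold.
def voq_order(stock, threshold, fixed_qty):
    return fixed_qty if stock < threshold else 0

# TOQ: at fixed review intervals, order the deficit up to a target level.
def toq_order(stock, day, review_period, order_up_to):
    if day % review_period == 0:
        return max(order_up_to - stock, 0)
    return 0

def simulate(policy_order, demand, start=50):
    """Run one policy over a demand series; return (final stock, shortages)."""
    stock, shortages = start, 0
    for day, d in enumerate(demand):
        stock += policy_order(stock, day)   # replenishment arrives first
        if d > stock:
            shortages += d - stock
            stock = 0
        else:
            stock -= d
    return stock, shortages

demand = [8, 12, 5, 20, 9, 14, 7, 11, 16, 6]
voq = simulate(lambda s, t: voq_order(s, threshold=20, fixed_qty=30), demand)
toq = simulate(lambda s, t: toq_order(s, t, review_period=3, order_up_to=60), demand)
```

On this toy series both policies avoid shortages, but the periodic-review policy ends with more stock on hand, illustrating the larger safety stock that TOQ requires.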

4 Model Design

4.1 Green Chemical Supply Chain

Based on an analysis of the process-management results, we model the overall configuration of green inventory management and the pharmaceutical supply chain. To implement green inventory management methods, the goods and their properties along the supply chain must be analyzed. The data components for the inventory analysis include the maximum and minimum stock, the order cycle, and the shipping time; from these we define the sales price, sales volume, delivery fee, order count, average stock count, stock price, and net profit. The raw material management model must be modeled and simulated. The


Raw material supply chain includes a Green Manufacture, Green warehouses, a Green Distribution Center, and Green Retailer's. A Green Manufacture develops and manufactures new green chemicals from raw materials; the chemicals produced are stored under packaging and labeling conditions according to their types and sizes, and the company ships the requested quantities to each Green warehouses. A Green warehouses imports green chemicals from a Green Manufacture and stores them in containers according to type and characteristics; it requests replenishment of raw materials from the Green Manufacture. To order the appropriate amount, the Green warehouses requires green inventory management that considers maintenance and administrative costs. Owing to the structure of the supply chain, both the Green Distribution Center and the raw material stores are located at lower levels: the Green warehouses receives supply requests from Green Distribution Centers and pharmacies and exports the requested amounts, which makes demand difficult to predict. A Green Distribution Center receives green chemicals from a Green warehouses; chemicals can be dispensed directly from the Green Distribution Center through prescription, and unused inventory can be issued to nearby raw material stores when a Green Retailer's requests it. A Green Retailer's is the last step of the supply chain and provides green chemicals directly to Green Product packaging disposals. Generic raw materials can be sold directly, whereas a prescription slip is required for specific chemicals; chemicals can be provided from a Green warehouses or a Green Distribution Center, urgent demand is procured through the Green Distribution Center, and bulk purchases are handled by the Green warehouses.

4.2 Green Chemical Model

The supply chain is made up of four phases: Green Manufacture, Green warehouses, Green Distribution Centers, and raw material stores. The green chemicals are classified into six types (designated A to J), with reference to production volumes and ranking data for specific products. Lead time, price, and demand differ for each product. The margin on a product is 40% of its sales, and we assume that the inventory and shipping costs are the same regardless of product type.

4.3 Green Inventory Policy

In this study we measure, for the green supply chain, the shortage quantity and frequency, the net profit, and the green inventory costs, both individually and in total. Shortage costs are incurred only at the retail store.


In contrast, shipping and inventory costs arise in both phases. We select variable order quantity (VOQ), economic order quantity (EOQ), time order quantity (TOQ), disposal order quantity (DOQ), environment order quantity (EOQ), and the artificial bee colony algorithm for the green inventory management of the Green warehouses. The proposed artificial bee colony algorithm calculates the optimum order at each designated order time, using the present, minimum, and maximum stocks to compute the order for each medication separately. The local and global search coefficients are set to 0.7, the value most widely used to find the optimum point, and the number of iterations is set to 100; the optimum order is taken once the final repetition is over. The fitness function evaluates the current volume and the order quantity, subject to the constraint that the order quantity must exceed the minimum required capacity, and the algorithm maximizes net profit while maintaining the minimum stock level.
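A minimal sketch of an artificial bee colony search of this kind, applied to a single product's order quantity, is given below; the profit function, stock limits, and all constants are invented for illustration, since the paper does not publish its fitness function:

```python
import random

random.seed(0)

# Toy ABC sketch: food sources are candidate order quantities, improved by
# neighbourhood search (employed/onlooker bees) and random restarts (scouts).
# Fitness is a made-up profit: a 40% margin on expected sales minus a
# holding cost, with stock constrained between a minimum and maximum level.
STOCK, MIN_STOCK, MAX_STOCK = 30, 50, 200
EXPECTED_DEMAND, PRICE, HOLDING = 90.0, 10.0, 0.5

def fitness(order_qty):
    level = STOCK + order_qty
    sales = min(level, EXPECTED_DEMAND)
    return 0.4 * PRICE * sales - HOLDING * (level - sales)

def clip(q):
    # Keep the post-order stock level between MIN_STOCK and MAX_STOCK.
    return max(MIN_STOCK - STOCK, min(MAX_STOCK - STOCK, q))

def abc_optimize(n_bees=10, n_iter=200, limit=20):
    foods = [clip(random.uniform(MIN_STOCK - STOCK, MAX_STOCK - STOCK))
             for _ in range(n_bees)]
    trials = [0] * n_bees
    for _ in range(n_iter):
        for i in range(n_bees):
            # Neighbourhood move toward/away from a random partner source.
            partner = random.randrange(n_bees)
            phi = random.uniform(-1, 1)
            candidate = clip(foods[i] + phi * (foods[i] - foods[partner]))
            if fitness(candidate) > fitness(foods[i]):
                foods[i], trials[i] = candidate, 0
            else:
                trials[i] += 1
            if trials[i] > limit:   # scout phase: abandon a stale source
                foods[i] = clip(random.uniform(MIN_STOCK - STOCK,
                                               MAX_STOCK - STOCK))
                trials[i] = 0
    return max(foods, key=fitness)

best_order = abc_optimize()
```

Under these invented constants the profit peaks when stock plus order equals the expected demand of 90, so the search should settle on an order near 60 while respecting the stock bounds.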

5 Industrial Development and Its Impact on Environment

The extraordinary progress of technology has greatly increased our capacity to manufacture goods and raise the standard of living. However, it has also generated side effects such as environmental pollution, whose effect has been a fall in the quality of life. For most of history, the increase in the quality of life produced by new technology outweighed its negative effects on the environment; recently, however, doubts have arisen that further technological development is guaranteed to improve the quality of life. Increased productivity not only accelerates the depletion of raw materials but also degrades the environment through the discharge of waste. On the one hand, the environment is the source of the energy and materials that are converted into goods and services to meet human needs; on the other, it is the sink for the waste and emissions generated by producers and consumers (Fig. 2).

6 Simulation

A green raw material simulation is used to compare and verify the inventory management of the Green warehouses in the supply chain. As shown in the figures, the virtual system comprises 10 Green Manufacture, bulk, 10 Green Distribution Center, and 55 raw material stores. The supply chain simulation was run for 700 virtual days under each management method, and the results were obtained by summing the values over a total of 5000

Fig. 2 The environmental impacts of each stage: Raw material → Green Manufacture → Green warehouses → Green Distribution Center → Green Retailer's → Green Product packaging disposal, with water, air, and waste emissions arising at every stage

runs of simulations. In order to assess each Green inventory management method, we compare sales prices, sales account, order count, delivery cost, stock price and net profit.

6.1 Simulation Result

This simulation identifies the optimal policy while reducing the gap between the order count and the sales count. Both Disposal order quantity (DOQ) and Time order quantity (TOQ) are unsuitable because they show the largest difference (741,280) between the order count and the sales count. Variables order quantity (VOQ) records 583,262 and Environment order quantity (EOQ) records 14,903, comparatively small differences. The proposed artificial bee colony algorithm, however, shows the smallest difference. We find that the proposed model is an optimal inventory policy for the medical supply chain, with the highest net profit. As a result of the simulation, the artificial bee colony algorithm is the most effective method for Green inventory management in the Green raw material supply chain. The algorithm orders a predefined amount according to the inventory, which would ordinarily risk excess stock; however, drug orders fluctuate sharply and drug delivery must manage many items, and under these conditions the artificial bee colony algorithm is a suitable method.


7 Conclusion

The aim of this paper is to analyze an effective inventory control method for Green warehouses in the Green raw material supply chain. We analyze the Green raw material supply chain and perform modeling and simulation. An inventory policy is an important factor in determining order time and quantity, and an important lever for securing optimum benefits in the supply chain. Therefore, to increase profit, the trade-off between consumption and ordering must be reduced. This paper proposes an inventory policy using the Artificial bee colony algorithm, which calculates the optimum order from the existing stock at the designated regular order time. We compare Variables order quantity (VOQ), Economic order quantity (EOQ), Time order quantity (TOQ), Disposal order quantity (DOQ), Environment order quantity (EOQ) and the Artificial bee colony algorithm. The simulation results show the effectiveness of remaining-stock-based orders and of a specified order quantity; the Artificial bee colony algorithm meets both conditions and is a useful way to manage the Green warehouses' inventory policy in the Green raw material supply chain. The limitations of this study are as follows: it is difficult to account for the number of Green distribution centers and Green chemicals, and we did not reflect the features of demand.


A Novel Dyno-Quick Reduct Algorithm for Heart Disease Prediction Using Supervised Learning Algorithm

T. Marikani and K. Shyamala

1 Introduction

By the middle of the twentieth century, cardio-vascular disease (CVD) had become one of the most life-threatening diseases, and it affects a large share of the world's population. CVD has grown rapidly in the modern world due to unhealthy food habits, high cholesterol, lack of exercise, smoking, high blood pressure, diabetes, etc. Medical diagnosis helps to identify the symptoms and causes of the disease, and prediction should be performed in order to reduce the risk of heart disease [2]. Data mining means extracting and mining knowledge from huge amounts of data. It allows users to examine data along many different dimensions through the processes of cleaning, integration, selection, transformation, identification of interesting patterns, and finally presenting the mined knowledge to the user. The following analysis predicts the prevalence of heart disease from a dataset drawn from the Cleveland Clinic Foundation, obtained from the UCI Machine Learning Repository. The paper focuses on the classification of heart disease using several machine learning algorithms such as random forests, k-nearest neighbours, support vector machines and decision trees. The implemented algorithms are used to build a model that best predicts the disease (0 = not present, 1 = present). Wisaeng [3] examined 8 of the 76 standard attributes and produced the result. With further study, it is possible to reduce the attributes from 8 to 7 by applying

T. Marikani () Department of Computer Science, Sree Muthukumaraswamy College, Chennai, Tamil Nadu, India K. Shyamala P.G. and Research Department of Computer Science, Dr. Ambedkar Govt. Arts College (Autonomous), Chennai, Tamil Nadu, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_24


the novel dyno-quick reduct algorithm. The attributes include age, chest pain type, cholesterol, fasting blood sugar, resting electrocardiographic results, exercise-induced angina and resting blood pressure. These attributes help determine which algorithm works most effectively and consistently to predict the presence of heart disease.
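The prediction task over the seven selected attributes can be sketched with a minimal k-nearest-neighbour classifier, one of the algorithms named above. The records and values below are invented for illustration and are not real Cleveland data.

```python
import math
from collections import Counter

# Each record: (age, chest_pain_type, chol, fbs, restecg, exang, trestbps)
# followed by the target (0 = disease not present, 1 = present).
# These rows are made-up illustrations, not real Cleveland rows.
train = [
    ((63, 1, 233, 1, 0, 0, 145), 1),
    ((41, 2, 204, 0, 0, 0, 130), 0),
    ((57, 4, 354, 0, 1, 1, 140), 1),
    ((44, 3, 226, 0, 0, 0, 120), 0),
    ((60, 4, 293, 0, 1, 1, 150), 1),
    ((39, 2, 199, 0, 0, 0, 118), 0),
]

def distance(a, b):
    """Plain Euclidean distance over the attribute tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(query, k=3):
    """Majority vote over the k training records closest to the query."""
    nearest = sorted(train, key=lambda rec: distance(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((58, 4, 300, 0, 1, 1, 148)))  # prints 1
```

A real run would of course standardize the continuous attributes first, since cholesterol dominates the raw Euclidean distance here.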

2 Literature Review

There has been considerable research on the diagnosis of heart disease and on classifying the associated datasets with relatively high classification performance. In papers [1–3], the authors use a heuristic algorithm to determine the optimal feature subset with improved classification accuracy for heart disease diagnosis. To identify the disease in patients using the features available in the data, the authors used a Binary Artificial Bee Colony (BABC) algorithm; the results indicate that BABC–KNN (K-Nearest Neighbour) outperforms the other methods. Keerthika et al. [4, 6] proposed a new feature selection mechanism based on Ant Colony Optimization (ACO) and the Genetic Algorithm (GA). The work is applied in the medical domain to find the minimal reduct and is experimentally compared with Quick Reduct, Entropy-Based Reduct, and other hybrid rough set methods such as the genetic algorithm, ant colony optimization and Particle Swarm Optimization (PSO), showing that this feature selection is best for minimal reductions. According to a recent review by the Indian Council of Medical Research, about 25% of cardiac problems occur in the 25–69 year age group. Hence, the main objective of this work is to predict the possibility of heart disease at an early stage with a smaller number of attributes [7–9].

3 Proposed Work

The main goal of the proposed work is to build a Novel Dyno-Quick Reduct (NDQR) algorithm. The research deals with supervised learning algorithms that help us analyze the prediction of cardio-vascular disease. The work begins with the basic concepts of rough set theory and explains the techniques of dynamic quick reduct. The proposed work recognizes the appropriate set of factors by eliminating irrelevant ones to improve the performance of the classifier. As the size of the data sample increases, the accuracy of the prediction improves gradually. The Cleveland heart disease dataset obtained from the UCI machine learning repository is used to assess the performance of the proposed algorithm [10–12].


4 Feature Selection Method

Feature selection is the method of automatically selecting the attributes of the data that are relevant to the predictive modeling problem at hand. It is also known as attribute selection or variable selection. There are three general classes of feature selection algorithms: filter methods, wrapper methods and embedded methods [13].

4.1 Filter Method

Filter feature selection methods select variables regardless of the model being proposed, acting as a pre-processing step. A statistical measure is applied to assign a score to each feature. Filter methods are typically univariate and consider each feature independently, which means they may retain redundant variables. Correlation coefficient scores, information gain and the Chi-squared test are examples of filter methods.
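A filter method amounts to a per-feature scoring pass followed by a ranking; the sketch below scores each feature by the absolute Pearson correlation with the class label. The rows, feature names and values are invented for illustration.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(rows, labels, names, k=2):
    """Rank features by |correlation with the label| and keep the top k --
    each feature is scored independently, as a filter method does."""
    scores = {}
    for j, name in enumerate(names):
        column = [row[j] for row in rows]
        scores[name] = abs(pearson(column, labels))
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative rows: (age, chol, trestbps); label 1 = disease present.
rows = [(63, 233, 145), (41, 204, 130), (57, 354, 140),
        (44, 226, 120), (60, 293, 150), (39, 199, 118)]
labels = [1, 0, 1, 0, 1, 0]
print(filter_select(rows, labels, ["age", "chol", "trestbps"]))
```

Note that the score never consults a classifier, which is exactly why a filter can keep two strongly correlated (redundant) features.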

4.2 Embedded Method

Embedded feature selection techniques depend on a specific learning algorithm: the method learns which features contribute most to the accuracy of the model during training itself. Regularization methods are the most common type of embedded feature selection; Elastic Net, Ridge regression and LASSO are examples. Because the feature selection is built into the mathematical model, this method is comparatively complex [14].

4.3 Wrapper Method

The wrapper feature selection method is one of the simplest to implement and works directly with the classification algorithms. It treats the selection of a feature subset as a search problem, evaluating subsets with different models and comparing various combinations to find the best accuracy. Even though its computational complexity is higher, this method is often preferable for research work compared with the filter and embedded methods. The recursive feature elimination algorithm is the best-known example of a wrapper method.
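The search a wrapper performs can be sketched exhaustively on a toy problem: every candidate feature subset is scored by actually evaluating a model, here a 1-nearest-neighbour classifier under leave-one-out accuracy. The records are invented for illustration.

```python
import math
from itertools import combinations

# Toy records (age, chol, trestbps, exang) with labels; illustrative only.
rows = [(63, 233, 145, 1), (41, 204, 130, 0), (57, 354, 140, 1),
        (44, 226, 120, 0), (60, 293, 150, 1), (39, 199, 118, 0)]
labels = [1, 0, 1, 0, 1, 0]

def loo_accuracy(feature_idx):
    """Leave-one-out accuracy of a 1-nearest-neighbour model restricted
    to the chosen feature indices."""
    correct = 0
    for i in range(len(rows)):
        def dist(j):
            return math.sqrt(sum((rows[i][f] - rows[j][f]) ** 2
                                 for f in feature_idx))
        nearest = min((j for j in range(len(rows)) if j != i), key=dist)
        correct += labels[nearest] == labels[i]
    return correct / len(rows)

def wrapper_select(n_features, k):
    """Score every k-subset of features with the model itself and keep
    the best one -- the essence of a wrapper method."""
    return max(combinations(range(n_features), k), key=loo_accuracy)

best = wrapper_select(4, 2)
```

Real wrappers such as recursive feature elimination avoid the exhaustive enumeration, but the scoring principle is the same.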


5 Rough Set Theory

Rough set theory offers mathematical tools to extract hidden knowledge from data. It is one of the most valuable tools for resolving problems such as knowledge analysis, assessment of data quality, and identification and evaluation of data dependency. It can be described as a generalization that employs a rough membership function as an alternative to exact approximation. In this proposed work, rough set theory is used together with data mining tools as a special application for analyzing data in heart disease prediction [15].

6 Review on Dynamic Quick Reduct

Dynamic Reduct: The purpose of a dynamic reduct is to obtain stable reducts from decision subsystems. Dynamic reducts can be defined in the following way. Let A = (U, C ∪ d) be a decision table; then any system B = (U′, C ∪ d) with U′ ⊆ U is called a sub-table of A. If F is a family of sub-tables of A, then

    DR(A, F) = Red(A, d) ∩ ( ⋂_{B ∈ F} Red(B, d) )        (1)

defines the set of F-dynamic reducts of A. From this definition it follows that a relative reduct of A is dynamic if it is also a reduct of all sub-tables in F. The concept of (F, ε)-dynamic reducts can be defined as in Eq. (2), after introducing a threshold 0 ≤ ε ≤ 1:

    DR_ε(A, F) = { C ∈ Red(A, d) : s_F(C) ≥ ε }        (2)

where

    s_F(C) = |{ B ∈ F : C ∈ Red(B, d) }| / |F|

From Eq. (2), the definition of a generalized dynamic reduct follows: any subset of A is a generalized dynamic reduct if it is a reduct of all sub-tables from a given family F. Table 1 displays the attributes selected from the Cleveland database for the research work. The proposed Novel Dyno-Quick Reduct algorithm was executed in Python 3.6.3 in a Windows 7 environment with 3 GB RAM, and tested on the heart disease dataset from the Cleveland database.
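Equation (1) can be illustrated by brute force on a tiny decision table: enumerate attribute subsets, keep the minimal ones that preserve the classification (the reducts), and intersect the reduct sets of the sub-tables. The decision table below is a made-up example, not Cleveland data.

```python
from itertools import combinations

# Rows: condition attribute values (a, b, c) plus a decision value d.
table = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 0), 0), ((1, 1, 0), 1)]

def consistent(rows, attrs):
    """True if the chosen attributes determine the decision on these rows."""
    seen = {}
    for values, d in rows:
        key = tuple(values[a] for a in attrs)
        if seen.setdefault(key, d) != d:
            return False
    return True

def reducts(rows, n_attrs=3):
    """All minimal consistent attribute subsets of the given rows."""
    found = []
    for k in range(1, n_attrs + 1):
        for attrs in combinations(range(n_attrs), k):
            # Skip supersets of an already-found reduct (minimality).
            if consistent(rows, attrs) and not any(set(r) <= set(attrs) for r in found):
                found.append(attrs)
    return set(found)

# Eq. (1): dynamic reducts = Red(A, d) intersected over sub-tables B in F.
subtables = [table[:3], table[1:]]
dynamic = reducts(table)
for sub in subtables:
    dynamic &= reducts(sub)
```

Here attribute b (index 1) alone already determines d on the full table and on both sub-tables, so it is the only dynamic reduct.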


Table 1 Selected attributes from Cleveland database

Age (Continuous): Age in years
Chest pain (Discrete): Chest pain type; 1 = typical angina, 2 = atypical angina, 3 = non-anginal, 4 = asymptomatic
Chol (Continuous): Serum cholesterol
Fbs (Discrete): Fasting blood sugar >120 mg/dl; 1 = true, 0 = false
Restecg (Discrete): Resting electrocardiographic results; 0 = normal, 1 = ST-T wave abnormality, 2 = showing probable or definite left ventricular hypertrophy
Exang (Discrete): Exercise induced angina; 1 = yes, 2 = no
Trestbps (Continuous): Resting blood pressure (in mm Hg)
Max-heart-rate (Continuous): Maximum heart rate

7 Novel Dyno-Quick Reduct Algorithm

The following is the proposed novel dyno-quick reduct algorithm, which was executed in the Orange tool (Python script) and produces better accuracy when compared with the normal dynamic quick reduct.

Input: NDQR(data, p_count, n_count, p_remcount, n_remcount, no_records)
Output: Accuracy percentage with reduced data

1.  Begin
2.  p_remcount ← n_remcount ← no_count ← no_records = 0; totalcount = x;
3.  generate len(data)
4.  for each i ranging from 0 to max len of data do;
5.  repeat
6.  if (data[i][0] >= 25 and data[i][1] == "asympt" and data[i][6] == "yes" and data[i][3] == "f" or "t" and data[i][2] >= 130):
7.    else if (data[i][7] == "negative"):
8.      no_count += 1
9.      no_records += 1
10.     temp.append(data[i])
11.   end else if
12.   else if (data[i][7] == "positive"):
13.     p_remcount += 1
14.   else if (data[i][7] == "negative"):
15.     n_remcount += 1
16.   end else if
17.  end else if
18.  total_count = x
19.  end if
20.  until reduct found for total no_records
21.  End


In the proposed algorithm, the process begins by declaring all the inputs as zero and the total count as x. Line number 3 generates the total number of records in the dataset. Subsequently, the for condition iterates from zero to the maximum length of the data. Line number 6 checks the if condition; if it is satisfied, the total count and the total number of records are incremented by 1, otherwise the remaining conditions are checked and p_remcount or n_remcount is incremented by 1. Finally, all the values are calculated and stored in the total count. In line 20, the same process is repeated until the reduct is found for the total number of records. Here, p_count and n_count indicate heart-disease-affected and unaffected patients, i.e. the positive and negative counts respectively; p_remcount and n_remcount specify the positive and negative records to be removed; and no_records holds the total number of records in the dataset.
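A minimal runnable interpretation of the routine above is sketched below. The record layout (field indices, string values and thresholds) mirrors the pseudocode, but the layout assignment and the sample rows are assumptions made for illustration.

```python
# Assumed fields per record: [age, chest_pain, trestbps, fbs, chol,
# restecg, exang, diagnosis] -- layout inferred from the pseudocode's
# indices; the sample rows are invented.
data = [
    [63, "asympt", 145, "t", 233, 2, "yes", "positive"],
    [41, "atyp",   130, "f", 204, 0, "no",  "negative"],
    [57, "asympt", 140, "f", 354, 1, "yes", "negative"],
    [44, "asympt", 120, "t", 226, 0, "yes", "positive"],
]

def ndqr_filter(records):
    """Keep 'negative' records that match the risk pattern; count the rest
    for removal, as the pseudocode's p_remcount / n_remcount do."""
    no_count = no_records = p_remcount = n_remcount = 0
    temp = []
    for row in records:
        risky = (row[0] >= 25 and row[1] == "asympt" and row[6] == "yes"
                 and row[3] in ("f", "t") and row[2] >= 130)
        if risky and row[7] == "negative":
            no_count += 1
            no_records += 1
            temp.append(row)
        elif row[7] == "positive":
            p_remcount += 1
        elif row[7] == "negative":
            n_remcount += 1
    return temp, no_count, p_remcount, n_remcount

kept, no_count, p_rem, n_rem = ndqr_filter(data)
```

On the four sample rows, only the third matches the risk pattern with a "negative" diagnosis and is kept; the two "positive" rows and the remaining "negative" row are counted for removal.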

8 Results and Discussion

The dataset considered for this research is Cleveland, a popular public database of heart disease prediction data. The dataset includes 76 attributes that can predict heart disease in human beings. In paper [13], the author examined 8 of the 76 attributes and produced the result. With further study, it is possible to reduce the attributes from 8 to 7 by applying the novel dyno-quick reduct algorithm. The attributes are age, chest pain type, resting blood pressure, cholesterol level, fasting blood sugar, exercise-induced angina and resting electrocardiographic result. The experimental analysis of the various data mining techniques is shown in Table 2, from which it is clear that the reduced attribute set works best with the classification tree algorithm compared with the other classification algorithms (Table 3).

Table 2 Experimental analysis (accuracy ratio % before and after reduct)

S. no.  Supervised learning algorithm  Before dynamic reduct (14 attributes)  NDQR (7 attributes)
1.      Classification tree            71.30                                  96.70
2.      Random forest                  73.90                                  91.20
3.      Naïve Bayes                    79.50                                  93.50
4.      SVM                            75.90                                  91.40

Table 3 Comparison analysis of proposed methodology with existing methodology

S. no.  Supervised learning algorithm  Accuracy % NDQR  Feature selection
1.      Classification tree            96.70            71.8 [4]
2.      Naïve Bayes                    93.50            83.1 [1], 83.4 [15], 88.76 [16]
3.      SVM                            91.40            87.14 [19]
4.      Neural network                 98.5             93 [17]


Fig. 1 Comparison of accuracy before and after reduct

Figure 1 depicts that the dataset with reduced attributes shows better accuracy.

9 Conclusion

Feature selection in medical diagnosis helps improve medical decision making. The modified dynamic quick reduct method helps resolve the problem of selecting the best features. Experimental results show that the modified dynamic quick reduct algorithm gives better accuracy. It reduces the attributes to seven out of the given 76; the reduction is based on the data-dependency concept. It also reduces the file size without loss of meaningful data. The datasets were collected from the UCI machine learning repository, and the Python environment was used for development. The work can be further extended by applying a threshold condition to all seven attributes and checking for better accuracy.

References

1. Subanya B and Rajalaxmi R, A Novel Feature Selection Algorithm for Heart Disease Classification, International Journal of Computational Intelligence and Informatics, Vol. 4(2), pp. 117–124, 2014.
2. Ashish Kumar Sen, Shamsher Bahadur Patel and Shukla DP, A Data Mining Technique for Prediction of Coronary Heart Disease Using Neuro-Fuzzy Integrated Approach Two Levels, International Journal of Engineering and Computer Science, ISSN: 2319-7242, Vol. 2(9), pp. 2663–2671, 2013.


3. Nidhi Bhatla and Kiran Jyoti, A Novel Approach for Heart Disease Diagnosis using Data Mining and Fuzzy Logic, International Journal of Computer Applications, ISSN: 0975-888, Vol. 54(17), pp. 16–21, 2012.
4. Walid Moudani, Dynamic Features Selection for Heart Disease Classification, International Journal of Medical, Health, Biomedical, Bioengineering and Pharmaceutical Engineering, Vol. 7(2), pp. 105–110, 2013.
5. Anitha K and Venkatesan P, Feature selection by rough–quick reduct algorithm, International Journal of Innovative Research in Science, Engineering and Technology, ISSN: 2319-8753, Vol. 2(8), pp. 3989–3992, 2013.
6. Keerthika T and Premalatha K, Rough Set Reduct Algorithm based Feature Selection for medical domain, Journal of Chemical and Pharmaceutical Sciences, ISSN: 0974-2115, Vol. 9(2), 2016.
7. Dai Jian-hua, Li Yuan-xiang and Liu Qun, A hybrid genetic algorithm for reduct of attributes in decision system based on rough set theory, Wuhan University Journal of Natural Sciences, Vol. 7(3), pp. 285–289, 1997.
8. Suganya R and Rajaram S, A Novel Feature Selection Method for Predicting Heart Diseases with Data Mining Techniques, Asian Journal of Information Technology, ISSN: 1682-3915, Vol. 15(8), pp. 1314–1321, 2016.
9. Wang GY, Zhao J and An JJ, Theoretical Study on Attribute Reduction of Rough Set Theory: Comparison of algebra and information views, In: Proceedings of the Third IEEE International Conference on Cognitive Informatics, ISBN: 0-7695-2190-8, 2004.
10. Roselin R, Thangavel K and Velayutham C, Fuzzy Rough Feature Selection for Mammogram Classification, Journal of Electronic Science and Technology, Vol. 9, pp. 124–132, 2011.
11. Haijun Wang, Shaoliang Wei and Yimin Chen, An Improved Attribute Reduction Algorithm Based on Rough Set, In: Proceedings of the ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, Vol. 3, pp. 1007–1010, 2006.
12. Saravana Kumar S and Rinesh S, Effective Heart Disease Prediction using Frequent Feature Selection Method, International Journal of Innovative Research in Computer and Communication Engineering (IJIRCCE), ISSN (Online): 2320-9801, Vol. 2(1), pp. 2767–2774, 2014.
13. Kittipol Wisaeng, Predict the Diagnosis of Heart Disease Using Feature Selection and k-Nearest Neighbor Algorithm, Applied Mathematical Sciences, Vol. 8(83), pp. 4103–4113, 2014.
14. Shampa Sengupta and Asit Kumar Das, A study on rough set theory based dynamic reduct for classification system optimization, International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 5(4), pp. 35–49, 2014.
15. Al-Aidaroos KM, Bakar AA and Othman Z, Medical data classification with Naïve Bayes approach, Information Technology Journal, Vol. 11(9), pp. 1166–1174, 2012.

Impact of Meltdown and Spectre Threats in Parallel Processing

Sneha B. Antony, M. Ragul, and N. Jayapandian

1 Introduction

With the improvement of communication and information technologies and expanding access to the web, organizations become vulnerable to both insider and outsider threats. Information systems are constantly exposed to different kinds of risks, and these risks can cause diverse sorts of damage, which may lead to significant financial losses. The scale of this damage can range from small errors, which merely harm the integrity of databases, to events that destroy entire computer centres; shielding resources from these dangers therefore becomes a major concern for both individuals and organizations. Threats originate from various sources, for example hacker attacks, workers' activities or mistakes in data entry. The financial losses caused by security breaches usually cannot be accurately determined, because a significant number of losses come from minor security incidents that are never discovered. The recent trend in security attacks is the social engineering attack, and researchers have investigated solutions for social engineering threats [1]. The small business sector also faces higher security threats, yet many companies are unaware of this risk factor; a survey of more than 300 small business industries consolidates the relevant risk management factors [2]. Security technology is designed and developed in many ways: hardware security mechanisms are implemented in firewalls, and the human factor involved in security analysis and management has also been discussed and developed [3]. Modern mainframes comprise several distinct units for execution, and a unit

S. B. Antony · M. Ragul · N. Jayapandian () Department of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India e-mail: [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_25


scheduler that decodes procedures and decides the most efficient way to execute them, including the choice of which two or three instructions to execute at a particular time on different units. In order to obtain the final result efficiently, contemporary systems routinely execute procedures in parallel. Parallel computing is a type of computation in which voluminous calculations are carried out simultaneously; it can be coarsely classified according to the level at which the hardware supports parallelism. An information pipeline, one of many types of parallelism, is a set of data-processing elements connected in sequence, where the output of one element is the input of the next. The elements of a pipeline are often executed in parallel or in a time-sliced fashion. Cloud parallel scheduling also poses a major research problem [4].
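The pipeline form of parallelism described above can be sketched with two stages connected by queues, each stage running in its own thread. This is an illustrative sketch, not code from the paper; the stage functions and values are invented.

```python
import threading
import queue

def stage(func, inbox, outbox):
    """Run one pipeline element: consume items, transform, pass along."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut the stage down
            outbox.put(None)
            return
        outbox.put(func(item))

source, mid, sink = queue.Queue(), queue.Queue(), queue.Queue()
# Stage 1 squares each item while stage 2 adds one: both run concurrently,
# so the output of one element is the input of the next.
threads = [threading.Thread(target=stage, args=(lambda x: x * x, source, mid)),
           threading.Thread(target=stage, args=(lambda x: x + 1, mid, sink))]
for t in threads:
    t.start()
for n in [1, 2, 3]:
    source.put(n)
source.put(None)

results = []
while (item := sink.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)  # prints [2, 5, 10]
```

Because each stage has its own thread and the queues preserve order, item k can be squared while item k-1 is still being incremented, which is exactly the overlap a hardware pipeline exploits.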

2 State of Art

A taxonomy is a representation of reality used to develop greater understanding in a field of study. A literature survey shows that the following principles for an information security classification should be observed. Mutually exclusive: every threat is placed in one class that excludes all others, since categories do not overlap, and each case should fit in at most one class. Exhaustive: the classes in a taxonomy must cover all the possible outcomes (all risk cases). Unambiguous: all classes must be clear and precise so that classification is certain, and every class should be accompanied by unambiguous classification criteria defining which cases belong to it. Repeatable: repeated applications result in the same classification, regardless of who is classifying. Accepted: all classes are logical and intuitive, so that the taxonomy can be accepted by the majority. Useful: it can be used to gain insight into the field of inquiry and can be adapted to different application needs. These criteria can be used to evaluate threat taxonomies, and a good risk classification should satisfy as many of them as possible. Organizational risk arises with employees: employee crime is an unavoidable factor, and one remedy is an information hierarchy that governs information access rights. To mitigate these risks, organizations use risk assessment methods; one possibility is the Reid Background Check Plus method, which is categorized into five different levels [4]. The quick and continuing changes in information and communication technologies have prompted the rise of advanced IT structures that enable the manipulation, storage and transport of enormous volumes of data and information. The information technology field faces both storage and security problems.
To avoid these kinds of problems, technologies such as data deduplication and encryption methods are used [5]. In fact, information security is for the most part


retrofitted into software systems; it is treated as an afterthought, leading to a cycle of penetrating and patching the various parts of the system. It is essential for organizations to maintain the best possible information security posture to ensure their business practices are minimally disrupted by information security threats. Information security governance maintains many parameters to reduce information risk, and it is handled within the corporate governance system [6]. Securing information is key to the survival of many organizations. Information security is a growing challenge for every organization: information security risks are the chances of threats acting on vulnerabilities to cause impacts tied to information security events, and an impending distressing scenario can be termed a threat. Computer system security is now modernized in many forms, namely data security, network security and cloud security [7]. Data systems are thus prone to various types of risks, and these perils can cause particular sorts of damage that may lead to substantial financial losses. The damage can range from small oversights, which merely harm the integrity of databases, to events that destroy whole computer centers; protecting assets from these risks therefore becomes an important concern for both individuals and organizations. The significance of an information security program lies in identifying potential risks to information structures and proposing solutions to help overcome the perceived perils. With the criminal threats facing IT infrastructure in this era, the process of vulnerability assessment, policy compliance and remediation has become part of the everyday organizational routine.
Recognizing and managing risk relating to exposures requires that an organization understand the impact and cost of a successful attack on its estate. Information safety is now a vital concern for any data communication network, the major safety requirements being confidentiality, integrity and availability [8]. These should be carefully considered when constructing the risk model: not only the perspective of the network domain, attack behaviours, or the threatened areas, but also the potential attacks on critical assets including hardware devices, data in transmission and staff private information. Weak security key management cryptography and poorly planned firewalls remain exposed to intruders, even though firewalls and cryptography schemes are used for technical security. Hybrid security also helps avoid information security risk [9]. Within IT, advanced security issues are prominent and new security technologies are available; there is a need to stay up to date on IT risks so that security practices can be applied in the best possible way. Although the vast majority of organizations in the review regard information security awareness training as an objective, some research results show that organizations do not invest enough here. Organized criminals have likewise seen their opportunity and have conceived dangerous web bots designed to covertly install themselves on undefended PCs.


3 Problem Statement

"Meltdown and Spectre: vulnerabilities in modern computers leak passwords, keys and sensitive data." The two hardware flaws named Meltdown and Spectre are taking the security world, and indeed the whole world, by storm. These CPU vulnerabilities go back to a decade of optimizations and exploit an architectural design flaw in almost all modern processors, potentially enabling information to be exposed. Although data security has become a talking point due to these flaws, there is no confirmed evidence of malicious campaigns exploiting either Meltdown or Spectre [10]. With the rise in ransomware, every potential vulnerability is regarded as critical and investigated. The Computer Emergency Response Team (CERT) is responsible for applying operating system updates and fixes quickly and as often as possible, when available. This basic remedy is a reassuring reversal compared with the original advice to replace your processors. CERT's remedy for a potential side-channel attack is to perform operating system patches, CPU firmware updates, and application updates in order to mitigate any exposure. Initial concerns from the field suggested that system performance might be noticeably affected by many of the available vulnerability patches. Depending on the software workload and the CPU capabilities present, the performance impact of software mitigations may be non-trivial for older and legacy architectures. For upgraded cloud infrastructures built on newer silicon, benchmarks tend to show single-digit slowdowns, per various reports that include results from Intel, Microsoft, Apple, Google, and others. Several industry partners that offer cloud computing services to other organizations have disclosed results showing practically zero performance impact. These performance reports, alongside CERT's amended guidance to keep your processor and apply patches from the various hypervisor, operating system, and chip makers, should level-set concerns [11]. The nightmarish aspect is that exploitation does not leave any traces in traditional log files. Studies say that these two vulnerabilities are hard to distinguish from ordinary benign applications, unlike other malware. Nonetheless, conventional antivirus software may possibly detect malware that triggers these hardware attacks by matching binary signatures once they become notorious.

4 Security Analysis Model Meltdown is called so because it melts the security boundaries which are normally enforced by the hardware. Spectre got its name from its origin, that is, speculative execution. Meltdown breaks the most fundamental isolation between user applications and the operating system, whereas Spectre breaks the

Impact of Meltdown and Spectre Threats in Parallel Processing


isolation between different applications. The Meltdown attack enables a program to access the memory, and thereby also the secrets, of other programs and the operating system. Spectre, for its part, allows an attacker to trick error-free programs, which follow accepted practices, into leaking their secrets. Frankly, the safety checks of these so-called best practices actually widen the attack surface and could leave applications more vulnerable to Spectre. Spectre being harder to exploit than Meltdown, it is likewise harder to mitigate. In any case, it is possible to avert specific known exploits based on Spectre through software patches. In most cases, the software fixes for these vulnerabilities will negatively affect system performance. Algorithm 1 [Speculative Execution Handling] If (Token

72 dB. The output noise of the low-noise AFE simulated with QPSS is shown in Fig. 3. The calculated SNR, SNDR, and ENOB of the complete design in the four different modes of the second-order CT ΔΣ modulator are adequate to amplify a weak sub-mV neural signal to the tens-of-mV level crucial to meet the subsequent


M. A. Raheem and K. Manjunathachari

Fig. 3 Gain and phase margin of an amp

Fig. 4 SNR, DR and ENOB values of SNR plot

Fig. 5 ECG signal sample at 100 Hz

ADC ENOB resolution; the dynamic-range requirements are shown in Figs. 4 and 5. Table 3 compares the achieved front-end performance with the most recent prior art. Conventionally, the noise efficiency factor (NEF) is used for the comparison of different biopotential amplifiers, and it is defined as follows [14]:

NEF = V_rms,in √( 2 I_total / (π · V_t · 4kT · Bandwidth) )    (8)

where V_rms,in is the input-referred noise, I_total is the total current drawn from the supply, V_t is the thermal voltage, and Bandwidth is the bandwidth of the front-end AFE with the chopping technique. Generally, state-of-the-art AFE circuits achieve a NEF of
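Equation (8) is straightforward to evaluate numerically. The sketch below shows how the NEF is computed; the noise, current, and bandwidth values are purely illustrative assumptions, not the measured figures of this design:

```python
import math

def nef(vrms_in, i_total, bandwidth, temp_k=300.0):
    """Eq. (8): NEF = Vrms,in * sqrt(2 * Itotal / (pi * Vt * 4kT * Bandwidth))."""
    k = 1.380649e-23          # Boltzmann constant, J/K
    q = 1.602176634e-19       # elementary charge, C
    vt = k * temp_k / q       # thermal voltage, ~25.9 mV at 300 K
    return vrms_in * math.sqrt(2.0 * i_total / (math.pi * vt * 4.0 * k * temp_k * bandwidth))

# Illustrative numbers only: 4 uVrms input noise, 4 uA supply current, 10 kHz bandwidth
print(round(nef(4e-6, 4e-6, 10e3), 2))   # ≈ 3.08
```

Lowering the input-referred noise or the supply current directly lowers the NEF, which is why chopping (noise reduction) and the multi-Vt concept (current reduction) both help the figure of merit.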

A Bio Potential Sensor Circuit of AFE Design with CT ΔΣ M

Figs. 6 and 7 Power consumption of the complete AFE design with ΔΣ M (two-mode ΔΣ modulator 54%, programmable amplifier with switches 36%, CMFB 9%, low-noise amplifier 7%, output stage 4%)

roughly 2.5–10. The complete IC layout of the four-channel AFE with ΔΣ M is given in Fig. 6. The total area occupied by the design is 218.11 μm × 820.72 μm = 0.179 mm², and the power distribution among the different blocks is illustrated in Fig. 7. As shown in the chart, most of the power budget goes to the single mode of the second-order CT ΔΣ modulator. The chopping switches and clock consume more power than the amplifier. Owing to noise considerations, one of the most critical parameters in evaluating a biomedical amplifier is its performance under diverse conditions.

7 Conclusion In this research article, a 4-channel biosignal AFE suitable for non-wet (dry-contact) biopotential applications is designed. With a lower total current (I_total) than comparable designs, this article presents a novel system-level architecture appropriate for biomedical devices targeting the ECG range of 10–100 mV. In particular, the multi-Vt concept implemented in the circuit design reduces power consumption; the amplifier achieves a gain of 52.9–72 dB with an NEF of 3.0, and the noise performance is satisfactory. Furthermore, two different second-order CT ΔΣ modulators are employed to improve performance.

References 1. Y. M. Chi, Y.-T. Wang, Y. Wang, Ch. Maier, T.-P. Jung, and G. Cauwenberghs, “Dry and non-contact EEG sensors for mobile brain-computer interfaces,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 20, no. 2, Mar. 2012. 2. M. Steffen, A. Aleksandrowicz, and S. Leonhardt, “Mobile noncontact monitoring of heart and lung activity,” IEEE Transactions on Biomedical Circuits and Systems, vol. 1, no. 4, pp. 250-257, Dec. 2007.


3. J. Kranjec, S. Beguš, J. Drnovšek, and G. Geršak, “Novel methods for non-contact heart rate measurement: A feasibility study,” IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 4, April 2014. 4. Y. G. Lim, K. K. Kim, and K. S. Park, “ECG recording on a bed during sleep without direct skin-contact,” IEEE Transactions on Biomedical Engineering, vol. 54, pp. 718–725, 2007. 5. S. Leonhardt, and A. Aleksandrowicz, “Non-contact ECG monitoring for automotive application,” 5th International Summer School and Symposium on Medical Devices and Biosensors, ISSS-MDBS, June 2008, pp. 183–185. 6. X. X. Chen, Y. Lv, R. R. Zhen Fang, Sh. H. Xia, H. Li, and Li Tian, “A wireless non-contact ECG detection system based on capacitive coupling,” IEEE 14th International Conference on e-Health Network. 7. W. Wattanapanitch, et al., “An energy-efficient micropower neural recording amplifier,” IEEE Trans. on Biomedical Circuits and Systems, vol. 1, pp. 136-147, June 2007. 8. S. Kao, et al., “A 1.5 V 7.5 μW programmable gain amplifier for multiple biomedical signal acquisition,” IEEE Biomedical Circuits and Systems Conference, 2009, pp. 73–76. 9. T. Denison, et al., “A 2 μW 100 nV/rt Hz chopper-stabilized instrumentation amplifier for chronic measurement of neural field potentials,” IEEE J. of Solid-State Circuits, vol. 42, pp. 2934–2945, Dec. 2007. 10. T. Yoshida, et al., “A high-linearity low-noise amplifier with variable bandwidth for neural recording systems,” Japanese J. of Applied Physics, vol. 50, pp. 1–4, April 2011. 11. Christopher J. Mandic, Debashish Gangopadhyay and David J. Alls, “A 1.1 μW 2.1 μVRMS Input Noise Chopper-stabilized Amplifier for Bio-medical Applications”, Circuits and Systems (ISCAS), 2012 IEEE International Symposium on 20–23 May 2012, Seoul, South Korea 12. Y. M. Chi, T.-P. Jung, and G. Cauwenberghs, “Dry-contact and noncontact biopotential electrodes: methodological review,” IEEE Reviews in Biomedical Engineering, vol. 3, 2010 13. Y.-Ch. 
Chen, B.-Sh. Lin, and J.-Sh. Pan, “Novel non-contact dry electrode with an adaptive mechanical design for measuring EEG in a hairy site,” IEEE Transactions on Instrumentation and Measurement, vol. 64, no. 12, pp. 3361 – 3368, Dec. 2015. 14. Y. M. Chi, Ch. Maier, and G. Cauwenberghs, “Ultra-high input impedance, low noise integrated amplifier for non-contact biopotential sensing,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 1, no. 4, Dec. 2011.

Image Encryption Based on Transformation and Chaotic Substitution S. N. Prajwalasimha and L. Basavaraj

1 Introduction A digital image is a two-dimensional matrix whose elements are called pixels. Due to intrinsic properties such as high redundancy, bulk data capacity and strong correlation among adjacent elements, traditional and conventional encryption algorithms are not suitable for images [1]. Chaos-based encryption schemes are more popular nowadays. The key stream generated by a chaotic generator is independent of the host and cipher images; because of this, spatiotemporal chaotic maps are not definitively secure [2]. The Blake2 hash algorithm has been adopted for image encryption by Li and Lo [3]. A block ciphering method is used to truncate the image into 8 × 8 blocks, which are then subjected to orthogonal transformation. This substitution-diffusion approach gives maximum entropy in the cipher image [3]. The substitution box (S-box) plays a major role in the diffusion process. Silva-García et al. proposed a chaos-based S-box generation algorithm for the diffusion process [4]. The S-box is constructed using ten 128-bit blocks of plain text. Due to the chaotic non-linear differential equations, a high degree of randomness can be observed in the S-box. A hybrid system combining piecewise linear chaos and a cubic S-box has been implemented by Zhang for the diffusion process [5]. The two-stage implementation provides a high degree of resistance against differential attacks. The algorithm is designed with a 512-bit key length to resist brute-force attacks. A blockwise truncation and piecewise linear chaotic diffusion based transformation and substitution algorithm has been proposed by Zhang and Wang [6]. The original image is first divided into sub-blocks and then subjected to piecewise linear

S. N. Prajwalasimha () · L. Basavaraj Department of Electronics and Communication, ATME Research Centre, Mysuru, Karnataka, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_29


S. N. Prajwalasimha and L. Basavaraj

chaotic substitution for each block. High entropy in the cipher images has been achieved by this algorithm. In beta chaotic map based permutation and substitution, the beta function provides a high degree of randomness [7]. These chaotic maps are used for the diffusion process, and better results are observed in the differential-attack resistance test. A combined chaotic map integrating the Sine, Tent and Logistic maps is used to construct an integrated chaotic map by Lan et al. [8]. With this diffusion process, better entropy has been achieved. In this chapter, a combined transformation and chaotic diffusion based algorithm is proposed in which the modified Pseudo-Hadamard transformation (MPHT) and the Tinkerbell chaotic equation with constant coefficients are used for image encryption. The chapter is organized as follows: Section 2 describes the proposed scheme; Section 3 presents the statistical and differential analyses, followed by the conclusion in Sect. 4.

2 Proposed Scheme Two stages per round are involved in the encryption process: transformation and diffusion. The modified Pseudo-Hadamard transformation is used in the first stage, and the random sequence generated by the Tinkerbell chaotic generator is used in the substitution stage.

2.1 Encryption Algorithm Step 1: The host and the block-truncated substitution images are first subjected to the modified Pseudo-Hadamard transformation.

O′(x, y) = O((m + n + 5) mod 2^n, (m + 2n + 5) mod 2^n)    (1)

S′(x, y) = S^t((m1 + n1 + 5) mod 2^n, (m1 + 2n1 + 5) mod 2^n),  1 < m, m1, n, n1 < 2^n    (2)

where O(m, n) is the host image of size 2^n × 2^n, S(m, n) is the substitution image of size 2^n × 2^n (S^t its block-truncated version), O′(x, y) is the transformed image of size 2^n × 2^n, and S′(x, y) is the transformed truncated image of size 2^n × 2^n.

Step 2: Both transformed images are subjected to a bitwise XOR operation

C(x, y) = O′(x, y) ⊕ S′(x, y)    (3)

where C(x, y) is the first-stage cipher image of size 2^n × 2^n.

Fig. 1 Flow diagram of the proposed encryption algorithm: the host image i (256 × 256) passes through the MPHT transformation, while the substitution image s (256 × 256) passes through block truncation and MPHT; the Tinkerbell random sequence (256 × 1) and the S-box containing the secret key (256 × 1) drive the diffusion and substitution stages, producing the cipher image (256 × 256)

Step 3: The cipher image from the first stage is subjected to diffusion with the random sequence generated by the Tinkerbell chaotic equation with constant coefficients.

x′ = (x² + y² + 5x + 7y + 22) mod 2^n    (4)

y′ = (2xy + 12x + 15y) mod 2^n    (5)

C′(x, y) = C(x, y) ⊕ x′    (6)

where C′(x, y) is the cipher image after diffusion, of size 2^n × 2^n.

Step 4: The cipher image after diffusion is subjected to substitution using the S-box (Fig. 1).

C″(x, y) = C′(x, y) ⊕ Sbox    (7)
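The four encryption steps can be sketched end-to-end on a toy array. The MPHT constant (5) and the modified Tinkerbell update follow Eqs. (1)–(7), but the image size, chaotic seeds, and the 4-entry S-box below are illustrative assumptions rather than the authors' 256 × 256 images and 256 × 1 key material:

```python
import numpy as np

def mpht_indices(size, const=5):
    """Index mapping of Eqs. (1)-(2): (m, n) -> ((m+n+c) mod N, (m+2n+c) mod N)."""
    m, n = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return (m + n + const) % size, (m + 2 * n + const) % size

def tinkerbell_sequence(length, size, x=0.1, y=0.1):
    """Pseudo-random integers from the modified Tinkerbell map of Eqs. (4)-(5)."""
    seq = np.empty(length, dtype=np.int64)
    for i in range(length):
        x, y = (x * x + y * y + 5 * x + 7 * y + 22) % size, (2 * x * y + 12 * x + 15 * y) % size
        seq[i] = int(x)
    return seq

def encrypt_round(host, sub, sbox):
    size = host.shape[0]
    r, c = mpht_indices(size)
    o_t, s_t = host[r, c], sub[r, c]          # Step 1: transform both images
    c1 = o_t ^ s_t                            # Step 2: XOR of transformed images, Eq. (3)
    x_seq = tinkerbell_sequence(size, size)   # Step 3: chaotic diffusion, Eq. (6)
    c2 = c1 ^ x_seq[np.newaxis, :]
    return c2 ^ sbox[np.newaxis, :]           # Step 4: S-box substitution, Eq. (7)

host = np.arange(16, dtype=np.int64).reshape(4, 4)
sub = np.full((4, 4), 99, dtype=np.int64)          # toy substitution image
sbox = np.array([7, 21, 42, 63], dtype=np.int64)   # toy 4-entry S-box (hypothetical key)
cipher = encrypt_round(host, sub, sbox)
```

Because every stage is either an index permutation or an XOR, each step is individually invertible, which is what makes the decryption algorithm of Sect. 2.2 possible.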

2.2 Decryption Algorithm Step 1: The obtained cipher image is XORed with the elements of the S-box used for encryption.

C′(x, y) = C″(x, y) ⊕ Sbox    (8)

Step 2: The cipher image is then subjected to an XOR operation with the random sequence generated by the Tinkerbell chaotic equation with the same constant coefficients.

x′ = (x² + y² + 5x + 7y + 22) mod 2^n    (9)

y′ = (2xy + 12x + 15y) mod 2^n    (10)

C(x, y) = C′(x, y) ⊕ x′    (11)

Step 3: The block-truncated substitution image is subjected to the MPHT and then XORed with the cipher image from the previous stage.

S′(x, y) = S^t((m1 + n1 + k) mod 2^n, (m1 + 2n1 + k) mod 2^n)    (12)

O′(x, y) = C(x, y) ⊕ S′(x, y)    (13)

Step 4: The image obtained from the previous step is subjected to the inverse MPHT to recover the original image.

O(m, n) = O′((2x − y − 5) mod 2^n, (y − x) mod 2^n)    (14)
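Since the MPHT index matrix [[1, 1], [1, 2]] has determinant 1, the mapping of Eq. (1) is invertible modulo the image size, and Eq. (14) is exactly its inverse. A quick round-trip check (image size and data are illustrative):

```python
import numpy as np

def mpht_forward(img, const=5):
    """Eq. (1): O'(x, y) = O((m + n + c) mod N, (m + 2n + c) mod N)."""
    n = img.shape[0]
    m, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return img[(m + k + const) % n, (m + 2 * k + const) % n]

def mpht_inverse(img, const=5):
    """Eq. (14): O(m, n) = O'((2x - y - c) mod N, (y - x) mod N)."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return img[(2 * x - y - const) % n, (y - x) % n]

img = np.arange(64).reshape(8, 8)
print(np.array_equal(mpht_inverse(mpht_forward(img)), img))   # True: Eq. (14) undoes Eq. (1)
```

The round trip works for any modulus because the 2 × 2 transform matrix is unimodular; no pixel value is changed by the transform, only its position, which is why the substitution stage is needed to alter the entropy.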

3 Experimental Results MATLAB software is used for the experimental analysis and implementation. Standard test images are taken from the Computer Vision Group (CVG), Department of Computer Science and Artificial Intelligence, University of Granada, Spain. The security analysis is made on the basis of information entropy, the correlation between host and cipher images, the number of pixel changing rate (NPCR) and the unified average changing intensity (UACI).

UACI = (1 / (M × N)) [ Σ_{i,j=1}^{M,N} |C1(i, j) − C2(i, j)| / (Maximum Pixel Intensity) ] × 100    (15)

NPCR = ( Σ_{i,j} D(i, j) / (M × N) ) × 100    (16)

where C1 and C2 are two cipher images of size M × N. If C1(i, j) ≠ C2(i, j), then D(i, j) = 1; otherwise, D(i, j) = 0. From the security tests it has been observed that the correlation between adjacent pixels is effectively broken by the PHT, but the entropy is unchanged since the pixel values are unaltered. Very low correlation between adjacent pixels is still observed after the substitution phase, along with a high entropy value, indicating that the pixel values are altered in the encrypted image. The obtained UACI and NPCR values are also very close to the ideal values [14] (Figs. 2 and 3 and Table 1).
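Eqs. (15) and (16) translate directly into code. For two independent uniform 8-bit images, NPCR ≈ 99.61% and UACI ≈ 33.46% in expectation, which is exactly why those figures serve as the ideal values; the test data below is synthetic:

```python
import numpy as np

def npcr_uaci(c1, c2, max_intensity=255):
    """NPCR (Eq. (16)) and UACI (Eq. (15)) for two equal-size cipher images."""
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    npcr = (c1 != c2).mean() * 100.0                            # fraction of differing pixels
    uaci = (np.abs(c1 - c2) / max_intensity).mean() * 100.0     # mean normalized intensity change
    return npcr, uaci

# Two independent uniform 8-bit images approach the ideal NPCR/UACI values
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (256, 256))
b = rng.integers(0, 256, (256, 256))
n, u = npcr_uaci(a, b)
```

NPCR measures only how many pixels change, while UACI also measures by how much; a good cipher must score well on both.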

Fig. 2 Number of pixel changing rate of cipher images (ideal vs. experimental NPCR, 99.5–99.68%, for the 20 standard test images: Lena, Baboon, Peppers, Plane, Cameraman, Elaine, Carnev, Donna, Foto, Galaxia, Leopard, Montage, Pallon, Vacas, Fiore, Mapasp, Mare, Mesa, Papav, Tulips)

Fig. 3 Unified average changing intensity of cipher images (ideal vs. experimental UACI, 33.0–33.6%, for the same test images)

Inference 1: The average number of pixel changing rate (NPCR) of the cipher images is 99.61465%, which is greater than the ideal value of 99.6093% [14]. The NPCR values range from 99.5636% to 99.6628%. The average value of the unified average changing intensity (UACI) of the cipher images is 33.4288%, very close to the ideal value of 33.4635% [14], with a hairline difference of 0.034%. The UACI values range from 33.1979% to 33.5241%. Inference 2: The average entropy value of the cipher images is 7.9972, almost 99.96% of the ideal value. The average correlation coefficient between the cipher images and their host images is 4.22E-04. Very low correlation is observed between the host and the final cipher image after both the transformation and substitution stages (Figs. 4 and 5).
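The entropy and correlation metrics used in Inferences 1 and 2 can be computed as follows; the random cipher and the structured host below are synthetic stand-ins for the actual test images:

```python
import numpy as np

def shannon_entropy(img):
    """Information entropy in bits per pixel; 8 is ideal for an 8-bit cipher image."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def host_cipher_correlation(a, b):
    """Pearson correlation coefficient between host and cipher images."""
    return float(np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1])

rng = np.random.default_rng(1)
cipher = rng.integers(0, 256, (256, 256))
host = np.tile(np.arange(256), (256, 1))   # illustrative host with strong structure
print(shannon_entropy(cipher) > 7.99)      # near-ideal entropy of 8
print(abs(host_cipher_correlation(host, cipher)) < 0.02)   # negligible correlation
```

An entropy near 8 means every gray level is about equally likely, and a near-zero correlation means the cipher image carries no visible trace of the host's structure, matching the averages reported above.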


Table 1 Comparison of entropy and correlation between standard and encrypted images (ideal values: entropy = 8, UACI ≥ 33.4635%, NPCR ≥ 99.6093% [14])

Prior art (Lena): entropy 5.5407 (Blowfish) [12], 5.5438 (Twofish) [12], 5.5439 (AES-256) [12], 5.5439 (RC4) [12], 7.5220 [10], 7.9970 [11], 7.9958 [9], 7.6427 [13], 7.9972 [9]; correlation 0.0021 [5], 0.0021 [11]; UACI 33.4303 [10], 33.4060 [16], 33.4338 [15]; NPCR 99.6063 [10], 99.5895 [16], 99.588 [15]
Prior art (Baboon): entropy 7.9974 [16], 7.9954 [9]; UACI 33.2096 [16]; NPCR 99.6094 [16]
Prior art (Peppers): entropy 7.9973 [16], 7.9966 [9]; UACI 33.5280 [16]; NPCR 99.6185 [16]

This work:
Images      Entropy   Correlation    UACI (%)   NPCR (%)
Lena        7.9972    6.5019e-04     33.4912    99.6246
Baboon      7.9969    −6.9909e-04    33.3988    99.6155
Peppers     7.9973    −0.0056        33.4541    99.6002
Plane       7.9967    −0.0040        33.5087    99.6094
Cameraman   7.9976    0.0033         33.2954    99.5636
Elaine      7.9976    6.4977e-04     33.3863    99.6201
Carnev      7.9976    −0.0015        33.4450    99.6368
Donna       7.9974    −0.0018        33.1979    99.6262
Foto        7.9973    9.8350e-04     33.4339    99.6338
Galaxia     7.9970    3.2399e-05     33.5214    99.6002
Leopard     7.9966    0.0034         33.3428    99.6368
Montage     7.9973    −4.3016e-04    33.5003    99.6323
Pallon      7.9971    0.0023         33.4889    99.6429
Vacas       7.9974    3.6508e-04     33.3930    99.5819
Fiore       7.9974    6.8884e-04     33.4625    99.6628
Mapasp      7.9971    −0.0017        33.4686    99.6140
Mare        7.9973    0.0021         33.4003    99.5789
Mesa        7.9965    0.0012         33.4655    99.5911
Papav       7.9971    0.0070         33.3977    99.5880
Tulips      7.9973    0.0015         33.5241    99.6338

4 Conclusion In the proposed encryption scheme, a constant coefficient is embedded into the Tinkerbell chaotic equation to obtain a pseudo-random sequence for the substitution stage of encryption. Due to the high degree of randomness obtained from the modified equation, the diffused pixel values in the resultant cipher image give better entropy. The correlation between host and cipher images is very low, indicating poor similarity between

Fig. 4 Mean correlation between original and cipher images (experimental values between −8.00E-03 and +8.00E-03 across the 20 standard test images)

Fig. 5 Information entropy of cipher images (ideal vs. experimental values, 7.994–8.001, across the standard test images)

them. Average values of 99.61% number of pixel changing rate (NPCR) and 33.43% unified average changing intensity (UACI) are obtained for a set of 20 standard images, and they are very close to the ideal values. A secret key of 128-bit length is used to make brute-force attacks highly difficult. Further, more chaotic generators can be used to obtain random sequences for the substitution stage of encryption to increase the level of security (Table 2).


Table 2 Illustration of Host images, Substitution image, Cipher images after transformation, diffusion and substitution


References 1. Leo Yu Zhang, Yuansheng Liu, Fabio Pareschi, Kwok-Wo Wong, Riccardo Rovatti and Gianluca Setti, “On the Security of a Class of Diffusion Mechanisms for Image Encryption,” IEEE Transactions on Cybernetics, Vol. 48, No. 4, 2018, pp. 1163–1175. 2. Xuanping Zhang, Zhongmeng Zhao and Jiayin Wang, “Chaotic image encryption based on circular substitution box and key stream buffer,” Signal Processing: Image Communication, Elsevier, Vol. 29, 2014, pp. 902–913. 3. Peiya Li and Kwok-Tung Lo, “A Content-Adaptive Joint Image Compression and Encryption Scheme,” IEEE Transactions on Multimedia, 2017, pp. 1–9. 4. V.M. Silva-García, R. Flores-Carapia, C. Rentería-Márquez, B. Luna-Benosoc and M. AldapePérez, “Substitution box generation using Chaos: An image encryption application,” Applied Mathematics and Computation, Elsevier, Vol. 332, 2018, pp. 123–135. 5. Yong Zhang, “The unified image encryption algorithm based on chaos and cubic S-Box,” Information Sciences, Elsevier, Vol. 450, 2018, pp. 361–377. 6. Xiaoqiang Zhang and Xuesong Wang, “Multiple-image encryption algorithm based on mixed image element and chaos,” Computers and Electrical Engineering, Elsevier, Vol. 62, 2017, pp. 401–413. 7. Rim Zahmoul, Ridha Ejbali and Mourad Zaied, “Image encryption based on new Beta chaotic maps,” Optics and Lasers in Engineering, Elsevier, Vol. 96, 2017, pp. 39–49. 8. Rushi Lan, Jinwen He, Shouhua Wang, Tianlong Gu and Xiaonan Luo, “Integrated chaotic systems for image encryption,” Signal Processing, Elsevier, Vol. 147, 2018, pp. 133–145. 9. Prajwalasimha S.N. (2019) Pseudo-Hadamard Transformation-Based Image Encryption Scheme. In: Krishna A., Srikantaiah K., Naveena C. (eds) Integrated Intelligent Computing, Communication and Security. Studies in Computational Intelligence, Vol. 771. Springer, Singapore. 10. 
Prajwalasimha S N and Basavaraj L, “Performance Analysis of Transformation and Bogdonov Chaotic Substitution based Image Cryptosystem,” International Journal of Electrical and Computer Engineering, Vol. 10, No. 1, 2019. 11. S N Prajwalasimha, et al, “Image Encryption based on Pseudo Hadamard Transformation with Constant Co-efficient,” 2019 4th IEEE International Conference on Recent Trends on Electronics, Information, Communication and Technology, RTEICT 2019 - Proceedings. 12. Delong Cui, Lei Shu, Yuanfang Chen and Xiaoling Wu, “Image Encryption Using Block Based Transformation With Fractional Fourier Transform,” 8th International Conference on Communications and Networking in China, 2013, pp. 552–556. 13. Prajwalasimha S N et. al., “On the Sanctuary of a Combined Confusion and Diffusion based scheme for Image Encryption,” International Journal of Engineering and Advanced Technology, Vol. 9, Issue 1, pp. 3258–3263, 2019 14. Xingyuan Wang, Xiaoqiang Zhu and Yingqian Zhang, “An Image Encryption Algorithm Based on Josephus Traversing and Mixed Chaotic Map,” IEEE Access Lett., Vol. 6, 2018, pp. 23733– 23746. 15. Prajwalasimha S N and Basavaraj L, “Design and Implementation of Transformation and nonChaotic Substitution based Image Cryptosystem,” International Journal of Engineering and Advanced Technology, Vol. 8, Issue 6, 2019. 16. Prajwalasimha S N, Kavya S R and Tanaaz Zeba Ahmed, “Design and analysis of pseudo hadamard transformation and non-chaotic substitution based image encryption scheme,” Indonesian Journal of Electrical Engineering and Computer Science, Vol. 15, No. 3, 2019, pp. 1297–1304.

An Efficient Geographical Opportunistic Routing Algorithm Using Diffusion and Sparse Approximation Models for Cognitive Radio Ad Hoc Networks A. V. Senthil Kumar, Hesham Mohammed Ali Abdullah, and P. Hemashree

1 Introduction Cognitive radio (CR) [1] is a novel wireless communication infrastructure in which the transceiver automatically identifies the activity status of each channel and utilizes vacant channels for communication without interference. CR optimizes radio-frequency (RF) spectrum utilization and also reduces interference to primary users, addressing remote networking under constrained access range and resource utilization [2] and different quality-of-service (QoS) requirements [3]. Cognitive radio ad hoc networks (CRAHNs) [4] are advanced networks utilizing the features of both CR and ad hoc networks. CRAHNs are distributed multi-hop networks in which the dynamic topology and the time-location variation of the spectrum are primary concerns. The greedy forwarding and pairwise routing methods of wireless multimedia sensor networks [5] can be an inspiration for this work. From the literature, it is found that CRAHN research is still developing and leaves room for improvement. Most opportunistic routing strategies do not have an exhaustive comprehension of these highly dynamic links for reliable communication over the network. This led to the development of SMOR [6]. Different amendments to SMOR routing were proposed in previous research works to improve its efficiency in terms of delay [7–9], error rate [10], packet loss, energy efficiency [11] and channel allocation [12].

A. V. Senthil Kumar · H. M. A. Abdullah · P. Hemashree () PG and Research Department of Computer Applications, Hindusthan College of Arts & Science, Bharathiar University, Coimbatore, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_30


A. V. Senthil Kumar et al.

2 SMOR Routing and Modified SMOR Routing Models 2.1 SMOR Routing Model The existing SMOR routing model [6] examines the system topology together with the principal links through a spectrum map in order to develop an efficient routing model. SMOR models were developed separately for large-scale and regular CRAHNs. For regular CRAHNs, the communication delay of multi-hop transmission is analyzed via Markov chain modeling with queuing network theory, and the SMOR-1 process examines the cooperative connections with higher quality for opportunistic routes. For large-scale CRAHNs, the communication delay is analyzed through stochastic geometry and queuing network analysis, and the SMOR-2 process enables global routing. The routing strategies of SMOR-1 and SMOR-2 are explained in [6]. The simplified algorithm of the SMOR routing models is as follows:

Algorithm 1 SMOR-1 and SMOR-2
i. Initialize nodes (100 for large-scale CRAHN)
ii. Source collects nodes' information
iii. Source partitions traffic into batches
iv. Source prioritizes forwarding nodes
v. Source checks ACK from each node
vi. Once ACK is received, Source no longer has control of the packet
vii. Forwarding node takes control of the packets
viii. Acts as a source until the packet is forwarded
ix. If ACK is not received, forwarding node initiates Source to the next step.

The SMOR routing model has the limitation that the source node has little control over transmitted packets once they are handed to the forwarding node. This criterion is generally effective, but considering the complex structure of CRAHNs, it can be vulnerable at times. This approach can provide less delay during transmission but, due to the vulnerability, the network throughput may be affected. Hence it becomes necessary to analyze throughput, and its relationship with delay has to be estimated to develop alternative approaches. This forms the basic foundation for the modified SMOR model.

2.2 Modified SMOR Routing The proposed SMOR model is modified in the sense that the ACK is obtained not only from the destination but also from all the nodes on the path to the destination. This can also be called a double-ACK method. In this approach, each node sends an ACK to the source upon reception of the data. Once the ACK chain is broken by any node, it


is termed abnormal and the opportunistic route is reshuffled. Though modified SMOR seems similar to SMOR routing, the difference in operation benefits the users. When the number of nodes in the CRAHN exceeds 100, the system is considered large-scale. Based on this threshold, the routing models are initiated. The modified SMOR algorithm is simplified as follows:

Algorithm 2 Modified SMOR-1
i. Number of nodes ≤ 100
ii. Source collects local as well as global information of the network nodes
iii. The network is divided into smaller regions
iv. Performs steps iii to ix of Algorithm 1
v. Routes inside each region with border nodes connecting each region to the destination

Modified SMOR-1 evaluates the relay-selection process based on packet-delay quality metrics, while Modified SMOR-2 employs the candidate-list updating concept to provide global routing for efficient cooperative opportunistic routing.
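The double-ACK chain can be abstracted as a simple membership check: every node on the path must acknowledge back to the source, and a single missing ACK marks the route abnormal and triggers a reshuffle. This is an illustrative sketch, not the authors' simulation code; node names are hypothetical:

```python
def ack_chain_ok(path, acks):
    """True if every node on the path has acknowledged the packet back to the source."""
    return all(node in acks for node in path)

def route_packet(path, acks_received):
    """Modified SMOR step: a broken ACK chain marks the route abnormal and forces a reshuffle."""
    if ack_chain_ok(path, acks_received):
        return "delivered"
    return "reshuffle"   # opportunistic route is re-selected

path = ["S", "n1", "n2", "D"]
print(route_packet(path, {"S", "n1", "n2", "D"}))   # delivered
print(route_packet(path, {"S", "n1", "D"}))         # reshuffle: n2 broke the ACK chain
```

The extra per-hop ACK traffic is the price paid for detecting misbehaving or failed relays immediately, rather than only at the destination as in the original SMOR.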

3 Opportunistic Routing Using Modified SMOR Models 3.1 Modified SMOR-1 for Regular CRAHNs As the regular CRAHN targets smaller networks, all CRs in the network are aware of each other without much strain, and the characteristics of all traffic patterns have to be examined. SMOR-1 uses Markov chain modeling and queuing network theory for this purpose. However, due to the problems in performance enhancement of the SMOR model, this modified model focuses on developing Diffusion Approximation based Markov Chain modeling [13, 14] with consideration of transmission delay and network throughput.


3.1.1 Diffusion Approximation Based Markov Chain Modeling

Consider Y(t), a continuous-time Markov process taking real values in the range 0 to R, either on a discrete set or on the infinite real line. In some rare cases both constraints may co-occur, but this is neglected here to reduce complexity. For t > t0, P(y, t | y0, t0) is the transition probability, i.e., the density function of Y(t) conditioned on Y(t0) = y0. The Markov property is imposed through the Chapman-Kolmogorov equation

∂P(y, t | y0, t0)/∂t = W · P(y, t | y0, t0)    (1)

Here W represents a y-dependent linear operator. The continuous-range Markov processes include a subclass in which W is a differential operator:

∂P/∂t = −(∂/∂y)[a(y)P] + (1/2)(∂²/∂y²)[b(y)P]    (2)

with two functions a(y), b(y), where b(y) > 0. Higher-order processes are impossible, and hence the constraints which define the subclass are

lim_{Δt→0} ⟨ΔY⟩/Δt = a(y);  lim_{Δt→0} ⟨(ΔY)²⟩/Δt = b(y);  lim_{Δt→0} ⟨(ΔY)^ν⟩/Δt = 0    (3)

Here y denotes the value of Y at time t, ΔY = Y(t + Δt) − Y(t), the mean is evaluated with fixed y as Δt → 0, and ν = 3, 4, . . . . Based on these constraints, the Kolmogorov equation (Eq. 2) can be derived. However, one constraint will be interchanged with the Lindeberg condition

Prob{ |Y(t + Δt) − Y(t)| > δ } = o(Δt)    (4)

for any δ > 0. Only when these constraints are satisfied does Eq. (2) hold. The problem arises with Kolmogorov's proof, which leaves the validity of this condition open. Some researchers have suggested that Eq. (2) holds only when Y is continuous, and that otherwise it starts following Eq. (4) owing to the restricted range. In order to resolve this, it is stipulated that any process obeying the Lindeberg condition must be accepted without further evaluation, and hence the substituted W becomes an identity as well as an approximation of the differential operator. This can be proved by deriving the master equation as in Eq. (2). However, Eq. (2) is not exact in a few cases, and higher derivatives are left over. It must be noted that Eq. (2) is also equivalent to the Langevin equation

ẏ = a(y) + √(b(y)) ℓ(t)    (5)

An Efficient Geographical Opportunistic Routing Algorithm Using Diffusion. . .

327

Here ℓ(t) denotes Gaussian white noise. This equation can also be written as

dy = a(y) dt + √(b(y)) dW(t)    (6)

Here W(t) denotes a Wiener process. It is also meant to determine the best result from this equation, considered as a systematic approximation pointed to the fundamental theory.

Prob{ |Y(t + Δt) − Y(t)| > δ } ∝ Δt    (7)

Equation (7) does not satisfy the Lindeberg condition. Thus the extension of the principal equation spontaneously provides the intended results. The final solution under the Lindeberg condition is given by the central limit theorem with a defined constant D:

P(y, t0 + Δt | y0, t0) = (1/√(4πDΔt)) exp[ −(y − y0)² / (4DΔt) ]    (8)

P is differentiable, as it obeys the Lindeberg condition:

∂P/∂t = D ∂²P/∂y²    (9)

Thus the delay and throughput can be examined using the diffusion approximation based Markov chain modeling.
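Equations (6), (8), and (9) can be checked numerically: simulating dy = a(y) dt + √(b(y)) dW with a = 0 and b = 2D via the Euler-Maruyama scheme, the ensemble variance after time t should approach the 2Dt variance of the Gaussian solution in Eq. (8). The step size and ensemble size below are illustrative:

```python
import numpy as np

def euler_maruyama(a, b, y0, t, dt, n_paths, rng):
    """Simulate dy = a(y) dt + sqrt(b(y)) dW(t) (Eq. (6)) for an ensemble of sample paths."""
    y = np.full(n_paths, float(y0))
    for _ in range(int(round(t / dt))):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Wiener increments
        y = y + a(y) * dt + np.sqrt(b(y)) * dw
    return y

D = 0.5   # diffusion constant of Eqs. (8)-(9)
rng = np.random.default_rng(42)
y = euler_maruyama(lambda s: 0.0 * s, lambda s: 2.0 * D + 0.0 * s, 0.0, 1.0, 1e-3, 20000, rng)
print(abs(float(y.var()) - 2.0 * D * 1.0) < 0.05)   # ensemble variance matches 2Dt, cf. Eq. (8)
```

The same machinery, with a non-zero drift a(y), models how a queue's backlog drifts and spreads, which is what links this approximation to delay and throughput analysis.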

3.2 Modified SMOR-2 for Large-Scale CRAHNs As large-scale CRAHNs are designated for large networks, their behavior is determined through the spatial node distribution. Similar to modified SMOR-1, modified SMOR-2 employs sparse approximation based stochastic geometry analysis [15, 16]. The characteristics of large-scale CRAHNs are examined via this analysis together with a queuing model.

3.2.1 Sparse Approximation Based Stochastic Geometry Analysis

Let the stochastic function u(ω), defined on an appropriate probability space (Ω, F, P), be expanded in chaos bases, i.e., u(ω) ≈ Σ_α c_α ψ_α(ω), with cardinality P. u(ω) is then sparse in the PC basis {ψ_α}. If this holds, then under suitable constraints the sparse PC coefficient vector c can be estimated using N ≪ P random


  samples of u(ω) [18]. Let Ω, F, P be a broad likelihood space where  denotes the set of fundamental procedures, F represents a σ-field, and P denotes a likelihood quantity. When u(ω) is defined with constrained field D ⊂ RD with margin ∂D is considered  −∇. a (x, ω) ∇u (x, ω) = f (x) x ∈ D (10) P − a.s.ω ∈ Ω. The diffusion coefficient a(x, ω) is like u(ω) and is the source of uncertainty in Eq. (10). a(x, ω) is assumed to be specific by a condensed KarhunenLoeve-like extension a (x, ω) = a(x) +

a(x, ω) = ā(x) + Σ_{i=1}^{d} √λ_i φ_i(x) y_i(ω)    (11)

where (λ_i, φ_i), i = 1, …, d, are the eigenpairs of the covariance function C_a(x₁, x₂) ∈ L²(D × D) of a(x, ω), and ā(x) is the mean of a(x, ω). It is further assumed that a(x, ω) fulfills the following conditions.

Condition I: For all x ∈ D, there exist constants a_min and a_max such that

0 < a_min ≤ a(x, ω) ≤ a_max < ∞,  P-a.s. ω ∈ Ω    (12)

Condition II: The covariance function C_a(x₁, x₂) is analytic on D × D [17], implying that there exist real constants c₁ and c₂ such that, for i = 1, …, d,

0 ≤ λ_i ≤ c₁ e^{−c₂ i^k}    (13)

and, for all α ∈ N^d,

√λ_i ‖∂^α φ_i‖_{L∞(D)} ≤ c₁ e^{−c₂ i^k}    (14)

where k := 1/D and α ∈ N^d is a fixed multi-index. The decay rates in Eqs. (13) and (14) become algebraic if C_a(x₁, x₂) has only C^s(D × D) regularity for some s > 0. By analyzing multiple opportunistic route directions, modified SMOR can enhance the accuracy of opportunistic routing with improved network performance and reduced delay.
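As a concrete illustration of the truncated Karhunen-Loève-like expansion of Eq. (11), the sketch below (our own, not the authors' code) computes discrete eigenpairs of an assumed squared-exponential covariance on a 1-D grid and draws one realization of a(x, ω); the covariance choice, mean field, and truncation order d are assumptions for demonstration only:

```python
import numpy as np

n, d = 200, 4                      # grid points, truncation order
x = np.linspace(0.0, 1.0, n)
abar = np.ones(n)                  # assumed mean field abar(x)

# Assumed squared-exponential covariance C_a(x1, x2); its eigenvalue
# spectrum decays rapidly, consistent with Condition II.
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

lam, phi = np.linalg.eigh(C)       # eigenpairs, ascending order
lam, phi = lam[::-1][:d], phi[:, ::-1][:, :d]   # keep d largest

rng = np.random.default_rng(0)
y = rng.standard_normal(d)         # random coordinates y_i(omega)

# Eq. (11): one realization a(x, omega) = abar(x) + sum sqrt(lam_i) phi_i(x) y_i
a = abar + phi @ (np.sqrt(lam) * y)
print(a.shape)
```

The rapid decay of `lam` is what makes the truncation to d terms accurate.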

4 Performance Evaluation

The Modified SMOR-1 and Modified SMOR-2 are simulated using the MATLAB tool, with the simulation environment set as in [6–12]. A wireless Poisson network topology is established for evaluation purposes. The performance of

An Efficient Geographical Opportunistic Routing Algorithm Using Diffusion. . .


the modified SMOR models is compared with that of the existing SMOR algorithms in terms of end-to-end delay (EED), bit error rate (BER), and throughput. Figure 1a shows the EED comparison against the packet arrival rate (lambda) for SMOR-1 and Modified SMOR-1, while Fig. 1b shows it for SMOR-2 and Modified SMOR-2. Modified SMOR-1 and Modified SMOR-2 have lower delay than SMOR-1 and SMOR-2. The modified SMOR models exhibit lower delay since the system resources are stationary in this setting; SMOR displays comparatively higher delay owing to its restricted range of accessible options, while Modified SMOR handles delay better due to the diffusion approximation concept. Figure 2a shows the BER comparison against lambda for SMOR-1 and Modified SMOR-1, while Fig. 2b shows it for SMOR-2 and Modified SMOR-2. Modified SMOR-1 and Modified SMOR-2 have lower BER than SMOR-1 and SMOR-2. In SMOR, increasing the span of the training structure


Fig. 1 (a) SMOR-1 vs. Modified SMOR-1 (End to end delay) and (b) SMOR-2 vs. Modified SMOR-2 (End to end delay)


Fig. 2 (a) SMOR-1 vs. Modified SMOR-1 (BER) and (b) SMOR-2 vs. Modified SMOR-2 (BER)



Fig. 3 (a) SMOR-1 vs. Modified SMOR-1 (throughput) and (b) SMOR-2 vs. Modified SMOR-2 (throughput)

Fig. 4 Overhead comparison

increases the bit error rate, but this is reduced by the effect of the SNR. This supports the operating principle that the modified SMOR model reduces performance degradation through improved accuracy. Figure 3a shows the throughput comparison for SMOR-1 and Modified SMOR-1, while Fig. 3b shows the throughput comparison for SMOR-2 and Modified SMOR-2. The Modified SMOR-1 and Modified SMOR-2 models achieve higher throughput than their counterpart SMOR models. The main issue with the modified SMOR models is the overhead caused by the ACK messages from each node. Figure 4 shows the overhead comparison: both modified SMOR models have a higher number of


control packets compared to the SMOR models. However, owing to the diffusion approximation and stochastic approximation, the overhead remains negligible: the difference between SMOR and Modified SMOR is around 4%, which can be neglected given the ultimate goal of efficient routing. This evaluation shows that the modified SMOR is significantly more efficient than the existing models, as the mathematical analysis of delay and throughput improves its performance.

5 Conclusion

This paper developed Modified SMOR for enhancing geographical opportunistic routing in CRAHNs. The Modified SMOR-1 and Modified SMOR-2 models achieved reliable communication, and simulations demonstrated the remarkable performance gains obtained from the approximation models, including throughput. Although the proposed models returned significant results, real-time implementation still requires larger infrastructure and associated costs; the gap between the cost models has to be reduced, as the implementation needs advanced components. The overhead problem, though minimized to a negligible rate, has to be examined further and reduced in future work.

References

1. Fette, B. A. (Ed.). (2009). Cognitive radio technology. Academic Press.
2. Zheng, H., & Peng, C. (2005). Collaboration and fairness in opportunistic spectrum access. In Communications, 2005. ICC 2005. 2005 IEEE International Conference on (Vol. 5, pp. 3132–3136). IEEE.
3. Xiao, H., Seah, W. K., Lo, A., & Chua, K. C. (2000). A flexible quality of service model for mobile ad-hoc networks. In Vehicular Technology Conference Proceedings, 2000. VTC 2000-Spring Tokyo. 2000 IEEE 51st (Vol. 1, pp. 445–449). IEEE.
4. Akyildiz, I. F., Lee, W. Y., & Chowdhury, K. R. (2009). CRAHNs: Cognitive radio ad hoc networks. Ad Hoc Networks, 7(5), 810–836.
5. Al-Ariki, H. D. E., & Swamy, M. S. (2017). A survey and analysis of multipath routing protocols in wireless multimedia sensor networks. Wireless Networks, 23(6), 1823–1835.
6. Lin, S. C., & Chen, K. C. (2014). Spectrum-map-empowered opportunistic routing for cognitive radio ad hoc networks. IEEE Transactions on Vehicular Technology, 63(6), 2848–2861.
7. Abdullah, H. M. A., & Kumar, A. S. (2016). A hybrid artificial bee colony based spectrum opportunistic routing algorithm for cognitive radio ad hoc networks. International Journal of Scientific & Engineering Research, 7(6), 294–303.
8. Abdullah, H. M. A., & Kumar, A. S. (2017). HB-SOR: Hybrid bat spectrum map empowered opportunistic routing and energy reduction for cognitive radio ad hoc networks (CRAHNs). International Journal of Scientific and Research Publications (IJSRP), 7(5), 284–297.


9. Abdullah, H. M. A., & Kumar, A. S. (2017). HFSA-SORA: Hybrid firefly simulated annealing based spectrum opportunistic routing algorithm for cognitive radio ad hoc networks (CRAHN). In 2017 International Conference on Intelligent Computing and Control (I2C2) (pp. 1–10). IEEE.
10. Abdullah, H. M. A., & Kumar, A. S. (2017). Modified SMOR using sparsity aware distributed spectrum map for enhanced opportunistic routing in cognitive radio ad hoc networks. Journal of Advanced Research in Dynamical and Control Systems, 9(6), 184–196.
11. Abdullah, H. M. A., & Kumar, A. S. (2018). Vertex search based energy-efficient optimal resource allocation in cognitive radio ad hoc networks. SPIIRAS Proceedings, 57(2).
12. Abdullah, H. M. A., & Kumar, A. S. (2018). Proficient opportunistic routing by queuing based optimal channel selection for the primary users in CRAHN. ARPN Journal of Engineering and Applied Sciences, 13(5), 1649–1657.
13. Kleinrock, L. (1976). Queueing systems, volume 2: Computer applications (Vol. 66). New York: Wiley.
14. Van Kampen, N. G. (1982). The diffusion approximation for Markov processes. De Gruyter.
15. Halko, N., Martinsson, P. G., & Tropp, J. A. (2009). Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions.
16. Di Renzo, M., Lu, W., & Guan, P. (2016). The intensity matching approach: A tractable stochastic geometry approximation to system-level analysis of cellular networks. IEEE Transactions on Wireless Communications, 15(9), 5963–5983.
17. Beygi, S., Mitra, U., & Ström, E. G. (2015). Nested sparse approximation: Structured estimation of V2V channels using geometry-based stochastic channel model. IEEE Transactions on Signal Processing, 63(18), 4940–4955.
18. Doostan, A., & Owhadi, H. (2011). A non-adapted sparse approximation of PDEs with stochastic inputs. Journal of Computational Physics, 230(8), 3015–3034.

A.V. Senthil Kumar obtained his BSc degree (Physics) in 1987, P.G. Diploma in Computer Applications in 1988, and MCA in 1991 from Bharathiar University. He obtained his Master of Philosophy in Computer Science from Bharathidasan University, Trichy, in 2005 and his Ph.D. in Computer Science from Vinayaka Missions University in 2009. He is working as Director & Professor in the PG and Research Department of Computer Applications, Hindusthan College of Arts and Science, India. He has to his credit 5 book chapters, 74 papers in international journals, 2 papers in national journals, 22 papers in international conferences, 5 papers in national conferences, and three edited books in Data Mining, Mobile Computing, and Fuzzy Expert Systems (IGI Global, USA). He is Editor-in-Chief for 5 international journals and a key member for India of the Machine Intelligence Research Lab (MIR Labs). He is an editorial board member and reviewer for various international journals, and a committee member for various international conferences. He is a life member of the International Association of Engineers (IAENG) and the Systems Society of India (SSI), and a member of The Indian Science Congress Association, the Internet Society (ISOC), the International Association of Computer Science and Information Technology (IACSIT), and the Indian Association for Research in Computing Science (IARCS). He has received many awards from national and international societies.


Hesham Mohammed Ali Abdullah obtained his BSc degree in July 2006 from National University, Yemen, and completed his Master of Computer Applications (MCA) degree in 2013 from Bharathiar University, India. He obtained his Ph.D. in Computer Science in 2018 at Hindusthan College of Arts & Science, under Bharathiar University, India. His research interests include cognitive radio ad hoc networks, wireless communication, network security, big data, machine learning, and visualization. He has published around 10 papers in national and international journals, contributed a chapter to the edited book Enabling Technologies and Architectures for Next-Generation Networking Capabilities (IGI Global, USA), and has published papers in IEEE conferences.

P. Hemashree obtained her BSc (Computer Science, Mathematics, Statistics) in April 2012 from Mount Carmel College, Bangalore, and her MCA degree from Coimbatore Institute of Technology, Coimbatore, in 2015. She has been associated with Hindusthan College of Arts and Science as an Assistant Professor in the PG and Research Department of Computer Applications since 2016.

Traffic Violation Tracker and Controller S. P. Maniraj, Tadepalli Sarada Kiranmayee, Aakanksha Thakur, M. Bhagyashree, and Richa Gupta

1 Introduction

Around the world, urban cities deal with a tremendous number of traffic-related problems, with millions of vehicles running through their roads every moment. Traffic congestion is an enormous problem for everyone in the cities and slows down daily activities. As most principal sources of development are linked to the roadways, congestion affects a country's production: as the saying goes, if one thing gets affected, everything related to it also gets affected. It also affects the environment drastically. Motor vehicles are among the major producers of pollution worldwide, and the transport sector is largely responsible for this. Vehicles that move slowly produce more pollution, due to fuel consumption, than those that move faster, and people traveling in or exposed to this polluted environment face health issues. Congestion can also be dangerous when important vehicles such as police vehicles, fire brigades, and ambulances get stuck in traffic. With the increasing crowds and vehicles in cities, traffic congestion becomes more problematic. It is considered that the growing population and the sparseness of public transport prompt traffic congestion [1]; other reasons include obstacles on the path, traffic violators, inadequate green light time, etc. Figure 1 shows heavy traffic congestion in an urban city. The sectors of society primarily affected are traffic controllers and commuters. Traffic controllers, regardless of weather and climate, risk their lives on the road to control the traffic flow. There should be a centralized system, without manual intervention, to monitor road activities. Monitoring the situation in the control room with the aid of security cameras on roads is a tedious

S. P. Maniraj · T. S. Kiranmayee · A. Thakur · M. Bhagyashree · R. Gupta
SRM Institute of Science and Technology, Chennai, India
e-mail: [email protected]
© Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_31


Fig. 1 Traffic congestion in an urban city

task, and tracking traffic violators through it is impractical. This gives violators the advantage of continuing to violate until caught on some occasion. Therefore, the need for a new system arises from the fact that traffic handlers cannot be omnipresent, but technology can act as a proxy and report to them, thereby reducing their legwork drastically. This paper puts forward a system in which traffic violators can be tracked automatically without physical intervention, thereby reducing road accidents and collisions. This ideology indeed gives a huge technological relief to traffic controllers and handlers.

2 Literature Survey

A smart traffic system (STS) is one of the predominant facets of a future smart city, and a smart city is a concept that assimilates various technologies in a safe manner. Conventional STS deployments are expensive yet highly configurable for providing better public traffic management services. Here, a Smart Traffic System (STS) [1] is proposed as a low-cost future system that provides better services by deploying traffic updates instantly. The experimental setup has three modules for overall application design: the Internet of Things module, the Big Data module, and the User Interaction module. Low-cost sensors are placed in the middle of the road every 500 m; these sensors detect the presence of vehicles. The data is collected quickly over IoT, and the real-time streaming data is sent for data


processing to Big Data analytics. Several analytics scripts analyze the traffic density data and provide solutions through predictive analysis. A mobile application is developed as the user interface to explore traffic density at various places, providing an alternate way of managing traffic via the Internet; more crowded roads can thus be avoided, reducing traffic congestion to an extent [2]. A design combining wireless technologies and sensors has also been suggested, in which traffic density and traffic light timing are predicted using IR sensors, a green path is provided to important vehicles in emergency situations, and XBee is used for the automatic mode of operation [3–5]. An algorithm to give priority to certain vehicles, such as ambulances and trucks, was also designed, in which the part of the road with denser traffic was placed second in the priority queue; the authors discussed how the present scenario can be rectified [6–10]. This method makes use of RFID and a Raspberry Pi. Traffic congestion has major impacts on countries in several aspects: high fuel consumption, air pollution, and slower economic growth [11, 12]. The system focuses on traffic clearance in emergency situations and on giving notifications about stolen vehicles, and is applicable both manually and automatically. The manual mode is handled by an authorized individual of the traffic control department using the Raspberry Pi, and, as mentioned by the other authors, the traffic density is predicted using IR sensors. The road activities are continuously monitored by authorized personnel in the control room and the required actions are taken accordingly [13].

3 Proposed Methodology

Figure 2 displays the present traffic signal system model. The present system is not efficient, as it does little to reduce traffic congestion. Traffic congestion is caused by multiple factors such as traffic collisions, inadequate green time, and obstacles. Many projects have been suggested to resolve these issues, but one of the predominant contributors to traffic congestion, namely the traffic violator, has been left out. Traffic violators are the law offenders who break traffic rules by:
• Not keeping to the left on a two-way road.
• Crossing a junction by surpassing the signal when the signal light is red.
• Crossing the white solid line drawn on the road.
• Driving in both directions of a one-way road.

Traffic violators are difficult to catch in the present scenario. Presently, they are caught manually by traffic officers on road duty, but traffic personnel cannot be omnipresent on roads. Surveillance cameras have been installed at traffic signal junctions, but the controller sitting in the control station cannot be monitoring the dozens of surveillance monitors all the time. Thus, the need arises for a fully automated system that tracks violators without manual intervention. In [7], a wireless traffic light controller is proposed


Fig. 2 Existing traffic signal model

for policemen during peak hours, which can ease traffic police control of the traffic lights [7], thereby providing the traffic officers great technological relief. This paper puts forward a system where traffic violators are caught automatically without the physical involvement of a traffic officer. Whenever a traffic rule is broken, such as crossing the junction during a red light signal, the violator is tracked immediately by a scanner that reads the Quick Response code off the vehicle. The information in the code is sent to the cloud for processing, the details of that particular person are acquired, and the person is warned via a notice if the offense is within the tolerable number of times; if it exceeds the admissible number of times, appropriate actions are taken by the traffic officers. The traffic officer has access to all these profiles and details stored on the cloud. Therefore, this system renders the traffic officers a productive solution for tracking violators with ease. The proposed traffic design model is displayed below in Fig. 3. Our methodology comprises three working phases:
• Phase 1: Detecting and monitoring the road activities.
• Phase 2: Sending and processing the information gathered.
• Phase 3: Controller interface.
Figure 4 represents the architecture diagram of the proposed traffic design model.


Fig. 3 Proposed traffic design model

Fig. 4 Architecture of the proposed model



3.1 Detecting and Monitoring the Road Activities

In this first phase, road activities are monitored for any abnormalities. Every vehicle carries a Quick Response (QR) code on its number plate. Every road junction has a pair of two-dimensional scanners, which are activated when the signal is red. When a person breaks the red-light rule by crossing at that time, the data stored in the QR code is read immediately as the vehicle passes through the lasers emitted by those scanners. This data is quickly sent from the scanners to the microcontroller over the network, which then sends it to the cloud for processing. So the first phase is all about sensing and collecting data.
• Quick Response Code: A QR code is a matrix barcode, also referred to as a two-dimensional barcode. It is much more capable than the traditional Universal Product Code (UPC), which is one-dimensional. For this project, we use a QR code over the UPC barcode because of its quick readability and higher storage capacity. The code is placed on the number plate of every vehicle and stores the vehicle number as a unique code.
• Two-Dimensional Scanner: Two-dimensional scanners are used for scanning any two-dimensional code such as a QR code. A scanner emits a non-harmful red laser that scans the code and retrieves the information stored in it. Here, two-dimensional transmitters/receivers are placed on both sides of every road junction; their placement is shown in Fig. 3 and covers all four junction start points. The scanners are activated only when the signal light is red and deactivated when it turns green or yellow; the indicator to turn on comes from the microcontroller inside the traffic signal. Figure 5 represents the mechanism of QR working.
• Raspberry Pi: The microcontroller used here is the Raspberry Pi, chosen over existing single board computers like the Arduino Yun and Intel Galileo due to its low cost. It also has efficient networking features and can be configured as a web server. This microcontroller is present inside the traffic signal and functions like a switch: it is programmed so that whenever the traffic light turns red, it signals the transmitter/receiver to turn on, and when the red light goes off, it signals the laser to turn off. Any information read by the scanners is immediately sent to the Raspberry Pi (Fig. 6).
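The Phase-1 control logic described above, where the microcontroller enables the scanners only during the red phase, can be sketched as a small state machine. This is our illustration, not the authors' firmware; in deployment the light changes and QR reads would come from GPIO and the scanner hardware:

```python
class SignalScanner:
    """Red-light-gated QR capture (illustrative sketch only)."""

    def __init__(self):
        self.light = "green"
        self.violations = []          # payloads forwarded to Phase 2

    def set_light(self, colour):
        # The Raspberry Pi toggles the scanner laser with the light phase
        self.light = colour

    def read_qr(self, vehicle_number):
        # A QR read counts as a violation only while the signal is red
        if self.light == "red":
            self.violations.append(vehicle_number)


s = SignalScanner()
s.read_qr("TN-01-AB-1234")   # green phase: scanner off, read ignored
s.set_light("red")
s.read_qr("TN-01-AB-1234")   # red phase: captured for Phase 2
print(s.violations)          # ['TN-01-AB-1234']
```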


Fig. 5 Mechanism of QR working

Fig. 6 Raspberry Pi model B [4]

3.2 Sending and Processing of the Information Gathered

In this second phase, the data read by the scanner is sent to the Raspberry Pi microcontroller. The RPi then communicates with the cloud and dispatches the data to it over a wireless network for further processing. The violator's record is


retrieved from the cloud, and the traffic officers are able to view it on a controller interface given to them.
• Microsoft Azure SQL Database: Complete details of the vehicles, including the profiles of their owners, are stored on this platform: everything from name and address to vehicle details, including a track profile that records how many times they have broken a certain rule and whether they have previously been involved in any legal traffic issues. Storing all this requires a cloud database, i.e., a database that runs on a cloud computing platform; we use the Microsoft Azure SQL Database because of its high security and efficiency. The unique code present in the QR, which is the vehicle number, acts as an index in the database from which all the details can be retrieved from the cloud. The recovered information is sent to the interface the officers use.
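The Phase-2 lookup can be sketched as follows. The vehicle number decoded from the QR code serves as the index into the profile store; in deployment this would be an Azure SQL query, but a dictionary stands in here, and the profile schema is our assumption, not the paper's:

```python
# In-memory stand-in for the cloud profile database, keyed by the
# vehicle number (the unique code stored in the QR).
profiles = {
    "TN-01-AB-1234": {"owner": "A. Driver",
                      "email": "driver@example.com",
                      "violations": 2},
}

def record_violation(vehicle_number):
    """Look up the profile by its index and update the track record."""
    profile = profiles.get(vehicle_number)
    if profile is None:
        return None                 # unregistered vehicle
    profile["violations"] += 1      # one more recorded offense
    return profile

p = record_violation("TN-01-AB-1234")
print(p["violations"])   # 3
```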

3.3 Controller Interface

Traffic officers can view the details which the cloud sends back in an interface given to them. It is like a mini portal from which the traffic officers get updates about all the activities happening. The traffic officers, or the person assigned to manage the database, also have access to the cloud. The track record in the database is frequently updated and monitored. Whenever a traffic officer gets information about anyone violating a rule, the track record of that person is analyzed and he/she is notified: a warning message is sent via email and SMS, using the mail id and phone number provided in the profile, stating the rule that was broken. There is a permissible limit, decided by the officer, after which strict actions are taken. The monitoring application given to the traffic officers is user friendly and can be installed on any operating system, smartphone, or network-enabled device.
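The warn-or-escalate decision described above can be expressed as a simple threshold rule. The concrete limit and the action names below are illustrative assumptions; the paper leaves the limit to the officer's discretion:

```python
# Officer-defined permissible limit (assumed value for illustration)
PERMISSIBLE_LIMIT = 3

def action_for(violation_count):
    """Return the response for a violator's current track record."""
    if violation_count <= PERMISSIBLE_LIMIT:
        return "warn"        # notice sent via email and SMS
    return "escalate"        # strict action by the traffic officer

print(action_for(2))   # warn
print(action_for(5))   # escalate
```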

4 Conclusion

The challenging traffic congestion and traffic-related issues faced by commoners are growing at a rapid pace, and the present scenario is becoming substandard with the increasing crowds and vehicles in cities. These circumstances require corrective measures: the ramshackle present system needs to be modified to bring out the best. Research has been carried out on reducing traffic densities, providing sufficient green light phase time, obstacle detection, prioritizing the


important vehicles, etc. One significant subject, the traffic violator, has always been out of focus. Thus, this paper renders an ingenious technique to trace these pesky law breakers.

5 Future Work

Any implementation of a new ideology needs some time to come into force. The future holds emerging technologies: the sole motive of an idea may remain the same, but the technologies used stay dynamic. Similarly, the above methodology can be implemented with different automation techniques, mechanization, computers, etc. Further exploration can therefore be done in this domain.

References

1. Hon Fong Chong, Danny Wee Kiat Ng, Lee Kong Chian, "Development of IoT Device for Traffic Management System", 2016 IEEE Student Conference on Research and Development (SCOReD).
2. Abida Sharif, Jianping Li, Mudassir Khalil, Rajesh Kumar, Muhammad Irfan Sharif, Atiqa Sharif, "Internet of Things - Smart Traffic Management System for Smart Cities Using Big Data Analytics", 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP).
3. Paul Jasmine Rani L, Khoushik Kumar M, Naresh K S, Vignesh S, "Dynamic traffic management system using infrared (IR) and internet of things (IoT)", 2017 Third International Conference on Science Technology Engineering & Management (ICONSTEM).
4. Syed Misbahuddin, Junaid Ahmed Zubairi, Abdulrahman Saggaf, Jihad Basuni, Sulaiman A-Wadany, Ahmed Al-Sofi, "IoT Based Dynamic Road Traffic Management for Smart Cities", 2015 12th International Conference on High-capacity Optical Networks and Enabling/Emerging Technologies (HONET).
5. Harshini Vijetha H, Nataraj K R, "IoT Based Intelligent Traffic Control System", International Journal for Research in Applied Science & Engineering Technology (IJRASET).
6. I. Aldukali Salem, Riza Almselati, O.K. Atiq, Othman Jaafar Rahmat, "An Overview of Urban Transport in Malaysia", Medwell Journals, vol. 6, no. 1, pp. 24–33, 2011.
7. K. Thatsanavipas, N. Ponganunchoke, S. Mitatha, C. Vongchumyen, "Wireless Traffic Light Controller", Procedia Engineering, vol. 8, pp. 190–194, 2011.
8. Khalil M. Yousef, N. Jamal, AI Karaki, Ali Shatnawi, "Intelligent Traffic Light Flow Control System Using Wireless Sensor Networks", Journal of Information Science and Engineering, vol. 26, pp. 753–768, 2010.
9. Diogenes Yuri, "Internet of Things security from the ground up", Microsoft, [online] Available: https://azure.microsoft.com/en-us/documentation/articles/securing-iot-ground-up/.
10. Analiza funkcjonalna dla Inteligentnego Systemu Zarządzania Transportem w Katowicach, z uwzględnieniem funkcji metropolitalnej Miasta Katowice, BIT, Poznań, 2011.
11. Inteligentny System Zarządzania Transportem Publicznym, praca zbiorowa, Zespół Automatyki w Transporcie, Wydział Transportu, Politechnika Śląska, Katowice, 2008.
12. http://www.raspberrypi.org.
13. http://arduino.cc/en/ArduinoCertified/IntelGalileo.

PTCWA: Performance Testing of Cloud Based Web Applications M. S. Geetha Devasena, R. Kingsy Grace, S. Manju, and V. Krishna Kumar

1 Introduction

Performance testing is the process of determining the stability and responsiveness of software under some workload. This paper deals with the performance testing of web applications in terms of resource utilization. The testing of a web application has to be done prior to its release; this assures that the application does not suffer from being "slash-dotted" under a heavy user load [1]. Performance testing of a web application analyses the behavior of the application, web servers, and databases under moderate and heavy user load. Since the web and the Internet have very dynamic performance capabilities and bottlenecks, performance testing of web applications is important.

1.1 Pitfalls in Performance Testing of Web Applications

Performance testing of web applications has some drawbacks. First and foremost, expecting performance tuning information as a result of performance testing is not possible, because it is designed only to provide response statistics [1]. The other drawback concerns introducing performance testing too early in the life cycle of the

M. S. Geetha Devasena · R. Kingsy Grace · V. Krishna Kumar
Department of Computer Science and Engineering, Sri Ramakrishna Engineering College, Coimbatore, India
e-mail: [email protected]; [email protected]; [email protected]
S. Manju
Department of Computer Science and Engineering, CMR Institute of Technology, Bangalore, India
e-mail: [email protected]
© Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_32


product: introducing performance testing of web applications too early in the life cycle can result in additional time spent on script modification during subsequent runs of performance testing.

1.2 Cloud Testing and Its Benefits

Traditional approaches to testing software incur high costs to simulate user activity from different geographic locations. For applications whose deployment environment is not stable and whose growth in the number of users is unpredictable, cloud testing can be deployed. Though it is found to be more effective than traditional methods, information such as testing techniques, suggestions, methods, and tools for cloud systems is not available as explicitly as for traditional software testing methodologies, which are mostly based on testers' best practices and standards [2, 3]. The ease of accessibility and the "pay as you use" approach of the cloud's highly available computing resources provide the ability to replicate real-world usage of these systems among geographically distributed users. This also helps in analyzing varieties of user scenarios beyond traditional testing environments, thus accounting for good scalability and a reduction in capital expenditure [1]. Well-known examples of cloud platforms include Google's App Engine, Amazon's EC2, Microsoft's Azure, and IBM SmartCloud. Performance testing measures system throughput and latency with a variety of concurrent users over long periods of time.
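The kind of measurement described above, latency and throughput under many concurrent users, can be sketched in a few lines. This is a generic illustration, not the paper's tool; `handle_request` is a stand-in for a real HTTP call to the application under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for a real request to the web application under test
    time.sleep(0.01)          # simulated server work
    return 200

def load_test(users, requests):
    """Replay `requests` calls across `users` concurrent workers and
    report (average latency in seconds, throughput in requests/sec)."""
    latencies = []

    def timed():
        start = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - start)

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(requests):
            pool.submit(timed)
    wall = time.perf_counter() - t0   # the with-block waits for all workers
    return sum(latencies) / len(latencies), requests / wall

avg_latency, throughput = load_test(users=10, requests=100)
print(avg_latency, throughput)
```

A real cloud load test would distribute these workers across geographically separate machines; the structure of the measurement stays the same.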

1.3 Challenges in Cloud Testing

Even though cloud testing offers high performance compared to traditional schemes, it has some challenges to be met [4]. The main challenges are support for dynamic scalability, identification of a test process for TaaS [2] to support on-demand automated testing, and the lack of standards. The rest of the paper is organized as follows. Section 2 reviews the existing testing tools and methods on cloud frameworks. Section 3 highlights the proposed performance testing scheme for web applications in both public and private clouds. The implementation details are presented in Sect. 4. Section 5 lists the results obtained by the proposed tool and Sect. 6 concludes the paper.

PTCWA: Performance Testing of Cloud Based Web Applications
M. S. Geetha Devasena et al.

2 Related Work
This section presents the literature related to performance testing of web applications. Atif et al. have explored traditional software testing methods alongside different cloud-services testing methods and stated that cloud testing can be more advantageous than traditional testing mechanisms [10]. The paper also points out the lack of standards, tools and security policies in cloud testing. In [2], Vanitha et al. investigated existing cloud-platform research trends, issues, commercial tools and cloud testing models. The potential risks faced in cloud testing are listed as security, lack of standards, and the difficulty of creating real-time environments. The paper also analyzed various testing tools such as SOASTA [10], iTKO LISA [11] and Cloud Testing [12]. SOASTA CloudTest is a performance testing tool for web applications that can simultaneously simulate thousands of virtual users in both public and private clouds. The free version of SOASTA, called Lite, allows one hundred concurrent users. iTKO LISA is a cloud-based application that aims at development, verification and validation. Jerry et al. compared software testing practices in traditional and cloud systems and claimed that TaaS would be the easiest way of testing applications [3], but its negative side is the lack of standards and test models. Tamanna et al. surveyed the various testing approaches in cloud environments. The paper briefed the most important software testing frameworks that could be extended to the cloud environment, such as test case prioritization, clustering, load balancing and security mechanisms. The "pay as you use" cloud paradigm is compared with traditional software testing, which involves high cost [5]. Smara et al. proposed a new acceptance testing approach to detect faults in component-based cloud environments [6]. The research work of Atif et al. explores the difference between several software testing approaches on cloud platforms and the traditional approaches. Yet another comparative study, by Nachiyappan et al., has differentiated the various cloud testing tools [7], such as SOASTA, iTKO LISA, LoadRunner [8], Blitz [9] and BlazeMeter [10].
LoadRunner [8], developed by HP, is an automated application load-testing tool. It examines system behavior and performance by simulating thousands of concurrent users with real-life loads. Blitz is also a load testing tool for web applications and Application Programming Interfaces, used to assess their scalability [7, 9]. It is capable of testing about 50,000 concurrent users on a pay-per-test model. BlazeMeter [10] is a self-serviced load testing tool designed for professional use. It helps users analyze performance and fix errors in less time by simulating loads of up to 100,000 concurrent users. Its significance lies in the use of a graphical web environment to create one's own test scripts and scenarios. Testing becomes easy when proper test cases exist; they can be produced both manually and automatically. Manual generation of test cases might yield inaccurate results because of inadequate test-case generation skills. To ease this difficulty, Priyanka et al. proposed a framework to generate test cases automatically [11], achieved by integrating a soft computing technique with Apache Hadoop MapReduce. While most researchers work on cloud platforms for software testing, Khan and Amjad highlighted the importance of performance testing of web applications [12]. Performance testing was done based on a predefined load on the HP ALM tool, which analyses the execution speed of the system. Taking this performance testing as motivation, we present our performance testing tool in the next section.

3 A Performance Testing Tool for Cloud Infrastructure
Drawing on the issues and problems observed in cloud testing, this paper proposes a cloud-based performance testing tool for web applications, deployed in both public and private clouds.

3.1 Private Cloud Setup
OpenStack was used to deploy the cloud on the test bed. Installation was done on multiple nodes. Each step must be followed to obtain a fully operational OpenStack installation on our test bed, which consists of one controller and five compute nodes.

3.2 Fetching Performance Parameters
The web application or infrastructure to be tested is hosted on, or remotely connected to, the test bed (the private cloud) through an SSH client. Once the application is connected, the parameter values are fetched and recorded based on the user-increment value set by the tester. The values are stored separately for each instance under consideration. Fetched parameters are segregated by type and stored separately. The flow of fetching performance parameters is shown in Fig. 1.
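Concretely, the fetch step can be sketched as parsing a `/proc/meminfo`-style snapshot retrieved from an instance over SSH. This is a minimal illustration under the assumption of a Linux guest, not the tool's actual code; the sample values are illustrative.

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style output into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, rest = line.split(":", 1)
        fields = rest.split()
        if fields and fields[0].isdigit():
            values[key.strip()] = int(fields[0])
    return values

# A snapshot as it might arrive from a remote instance over SSH.
sample = """MemTotal:       3040700 kB
MemFree:        1504632 kB
Buffers:          48912 kB"""

stats = parse_meminfo(sample)
used_kb = stats["MemTotal"] - stats["MemFree"]
```

The same parser can be reused for every instance, with the resulting dictionaries stored per instance and per increment, as the text describes.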

3.3 Analysis of the Fetched Values
The fetched values are taken for analysis, and the performance bottlenecks are found based on the generated values. Analysis is done by an aggregation mechanism in which thresholds are found. Based on the obtained thresholds, the values are screened and analyzed, enabling the performance bottlenecks to be found.

Fig. 1 Parameters fetching process
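The aggregation step can be sketched as deriving a threshold from the fetched series and screening values against it. The mean-plus-two-standard-deviations rule below is an illustrative choice, not a formula stated in the paper.

```python
import statistics

def find_bottlenecks(samples, k=2.0):
    """Screen a series of fetched values against an aggregate threshold."""
    mean = statistics.fmean(samples)
    spread = statistics.pstdev(samples)
    threshold = mean + k * spread
    flagged = [i for i, v in enumerate(samples) if v > threshold]
    return threshold, flagged

# e.g. CPU utilization samples (%) recorded at increasing user counts;
# the spike at the last increment is flagged as a potential bottleneck.
threshold, flagged = find_bottlenecks([10, 11, 10, 12, 11, 50])
```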

3.4 Report Generation
Based on the analysis of the fetched values, a graphical representation is drafted, which helps in finding the performance bottlenecks. These reports are based on the threshold values obtained. The report is stored and can be used for historical analysis.

4 System Implementation
Performance testing differs from other types of testing in that it collects more details and requires deeper analysis. The effort involved in performance testing is greater because tests are generally repeated several times, which in turn involves higher cost. A major challenge in performance testing is to find the right process so that this effort can be minimized. The workflow of the process is shown in Fig. 2.

Fig. 2 Performance testing workflow


4.1 Recording User Scenarios
Proper user scenarios should be framed as the first step of performance analysis; they play a significant role until the analysis process completes. The user scenarios framed and recorded are based on the number of users connected and the network connectivity. Users are connected in increments to achieve maximum variation. When a cloud is considered, it is mandatory that the scenarios are recorded in increments to obtain scenarios for performance analysis. The recorded scenarios are maintained for historical analysis and are very useful when a similar performance analysis occurs. These scenarios are framed so that they can be used for both manual and automated analysis.

4.2 Parameterization of Test Scripts
Parameterization lets you run the same test again and again with different values. It is commonly used in Behavior Driven Development (BDD) frameworks for user acceptance testing. Parameters are defined based on the user scenarios, so the resulting test scripts can be used for either a manual or an automated method of performance analysis. These test scripts form the basis of the performance analysis process and are designed so that the analysis is done accurately at the right region of analysis. Parameterization is done in two major forms: global and local. Global parameterization leads to test scripts that can be used across test suites having unified formats. Local parameterization leads to test scripts that can be used only with the developed tool.
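Global parameterization can be sketched as a script template whose placeholders are bound per run; the template string, host and field names below are purely illustrative, not the tool's actual script format.

```python
# Hypothetical test-script template shared across test suites.
TEST_SCRIPT = "GET {host}/login?user={user}&concurrent={users}"

def parameterize(template, **params):
    """Bind one set of parameter values to the shared template."""
    return template.format(**params)

# One parameterized run per user increment.
runs = [
    parameterize(TEST_SCRIPT, host="https://app.example.com",
                 user="tester{}".format(i), users=n)
    for i, n in enumerate([100, 200, 300, 400], start=1)
]
```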

4.3 Grouping User Scenarios
Grouping is another important part of the analysis process. The recorded user scenarios are grouped based on priority. Test scenarios with higher priority are taken into the first round of the analysis process. These test scenarios also depend on the parameterization of the test scripts written for a specific purpose. Scenarios with lower priority are analyzed later.

4.4 Load Generation
This phase analyzes the withstanding capacity of the cloud server, so the load given to the server should be highly varied. For a cloud server, the applied load is the key performance analysis factor. At any instant of time, the performance parameters and the number of users connected to the cloud server are obtained with utmost accuracy. The load generation process varies the number of users sequentially with respect to the performance analysis process. Load generation can be done either manually or through automation.
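Sequentially increasing load can be sketched with one thread per virtual user. The sleep stands in for a real scripted request, and the increment list is illustrative, not the tool's configuration.

```python
import threading
import time

def virtual_user(results, user_id):
    """Hypothetical virtual user: stands in for one scripted scenario run."""
    time.sleep(0.001)          # placeholder for the real request
    results.append(user_id)

def generate_load(increments):
    """Apply sequentially increasing user counts, one thread per user."""
    observed = []
    for n_users in increments:
        results = []
        threads = [threading.Thread(target=virtual_user, args=(results, i))
                   for i in range(n_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        observed.append(len(results))
    return observed

counts = generate_load([10, 20, 30])
```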

4.5 Performance Analysis Process
The grouped test scripts are then run, and the vital performance parameters are obtained based on the user scenarios and the generated load. The analysis process produces a generalized report that is logged for historical analysis. The report contains important parameters, including CPU utilization and memory utilization, for every scenario under analysis. Through this process, the bottlenecks that arise are easily found and corrected.

4.6 Report Generation
A detailed report on the whole performance analysis process is produced, containing the aggregate memory usage and CPU utilization for the given user scenario; a graphical representation of the same is drafted. Reports are generated according to the type of cloud server under consideration. As stated earlier, results are obtained by deploying the tool on both private and public cloud servers.

4.6.1 Local Private Cloud Server
Table 1 shows the report generated on the local private cloud server, whose performance parameters are based on the load increments given by the user. A graphical representation is drafted based on the utilization of the performance parameters in the private cloud server [8].

4.6.2 Public Cloud Server
Table 2 shows the report generated on the public cloud server, whose performance parameters are based on the load increments given by the public cloud domain controller. A graphical representation is drafted based on the utilization of the performance parameters in the public cloud server [8]. Larger load increments can also be applied to the analysis to obtain accurate results.


Table 1 Report from local private cloud server (total memory: 3,040,700 kB)

Used memory   Free memory   Buffers
1,536,068     1,504,632     48,912
1,565,848     1,474,852     49,328
1,565,880     1,474,820     49,328
1,566,168     1,474,532     49,328
1,565,440     1,475,260     49,328
1,565,400     1,475,300     49,336
1,564,788     1,475,912     49,336
1,565,216     1,475,484     49,336
1,565,144     1,475,556     49,336
1,564,792     1,475,908     49,336
1,565,180     1,475,520     49,344
1,565,356     1,475,344     49,344
1,565,400     1,475,220     49,344

Table 2 Report from public cloud server (total memory: 5,750,196 kB)

Used memory   Free memory   Buffers
1,672,096     4,078,100     30,052
1,766,876     3,983,320     33,080
1,767,228     3,982,968     33,080
1,767,200     3,982,996     33,080
1,767,448     3,982,748     33,088
1,767,608     3,982,588     33,088
1,767,572     3,982,624     33,096
1,767,988     3,982,208     33,096
1,768,032     3,982,164     33,096
1,768,052     3,982,144     33,096
1,768,024     3,982,172     33,104
1,768,132     3,982,064     33,104
1,768,024     3,982,172     33,104

4.7 Performance Analysis from a Mobile Platform
The proposed tool allows the user to monitor the performance of a cloud server in real time from a mobile device, such as a smartphone or tablet running the Android operating system, as shown in Fig. 3. Being a mobile implementation, it is highly portable and can be used anywhere, anytime, provided proper privileges exist to access the cloud server. The results generated on the mobile platform can be used on normal desktop or server platforms and vice versa. This is the first implementation of its kind to analyze performance parameters from a mobile device.

Fig. 3 Report of an Android mobile platform

5 Results and Discussion
This section presents the results obtained from performance testing on both public and private cloud servers.

5.1 Performance Testing in Public Cloud
Performance testing on a public cloud server was implemented on www.salesforce.com, which allows a maximum of 400 users with a total of 4.5 GB memory. Tables 3 and 4 show the performance parameters and the maximum CPU utilization and memory usage in the cloud. The results were obtained for a maximum load of 400 users by uniformly incrementing 100 users at every iteration from the minimum scale of 100 users. The threshold memory value is calculated, and it is found that linearity is achieved within the thresholds at the various user increments.

5.2 Performance Testing in Private Cloud
Performance testing on a private cloud server was implemented with the help of the CogCloud Foundation. The private cloud deployed allows a maximum of 100 users with a total of 4 GB memory. The performance parameters fetched in this setup are shown in Table 5, along with the maximum CPU utilization of 8.6%. Table 6 shows the memory usage in the real-time environment. The results were obtained for a maximum load of 100 users by uniformly incrementing 10 users at every iteration from the minimum scale of 10 users. The threshold

Table 3 System memory and user configuration of public cloud

Public cloud provider   Total memory   Maximum users   Maximum CPU utilization
Salesforce.com          4.5 GB         400             7.8%

Table 4 Performance analysis values for a public cloud of range 100–400 users

Public cloud provider   No. of users   Average memory (kB)   Free memory (kB)
Salesforce.com          100            1,664,785.04          4,085,410.96
                        200            1,774,656.92          3,975,539.08
                        300            1,810,418.2           3,958,945.12
                        400            1,844,219.13          3,920,352.36

Table 5 System memory and user configuration of private cloud

Private cloud provider   Total memory   Number of users   Maximum CPU utilization
CogCloud                 4 GB           100               8.6%


Table 6 Performance analysis values for a private cloud

Private cloud provider   No. of users   Average memory (kB)   Free memory (kB)
CogCloud                 10             1,745,098.4           2,297,073.6
                         20             1,730,010.6           2,312,161.4
                         30             1,743,689.6           2,298,482.4
                         40             1,814,401.5           2,227,770.5
                         50             1,797,536.8           2,244,635.2
                         60             1,814,077.13          2,228,094.86
                         70             1,699,625.94          2,342,546.05
                         80             1,811,712.4           2,230,459.6
                         90             1,823,686.88          2,218,485.11
                         100            1,823,685.55          2,215,485.11

memory value is calculated, and it is found that linearity is achieved within the thresholds at the various user increments. Using these thresholds, the report is generated with a graphical representation relating the memory consumed to the amount of free memory for a preset total memory.

6 Conclusion and Future Enhancements
Performance testing of web applications, in its simplest form, is primarily designed to provide response statistics. When testing is done in a cloud, performance is a key aspect because the massive infrastructure makes it difficult to analyze parameters accurately. The proposed tool helps in fetching and analyzing the performance parameters of a cloud server accurately. The tool was evaluated in both public and private cloud setups for 10 to 1000 users. Dynamic memory usage values are fetched and analyzed, and a report is generated. The proposed tool can be further extended with features such as mobile crowd sensing, which helps lower the load barriers, and automation of parameter analysis from a mobile device.

References
1. Elaine J. Weyuker, Filippos I. Vokolos, "Experience with Performance Testing of Software Systems: Issues, an Approach and Case Study", IEEE Transactions on Software Engineering, 26, (2000).
2. A. Vanitha Katherine, K. Alagarsamy, "Software Testing in Cloud Platform: A Survey", International Journal of Computer Applications (0975-8887), 46, 6, 21–25, (2012).
3. Eljona Proko, Ilia Ninka, "Analysis and Strategy for the Performance Testing in Cloud Computing", Global Journal of Computer Science and Technology: Cloud and Distributed, 10, (2012).
4. Jerry Gao, Xiaoying Bai, Wei-Tek Tsai, "Cloud Testing – Issues, Challenges, Needs and Practice", Software Engineering: An International Journal (SEIJ), 1, 1, 513–520, (2011).
5. Tamanna Siddiqui, Riaz Ahmad, "A Review on Software Testing Approaches for Cloud Applications", Perspectives in Science, 8, 689–691, (2016).
6. Mounya Smara, Makhlouf Aliouat, Al-Sakib Khan Pathan, Zibouda Aliouat, "Acceptance Test for Fault Detection in Component-Based Cloud Computing and Systems", Future Generation Computer Systems, (2016).
7. S. Nachiyappan, S. Justus, "Cloud Testing Tools and Its Challenges: A Comparative Study", 2nd International Symposium on Big Data and Cloud, 482–489, (2015).
8. HP LoadRunner – market leading performance testing software, http://www8.hp.com/in/en/software-solutions/loadrunner-load-testing/
9. Maximize your site's performance with Blitz, https://www.blitz.io/
10. BlazeMeter: Continuous Performance Testing for DevOps, http://blazemeter.com/
11. Priyanka Chawla, Inderveer Chana, Ajay Rana, "Cloud-Based Automatic Test Data Generation Framework", Journal of Computer and System Sciences, 82, 712–738, (2016).
12. Rijwan Khan, Mohd Amjad, "Performance Testing (Load) of Web Applications Based on Test Case Management", Perspectives in Science, 8, 355–357, (2016).

Analysis of Regularized Echo State Networks on the Impact of Air Pollutants on Human Health
Lilian N. Araujo, Jônatas T. Belotti, Thiago Antonini Alves, Yara de Souza Tadano, Flavio Trojan, and Hugo Siqueira

1 Introduction
The World Health Organization reports that more than 6 million deaths are related to air pollution [1]. Therefore, the development of new research addressing air pollution and relating it to disease is mandatory [2]. Studies on health risks caused by air pollution are generally carried out using statistical regression techniques. However, if the database does not contain all the data sequentially, these models cannot achieve satisfactory performance due to the inherent process of determining the free coefficients [3]. On the other hand, Artificial Neural Networks (ANN) can overcome these problems because they are universal approximators. This can lead to good results, which makes them a promising tool in this context. Echo State Networks (ESN) have been applied in several forecasting studies, including atmospheric pollution [4–6]. However, with regard to the evaluation of the impact of air pollution on human health, their use is still not widespread [7]. In light of the above, ESN were applied with and without the regularization coefficient, using two reservoir proposals, from Jaeger [8] and from Ozturk et al. [10], to assess the impact of atmospheric pollution on human health. As a case study, we considered the impact of particulate matter with aerodynamic

L. N. Araujo Federal University of Technology—Parana (UTFPR), Ponta Grossa, Brazil Federal Institute of Parana (IFPR), Palmas, PR, Brazil e-mail: [email protected] J. T. Belotti · T. Antonini Alves () · Y. de Souza Tadano · F. Trojan · H. Siqueira Federal University of Technology—Parana (UTFPR), Ponta Grossa, Brazil e-mail: [email protected]; [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_33


diameter less than 10 μm (PM10) and meteorological variables (temperature and humidity) on the number of hospital admissions due to respiratory diseases in the city of São Paulo/SP, Brazil.

2 Echo State Networks (ESN)
Echo State Networks (ESN) are architectures of Recurrent Neural Networks (RNN). They were proposed by Jaeger [8] to treat nonlinear problems and time series forecasting [6]. They are recurrent structures since they have feedback loops between the neurons in the hidden layer, which is called the dynamic reservoir [8]. The ESN structure is composed of an input layer, a dynamic reservoir, and an output layer. The main difference between the ESN and classic RNN models is that the weights of the dynamic reservoir are not tuned. The training process must find the coefficients of a linear combiner, used as the output layer, based on a least squares problem, considering a reference signal. The theoretical element that guarantees the convergence of the ESN and the presence of memory in an ESN is known as the echo state property [8]. Figure 1 shows the general structure of an ESN, in which the input vector $\mathbf{u}_n = [u_n, u_{n-1}, \ldots, u_{n-K+1}]^T$ is transmitted from the input layer $\mathbf{W}^{in}$ to the dynamic reservoir by means of nonlinear combinations. The echo state vector $\mathbf{x}_n$ is formed from the output of the input layer and the signal of the last echo state, multiplied by the matrix $\mathbf{W}$, which contains the recursion loops, and is calculated by (1):

$$\mathbf{x}_{n+1} = f\left(\mathbf{W}^{in}\mathbf{u}_{n+1} + \mathbf{W}\mathbf{x}_n\right), \qquad (1)$$

where $f(\cdot) = (f_1(\cdot), f_2(\cdot), \ldots, f_n(\cdot))$ are the activation functions of the hidden neurons. The output of the network is the sample $y_n$ defined by (2):

$$\mathbf{y}_{n+1} = f^{out}\left(\mathbf{W}^{out}\mathbf{x}_{n+1}\right), \qquad (2)$$

where $f^{out}$ are the activation functions of the output neurons. Finally, training this neural model reduces to finding the matrix $\mathbf{W}^{out}$, which can be done by applying the Moore–Penrose inverse operation, as in (3):

$$\mathbf{W}^{out} = \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{d}, \qquad (3)$$

where $\mathbf{X}$ is the matrix of echo states and $\mathbf{d}$ is the vector containing the desired output. However, a technique that can be used to increase the generalization capacity of the model is regularization. The strategy is to include a restriction term directly proportional to the Euclidean norm of the parameter vector in the MSE cost function. Therefore, the new expression to calculate the output layer is given by (4):

$$\mathbf{W}^{out} = \left(\frac{\mathbf{I}}{C} + \mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T\mathbf{d}, \qquad (4)$$

Fig. 1 The Echo State Network (ESN)

where $\mathbf{I}$ is the identity matrix and $C$ is the regularization coefficient. Monitoring the output weights $\mathbf{W}^{out}$ measures the quality of the training result. Large weights indicate that $\mathbf{W}^{out}$ amplifies the differences between the dimensions of $\mathbf{x}_n$, departing from the conditions under which the network was trained; extremely large $\mathbf{W}^{out}$ values can be an indication of a very sensitive and unstable solution [9]. In this work, we address two different designs of the dynamic reservoir. The first is from Jaeger [8], in which the author suggests the creation of a sparse reservoir. In this case, all values are null or equal to 0.4 or −0.4, with probabilities of 95%, 2.5% and 2.5%, respectively. The second approach was developed by Ozturk et al. [10]. In this case, the authors generated a rich repertoire of nonlinear dynamics by uniformly spreading the eigenvalues of the reservoir, in order to increase the entropy of the internal state vector.
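Equations (1)–(4) and Jaeger's sparse reservoir can be combined into a small numerical sketch. The toy data, the dimensions, the input scaling and the tanh/identity activations are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_res, T = 3, 30, 200                 # inputs, reservoir size, samples

# Jaeger-style sparse reservoir: entries 0.4 or -0.4 with probability 2.5% each.
W = rng.choice([0.0, 0.4, -0.4], size=(N_res, N_res), p=[0.95, 0.025, 0.025])
W_in = rng.uniform(-0.5, 0.5, size=(N_res, K))

U = rng.uniform(-1.0, 1.0, size=(T, K))  # toy input series
d = np.sin(U.sum(axis=1))                # toy desired output

# Collect echo states per Eq. (1): x_{n+1} = tanh(W_in u_{n+1} + W x_n).
x = np.zeros(N_res)
X = np.zeros((T, N_res))
for n in range(T):
    x = np.tanh(W_in @ U[n] + W @ x)
    X[n] = x

# Regularized readout per Eq. (4): W_out = (I/C + X^T X)^{-1} X^T d.
C = 2.0 ** 5
W_out = np.linalg.solve(np.eye(N_res) / C + X.T @ X, X.T @ d)
y = X @ W_out                            # linear output layer, Eq. (2)
mse = float(np.mean((d - y) ** 2))
```

Setting `C` very large recovers the unregularized least-squares solution of Eq. (3), while small `C` shrinks the output weights, which is exactly the sensitivity control discussed above.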

3 Case Study
The case study considered was to evaluate the performance of using the regularization coefficient in ESN applied to predict the number of hospital admissions due to respiratory diseases in São Paulo/SP, Brazil. For this purpose, two variations of the ESN reservoir were applied, those from Jaeger [8] and from Ozturk et al. [10]. According to the latest census, conducted by the Brazilian Institute of Geography and Statistics (IBGE) in 2010, the city of São Paulo had more than 11 million inhabitants, making it one of the largest cities in Latin America [11].


The prediction of hospital admissions was performed using as input variables 1096 daily samples collected between January 1, 2014 and December 31, 2016. The inputs are composed of the following fields:
• Date: date of the measurement day;
• PM10: concentration of particulate matter with aerodynamic diameter less than or equal to 10 μm;
• Temperature: average temperature recorded on the measurement day;
• Humidity: relative air humidity recorded on the measurement day;
• Day: day of the week (Sunday = 1, Monday = 2, . . . , Saturday = 7);
• Holiday: whether the day is a holiday (Yes = 1, No = 0);
• Number of hospitalizations: number of hospital admissions due to respiratory diseases on the day of measurement.
The PM10 concentration and climatic variables were obtained from air quality monitoring stations belonging to the Environmental Company of the State of São Paulo (CETESB). The number of hospitalizations for respiratory diseases was obtained from the Brazilian National Health System (DataSUS), following the International Classification of Diseases (ICD-10, J00–J99). The day-of-the-week and holiday fields were included because hospital admissions decrease during holidays and weekends, so including these variables helps the models' adjustment. The following performance metrics were calculated for the ESN: Mean Square Error (MSE), Mean Absolute Error (MAE) and Mean Absolute Percent Error (MAPE), given by Eqs. (5)–(7), respectively.

$$\mathrm{MSE} = \frac{1}{N}\sum_{t=1}^{N}\left(d_t - y_t\right)^2, \qquad (5)$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|d_t - y_t\right|, \qquad (6)$$

$$\mathrm{MAPE} = \frac{1}{N}\sum_{t=1}^{N}\left|\frac{d_t - y_t}{d_t}\right|, \qquad (7)$$

where $N$ is the number of samples in the set, $d_t$ is the desired output and $y_t$ is the output of the network. Among these, the main metric considered was the MSE, so that the best neural network is the one with the lowest MSE [12]. This is because this metric is the one whose value is reduced in the network optimization process. For the comparative study, four ESN were executed, two with the Regularization Coefficient (RC) and two without it. The database was divided into two sets: a training set containing 932 samples (85% of the dataset), from 01/01/2014 to 07/20/2016, and a test set from 07/21/2016 to 12/31/2016, containing 164 samples (15%).
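Eqs. (5)–(7) translate directly into code. This plain-Python sketch mirrors the formulas (note that the MAPE here is a fraction, multiplied by 100 when reported as a percentage, as in Table 1):

```python
def mse(d, y):
    """Mean Square Error, Eq. (5)."""
    return sum((dt - yt) ** 2 for dt, yt in zip(d, y)) / len(d)

def mae(d, y):
    """Mean Absolute Error, Eq. (6)."""
    return sum(abs(dt - yt) for dt, yt in zip(d, y)) / len(d)

def mape(d, y):
    """Mean Absolute Percent Error, Eq. (7), as a fraction."""
    return sum(abs((dt - yt) / dt) for dt, yt in zip(d, y)) / len(d)

# Toy desired outputs and network outputs.
d = [10.0, 20.0, 40.0]
y = [12.0, 18.0, 44.0]
```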

Analysis of Regularized Echo State Networks on the Impact of Air Pollutants. . .

361

The adjustment of $C$ follows the procedure described in [12]. In this case, 52 values were tested according to $\lambda = [-26, -25, \ldots, 24, 25]$, with $C = 2^{\lambda}$. It is important to mention that a validation set is usually defined to increase the generalization capability of the network. However, in preliminary tests we observed that the inclusion of this set did not improve the networks' performance; instead, we used the training set again. The health problems caused by the inhalation of pollutants do not necessarily occur on the same day as the exposure [3]. Hence, epidemiological studies usually investigate the impact of pollution up to 7 days after exposure [13]. Therefore, the present study made predictions for 0 to 7 days after exposure to the pollutant (called lag days), where 0 is the same day as the exposure. Regarding the number of neurons (NN), each neural network was tested from 5 to 100 neurons in increments of 5. For each number of neurons, 30 simulations were performed, and the one with the lowest MSE on the test set was selected. Table 1 shows the prediction results. The Friedman test was executed in order to verify whether the results obtained by the models are significantly different. We found p-values close to zero, indicating that changing the predictor leads to different results [14].
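The described search over $\lambda$ can be sketched as a simple grid. Here `evaluate_mse` stands in for training and testing an ESN at a given $C$ and is purely hypothetical; the quadratic toy objective just makes the sketch checkable.

```python
def best_regularization(evaluate_mse, lambdas=range(-26, 26)):
    """Grid-search C = 2**lam over the 52 values lam = -26, ..., 25,
    keeping the C with the lowest test-set MSE."""
    best_c, best_err = None, float("inf")
    for lam in lambdas:
        c = 2.0 ** lam
        err = evaluate_mse(c)
        if err < best_err:
            best_c, best_err = c, err
    return best_c, best_err

# Toy stand-in objective, minimized exactly at C = 2**3 = 8.
c, err = best_regularization(lambda c: (c - 8.0) ** 2)
```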

Table 1 Comparison of the predictions with and without the regularization coefficient

Lag days  ANN                          NN  MSE       MAE     MAPE (%)  RC
0         ESN Jaeger [8]               15  1751.992  31.273  41.08     –
0         ESN Ozturk et al. [10]       20  1717.016  31.167  40.77     –
0         ESN Jaeger [8] (RC)          25  1737.789  31.475  40.77     26
0         ESN Ozturk et al. [10] (RC)  35  1704.304  31.230  40.39     5
1         ESN Jaeger [8]               20  1993.260  34.580  45.23     –
1         ESN Ozturk et al. [10]       20  1920.340  33.412  44.38     –
1         ESN Jaeger [8] (RC)          20  1965.131  34.315  45.15     4
1         ESN Ozturk et al. [10] (RC)  80  1807.706  32.586  42.88     3
2         ESN Jaeger [8]               25  1964.157  34.147  44.93     –
2         ESN Ozturk et al. [10]       5   1948.163  34.592  45.31     –
2         ESN Jaeger [8] (RC)          25  1919.540  34.619  45.23     26
2         ESN Ozturk et al. [10] (RC)  30  1994.338  34.107  44.48     26
3         ESN Jaeger [8]               45  1980.691  33.528  44.47     –
3         ESN Ozturk et al. [10]       10  1912.533  34.191  45.57     –
3         ESN Jaeger [8] (RC)          85  1923.549  32.897  44.16     4
3         ESN Ozturk et al. [10] (RC)  5   1864.688  33.956  44.26     26
4         ESN Jaeger [8]               5   1943.833  33.765  45.54     –
4         ESN Ozturk et al. [10]       5   1803.500  32.991  43.85     –
4         ESN Jaeger [8] (RC)          20  1914.637  33.444  45.74     −4
4         ESN Ozturk et al. [10] (RC)  5   1912.468  33.412  45.53     −1
5         ESN Jaeger [8]               5   1992.897  35.186  45.95     –
5         ESN Ozturk et al. [10]       5   1927.205  34.025  44.48     –
5         ESN Jaeger [8] (RC)          5   1978.608  33.793  45.74     −2
5         ESN Ozturk et al. [10] (RC)  5   1934.771  33.734  45.13     −1
6         ESN Jaeger [8]               35  2105.801  34.657  46.11     –
6         ESN Ozturk et al. [10]       70  2029.169  33.524  45.04     –
6         ESN Jaeger [8] (RC)          50  2119.642  34.261  46.53     6
6         ESN Ozturk et al. [10] (RC)  5   1970.161  34.371  45.89     2
7         ESN Jaeger [8]               5   1976.348  33.208  44.07     –
7         ESN Ozturk et al. [10]       5   1971.901  32.983  44.38     –
7         ESN Jaeger [8] (RC)          10  1974.717  33.147  44.59     −1
7         ESN Ozturk et al. [10] (RC)  5   1954.852  32.893  44.38     26

Fig. 2 Boxplot of the results of the forecasts made for lag 0

According to Table 1, the models using RC obtained the best results in 75% of the experiments. It is also noted that the reservoir from Ozturk et al. [10] was better than Jaeger's [8] in 7 of 8 scenarios. The best prediction among all the lags was the one performed by the ESN from Ozturk et al. [10] with the regularization coefficient, for lag 0 and 35 neurons. Figure 2 shows the boxplot of the 30 executions obtained for lag 0. It can be seen that the ESN from Ozturk et al. [10] reached the best average MSE and leads to lower dispersions. Finally, Fig. 3 presents the comparison between the actual number of hospital admissions for respiratory diseases and the prediction performed by the ESN of Ozturk et al. [10] (RC) with 35 neurons for lag 0. The difference between the prediction and the real hospital admissions occurs because meteorological variables and PM10 concentration are not the only variables that lead to respiratory diseases.


Fig. 3 ESN Ozturk et al. [10] (RC) forecast for lag 0

4 Conclusion
The prediction of hospital admissions for respiratory diseases is commonly treated in the literature as a statistical regression problem. In this study, however, we applied Echo State Networks (ESN) to predict the number of hospital admissions in São Paulo/SP, Brazil, using particulate matter concentration and climate variables as input variables. For a better evaluation of the proposal, we analyzed two constructive aspects of the ESN: the reservoir designs from Jaeger [8] and from Ozturk et al. [10], using the regularization coefficient (RC). The computational results showed that the proposal of Ozturk et al. [10], together with regularization, achieved the best overall results. Future work can explore the use of other neural models and the application of databases from other important cities around the world.

References 1. World Health Organization – Regional office for Europe, http://www.euro.who.int/data/assets/ pdf_file/0019/331660/Evolution-air-quality.pdf 2. Bibi, H., Nutman, A., Shoseyov, D., Shalom, M., Peled, R., Kivity, S., Nutman, J.: Prediction of Emergency Department Visits for Respiratory Symptoms Using an Artificial Neural Network. Chest. 122, 1627–1632 (2002) 3. Tadano, Y.S., Siqueira, H.V., Alves, T.A.: Unorganized Machines to Predict Hospital Admissions for Respiratory Diseases, In: 3rd IEEE Latin American Conference on Computational Intelligence (LA-CCI), #16774805. IEEE Press, Cartagena (2016) 4. Araujo, L., Belotti, J.T., Antonini Alves, T., Tadano, Y.S., Siqueira, H.: Ensemble method based on Artificial Neural Networks to Estimate Air Pollution Health Risks. Environmental Modelling and Software. 123, 104567 (2020)


L. N. Araujo et al.

5. Polezer, G., Tadano, Y.S., Siqueira, H.V., Godoi, A.F., Yamamoto, C.I., de André, P.A., Pauliquevis, T., Andrade, M.F., Oliveira, A., Saldiva, P.H.N., Taylor, P.E., Godoi, R.H.M.: Assessing the Impact of PM2.5 on Respiratory Disease Using Artificial Neural Networks. Environmental Pollution. 235, 394–403 (2018)
6. Hung, M.D., Dung, N.T.: Application of Echo State Network for the Forecast of Air Quality. Vietnam Journal of Science and Technology. 54, 54–63 (2016)
7. Kassomenos, P., Petrakis, M., Sarigiannis, D., Gotti, A., Karakitsios, S.: Identifying the Contribution of Physical and Chemical Stressors to the Daily Number of Hospital Admissions Implementing an Artificial Neural Network Model. Air Quality, Atmosphere & Health. 4, 263–272 (2011)
8. Jaeger, H.: The "Echo State" Approach to Analysing and Training Recurrent Neural Networks – with an Erratum Note. Fraunhofer Institute for Autonomous Intelligent Systems, Sankt Augustin, Germany (2010)
9. Aswolinskiy, W., Reinhart, F., Steil, J.J.: Impact of Regularization on the Model Space for Time Series Classification. New Challenges in Neural Computation (NC2), 49–56 (2015)
10. Ozturk, M.C., Xu, D., Príncipe, J.C.: Analysis and Design of Echo State Networks. Neural Computation. 19, 111–138 (2007)
11. Brazilian Institute of Geography and Statistics (in Portuguese), https://cidades.ibge.gov.br/brasil/sp/sao-paulo/panorama
12. Siqueira, H., Boccato, L., Luna, I., Attux, R., Lyra Filho, C.: Performance Analysis of Unorganized Machines in Streamflow Forecasting of Brazilian Plants. Applied Soft Computing. 68, 494–506 (2018)
13. Tadano, Y.S., Ugaya, C.M.L., Franco, A.T.: Methodology to Assess Air Pollution Impact on Human Health Using the Generalized Linear Model with Poisson Regression. In: Khare, M. (ed.) Air Pollution – Monitoring, Modelling and Health, pp. 281–304. InTech, Rijeka (2012)
14. Luna, I., Ballini, R.: Top-Down Strategies Based on Adaptive Fuzzy Rule-Based Systems for Daily Time Series Forecasting. International Journal of Forecasting. 27, 708–724 (2011)

Detection of Cancer by Biosensor Through Optical Lithography

K. Kalyan Babu

1 Introduction

• Biosensors [i] are devices that play a key role in biomedical instrumentation. Their main purpose is to detect disease-causing pathogens in the human body.
• A biosensor is a combination of a ligand and a transducer.
• The ligand is the biological recognition element, and the transducer converts the biological signal into an electrical signal. This electrical signal is a current in the range of milliamperes, microamperes, nanoamperes, picoamperes or femtoamperes.
• The output of an amperometric biosensor is a current, the output of a potentiometric biosensor is a voltage, and the output of a conductometric biosensor is a thermal voltage.
• DCCV [j] refers to Direct Current Cyclic Voltammetry, invented by Graham Devis in 1984 to measure bioelectric potentials in the human body.
• The nanomaterial graphene [a] is used in this work to detect cancer.
• The experimentation involves the DCCV technique, which is a direct measure of the current for the cancer in terms of the oxygen output.
• The oxygen gas evolved is expressed in terms of current.

K. K. Babu () ECE Department, GITAM Deemed to Be University, Visakhapatnam, AP, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_34


Fig. 1 Flow diagram of an amperometric biosensor


[Fig. 1 blocks: Biosensor → Ligand + Transducer → System with Electrocom software → Detection of cancer pathogen and plotting of the graphs using Origin software]

1.1 Figure 1

The diagram shows the core of this research. The biosensor used is an amperometric biosensor [i], an electrochemical device that converts a biological signal into a current signal. The cancer (carcinoma) sample is fed as input in the form of a blood sample. The ligand used is the glucose oxidase (GODx) chemical, which acts as a catalyst and speeds up the reaction between the blood sample and the ligand [b]. The system runs the Electrocom software, which starts working when the substrate appears in the blood solution [1–3] and gives the readings of the biosensor output. From these points, the graphs are plotted using the Origin 17 software. The GODx ligand is very cost effective, although delivery takes about 6 months after placing the order [4–7]; it is manufactured in Japan [8].

1.2 Formulae [i]

The main equation used in this work is the Michaelis–Menten scheme, which is given by

E + S → ES → E + P


E stands for enzyme, S for substrate, ES for the enzyme–substrate complex and P for the product. The enzyme is the ligand, the GODx chemical. All enzymes are proteins, but not all proteins are enzymes. Proteins play a key role in building the healthy molecules of the human body [9]. DNA [a] (deoxyribonucleic acid) is the basic building block of humans, so together with DNA, proteins are the building blocks of the living cell [10]. As for the base, the construction uses a glass or silica substrate; glass is used in 99% of biosensors. The substrate is exposed to UV light to make the glass surface very tough, and it is moulded properly to carry the terminals of the biosensor. The terminals of the biosensor are the working electrode, the reference electrode and the counter electrode. The working electrode is made of Au (gold), glassy carbon (GC) or stainless steel. The reference electrode is made of Ag (silver) or AgCl (silver chloride). The counter electrode is made of platinum. As for the product, the output of the amperometric biosensor is a current, which is a direct conversion of the oxygen gas. The process is similar to photosynthesis in plants, where the input is light and the plants convert carbon dioxide into oxygen gas; in biosensors, however, the process is irreversible.
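The enzyme–substrate scheme above is usually paired with the Michaelis–Menten rate law, v = Vmax·[S]/(Km + [S]). A minimal sketch, with purely hypothetical Vmax and Km values for a glucose-oxidase electrode (not figures from this work):

```python
def michaelis_menten_rate(s, v_max, k_m):
    """Reaction velocity v = Vmax*[S] / (Km + [S]) for substrate concentration [S]."""
    return v_max * s / (k_m + s)

# Hypothetical values: Vmax = 10 (current units), Km = 5 mM.
# At [S] = Km the electrode sits at half saturation, v = Vmax/2.
v = michaelis_menten_rate(s=5.0, v_max=10.0, k_m=5.0)
```

At high substrate concentration the rate saturates toward Vmax, which is why amperometric readings plateau once the ligand is fully occupied.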

1.3 Table 1

Table 1 lists the various light inputs to the biosensor: UV (ultraviolet), IR (infrared), sunlight (normal light input), LASER (Light Amplification by Stimulated Emission of Radiation) and visible light, which does not affect the eyes. While using UV, IR, laser and sunlight, the experimentation is carried out with highly light-resistive spectacles.

1.4 Governing Equations [References—h, g]

The equation of the light emitted is given by

Table 1 Different light inputs to the biosensor

Sl no. | Input         | Mediator | Output            | Conclusion
01     | UV            | GODx     | Very high current | No cancer
02     | Infrared      | GODx     | High current      | No cancer
03     | Sunlight      | GODx     | Medium current    | Chances of cancer
04     | Laser         | GODx     | Low current       | Definite cancer
05     | Visible light | GODx     | Very low current  | Confirmed cancer


E = h · c/λ (1)

where h stands for Planck's constant, c for the velocity of light and λ for the wavelength of light. The above equation can be extended as

E = hc/λ + (1/2)mv² + mgh (2)

in the case of non-cancerous living cells, where m stands for the mass of the charge carriers, v for the velocity of the output carriers of the living cells, g for the acceleration due to gravity and h for the height of the median. Equation (1) alone applies in the case of cancerous living cells: cancer cells are directly transparent cells into which light can easily penetrate, and they carry no mass as output light. Because of this zero mass, there is no momentum term for cancerous living cells.
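As a quick numeric check of Eq. (1): at the 0.01 nm wavelength discussed in Sect. 2, the photon energy indeed lands on the order of femtojoules.

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Eq. (1): E = h*c / lambda, in joules."""
    return H * C / wavelength_m

# At 0.01 nm this is roughly 2e-14 J, i.e. tens of femtojoules,
# consistent with the cancerous-cell energies reported in Sect. 2.
E = photon_energy(0.01e-9)
```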

2 Simulation Results

Figure 2, for a cancerous living cell, plots energy versus wavelength. The energy for cancerous cells is of the order of femtojoules at a wavelength of 0.01 nm. Cancerous cells are very flexible and become loose with the loss of RBCs, WBCs and proteins. Figure 3 shows the simulation results for a non-cancerous cell, energy versus wavelength in nanometres. The energy is found to be of the order of nanojoules. This value arises from the potential and kinetic energy present in the living cell: a healthy living cell has a very good value of potential and kinetic energy along with good light-emission properties.

Fig. 2 Cancerous living cell


Fig. 3 Analysis of Wavelength Vs Energy

3 Conclusion

In this research paper the cancer disease has been traced through the light emission and absorption property. Light emission is found to be very good in healthy living cells at nanometre wavelengths. Cancerous cells show a smaller amount of emission and radiation, which yields a very low current because of their loosening grip on the human body.

Acknowledgement No humans/animals were involved in this work. I would like to thank the cancer-cell source, Sree Mahatma Gandhi Hospital for Cancer Research, MVP Colony, Visakhapatnam, AP, for providing the blood samples for my research work.

References

1. Simulations in Nano Biotechnology—Kilho Eom. Chapter 1, pp. 1–10: Modeling the Interface between Biological and Synthetic Components in Hybrid Nanosystems—Rogan Carr, Jeffrey Comer, and Aleksei Aksimentiev.
2. Exploring the Energy Landscape of Biopolymers Using Single-Molecule Force Spectroscopy and Molecular Simulations—Changbong Hyeon.
3. Nature's Flexible and Tough Armour: Geometric and Size Effects on Diatom-Inspired Nanoscale Glass—Andre P. Garcia, Dipanjan Sen, and Markus J. Buehler.
4. A New Nanobiotechnological Method for Cancer Treatment Using X-Ray Spectroscopy of Nanoparticles—Sultana N. Nahar, Anil K. Pradhan, and Maximiliano Montenegro.
5. Nano Sensors—Health Care, Defense and Industry—Lim. Protein Thin Films: Sensing Elements for Sensors—Lauro Pastorino and Svetlana Erokhina.
6. Bioinformatics, Nano-Sensors, and Emerging Innovations—Shoumen Palit and Austin Datta.
7. Optical Lithography—VLSI System Design—Pucknell, chapters 1–3, pp. 10–24.
8. Applied Physics Basics—Wikipedia.
9. Mathematical Modeling of Biosensors Based on an Array of Enzyme Microreactors—R. Baronas, chapter 1, pp. 1–10.
10. Direct Current Cyclic Voltammetry—Graham Devis patent.

Paradigms in Computer Vision: Biology Based Carbon Domain Postulates Nano Electronic Devices for Generation Next

Rajasekaran Ekambaram, Meenal Rajasekaran, and Indupriya Rajasekaran

1 Introduction

Examining the folding of proteins with different magnitudes of carbon value reveals a unifying spatial convenience in proteins existing in nature, despite differences in structure. The nature of the forces involved in the molecular conformations of proteins was identified some time ago by Kauzmann [1]. Hydrophobic forces, rather than electrostatic forces, play the dominant role in structure creation, stability and function [2]. Carbon understandably dominates the hydrophobic forces, and the distribution of carbon in different orders of sequence gives proteins their different natures. The dominant forces of attraction arise because proteins try to maintain a specific value of carbon along the sequence overall [3, 4]. Carbon appears to be related to the arrangement of the amino acids in the structure of a protein: how each amino acid folds is largely accountable to its carbon value. Improvement is therefore possible if the role of the carbon atoms in the 20 amino acids is considered carefully, rather than through a simple classification by polarity. A carbon compound can take part in magnetobiochemistry by producing a biochemical action when in contact with another magnetic carbon compound. If this magnetobiochemistry holds, scientists could bring it to a simple platform for large-scale production of transformers at

R. Ekambaram () Department of Chemistry, V.S.B. Engineering College, Karur, India M. Rajasekaran Department of Electrical and Electronics Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India I. Rajasekaran Institute of Fundamental Medicine and Biology, Kazan Federal University, Kazan, Tatarstan, Russia © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_35



the interface of the folded protein. Scientists suggest that the mechanism of nano-fibre folding relies on the transfer of electrons from the magnetic substrate to the carbon compound. Studying and capturing this phenomenon of protein folding, one could argue that magnetobiochemistry becomes a reality of the modern world of transformers, for the production of silicon-replacing nanochips for computers. It is widely accepted that graphite and other forms of carbon can have ferromagnetic properties, but the effects are very weak. After measuring the magnetobiochemistry of biomolecules, one could bring many applications to the computer industry in the years to come. Notably, Coey and coworkers [5] were able to measure this tiny magnetic moment by applying a magnetic force microscope to a nanotube: it is 0.1 Bohr magnetons per carbon at room temperature, compared with 2.2 Bohr magnetons for iron. This magnetic property of carbon opens up a new era of science and an avenue for the silicon-replacing chips of future computers. The nature and extent of the carbon distribution in protein structures give a clue to how proteins fold, which is investigated here. The crystal structures of globular proteins are taken and analyzed with the question: how does the carbon value vary around the individual atoms in each protein structure?

2 Methodology

2.1 Three-Dimensional CARd Analysis

The protein structures are retrieved from the Internet. The globular amphipathic domains (GAD) are computed using the CARd3D program, which has performed well for the last 5 years [6]; details were recorded in PubMed of NIH some time earlier. Slight modifications were carried out to capture the overall carbon-optimised domain (OCOD), which is equivalent to the GAD. The modification includes average values for a variety of diameters up to 45 Å. Here only a short range of diameters (7–18 Å) is taken, which captures the phenomena of the internal COD (ICOD) and the external COD; the OCOD is the combination of the two. In particular, only 16 Å is used here, which captures the OCOD phenomenon. An ICOD domain is one where the participation of neighbouring residues is dominant, whereas this participation is low in the non-domain portion, which is usually the carbon-high hydrophobic portion.
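The core COD idea, measuring how carbon-rich the neighbourhood of each atom is within a sphere of a given diameter, can be sketched as follows. This is not the CARd3D program itself, only a toy reimplementation; the coordinates and element labels are hypothetical.

```python
import numpy as np

def local_carbon_fraction(coords, elements, diameter=16.0):
    """For each atom, the fraction of carbon atoms among all atoms lying
    within a sphere of the given diameter (angstroms) centred on it."""
    coords = np.asarray(coords, float)
    is_c = np.array([e == "C" for e in elements], dtype=float)
    radius = diameter / 2.0
    # Pairwise distances (each atom counts itself as a neighbour).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    within = (d <= radius).astype(float)
    return within @ is_c / within.sum(axis=1)

# Toy example: two carbons and one nitrogen, all within range of each other,
# so every atom sees a local carbon fraction of 2/3.
frac = local_carbon_fraction([[0, 0, 0], [1, 0, 0], [0, 1, 0]], ["C", "C", "N"])
```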

2.2 Bond-of-All Analysis

The lengths of all possible bonds are calculated and compared. The reference protein is analysed on the basis that backbone bonds are common to all residues and respond adequately to the adduct-forming ICOD regions. The bond lengths of some common backbone bonds (such as N–CA, CA–HA, CA–C, C=O and C–N) are measured, and the standard deviations are captured using the standard bond-length values of normal peptides. Only the CA–C bond is taken here as the measurable bond, as it responds well to domain and non-domain regions. It is computed by dividing the measured value by a factor of 1. A PERL program named BondAll.pl, which accommodates this capture of the standard deviation of the bond lengths, was written and used to record the bond-length variations.
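The comparison performed by the author's BondAll.pl can be sketched (here in Python rather than Perl) as the relative deviation of measured CA–C lengths from a reference value; the 1.52 Å reference and the sample lengths below are illustrative assumptions, not values from the paper.

```python
REF_CA_C = 1.52  # assumed reference CA-C bond length in angstroms

def relative_deviation(measured):
    """Per-bond relative deviation of measured CA-C lengths from the reference."""
    return [(m - REF_CA_C) / REF_CA_C for m in measured]

# Shortened bonds (negative deviation) would flag electron-sharing domain
# regions; lengthened bonds (positive deviation) would flag non-domain regions.
devs = relative_deviation([1.49, 1.52, 1.55])
```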

3 Results

3.1 CARd Analysis

The OCOD value computed using the CARd3D program is shown in Fig. 1, a plot of OCOD value versus amino acid number; the plot is shown only for the computed diameter of 16 Å. Electrical conductors and least conductors appear as possible electron-flow (high-OCOD) regions and least-conducting (near-zero-OCOD) regions. The data shown are for the B chain of human hemoglobin. There are electron-flow regions (domains) with a carbon value of 0.3145, while the other, lower-OCOD regions are not electrical conductors. Lower-conducting and electron-flow regions based on carbon values are observed with the help of the bond lengths shown in Fig. 2; note the bond-length alterations at the carbon domains that conduct current, arranged according to the OCOD value as in Fig. 1. A difference in the domain near residues 60–65 is due to the presence of the porphyrin ring adjacent to the functional unit, which alters the bond lengths; otherwise everything demonstrated here agrees with the expected values. Overall, the bond lengths are reduced by the presence of the OCOD and vice versa, which is noteworthy for the role of the carbon value in domain formation. The presence of ICODs based on carbon value

Fig. 1 OCOD of individual amino acids within a surrounding of 16 Å diameter. Electrical conductors and insulators demonstrate that there are possible electron-flow (high-OCOD) regions and low-conducting (near-zero-OCOD) regions. Data shown are for the B chain of human hemoglobin


Fig. 2 Conducting and lower-conducting regions in the B chain of human hemoglobin based on relative bond lengths. Note the bond-length alterations at the electrical domains that conduct current

that function as independent units sharing electrons among themselves, due to which the bond lengths are reduced, is captured here. Wherever there is a reduction of the relative bond lengths, it appears to be a domain; this is classic evidence for the presence of domains in protein regions. Otherwise it is a carbon-rich region where the least conductance takes place. By the principle of electromagnetic radii, the carbon electrons alter the bond lengths. Analysis of a venomous protein (1UMR) reveals that the carbon distribution is not uniform everywhere in the protein, which is why the non-domain comes apart; the temperature factor also confirms this result. This non-domain factor can be better utilized for the transformer that needs to be fabricated for the silicon-free chips of future computers. Fabrication based on this groundwork of pattern formation, the role of binding, the role of carbon [7–9] and the role of uracil [10, 11] in embedding large hydrophobic residues [12] in viral proteins can be better exploited for the fabrication technology of transformers for computers and allied fields. For charged side-chain atoms, when competition arises between the ICOD and the electrostatic interaction, the latter dominates, so the ICOD may be weaker than the electrostatic force; but the overall cohesive force arising from the ICOD in a protein is probably strong. The larger the protein, the more cohesive force it develops, making it stronger than a small one of the same form. The side chains of the aromatic and charged residues are stabilized by a cohesive force that depends upon the neighbouring residues. The analysis confirms that a metal ion encourages the bound charged residues, including their side chains, to take part in the ICOD. Most of the time the side chains of the charged residues try to interact with an external counter-ion, leading them away from the main chain, and thus they do not contribute to the ICOD. COD analysis, using a diameter of 16 Å, of the structure of SOD, which holds Cu and Zn ions, confirms that the metal ions alter the binding affinity considerably: they take part in quenching the carbon disorder and encourage the ICOD. Both Cu and Zn alter the ICOD at the active


site significantly, which again can change the binding capability. This can be better understood at the molecular level for implanting biochips in future robots that recognize human values.

4 Discussion

To measure the deformation of amino acids in a protein structure, there exists a scale: the ICOD value. The higher the value, the tougher the pattern. A refractory protein is one that retains strength and stability, electrical conductance, thermal toughness, chemical inertness, etc. One can classify patterns based on refractoriness, that is, the OCOD value (at least three ranges). "Wear" is the term used to distinguish the burial or exposure of patterns with a specific OCOD value: high-OCOD patterns are exposed, while the disordered OCOD regions tend to be buried inside. Porosity can also be introduced based on the number of COD atoms within a given diameter: the higher the number of atoms, the denser the pattern and the richer it is in domain formation. Domains are n-type semiconductors, which are low-thermal-effect portions. The amino acids involved in an OCOD form a compact structure; the atoms face each other in ways that meet the optimum value of carbon over the whole structure, an arrangement that shares electrons among them and thereby reduces the bond lengths in the structure. Like a fermion, it behaves as an n-type semiconductor, either conducting or non-conducting, but under biological conditions it allows electrons to pass through, following the Fermi paradox and the Sommerfeld model of the atom. The adduct structure so formed behaves like an aromatic compound that works as an independent, water-repelling unit. The domain structure of the OCOD comprises amino acids with the above character in varying lengths, made from the carbon and polar atoms of those amino acids organised in some form. Some OCOD parts are small, with possibly complex and delicate geometry, while others are large, in the form of OCOD blocks. A variety of OCOD-forming compositions has evolved in a variety of shapes and forms, adapted to a broad range of applications in proteins. Wherever they occur in a protein, they provide water repulsion and conductivity; that is, the linings of amino acids in an OCOD build a water shield and electrical conductance. As different compositions of OCODs yield different performance and properties, many OCODs have evolved in different shapes and sizes, and a large variety of OCOD patterns is available. Only the small ones attract other interacting molecules; larger OCODs are not important, and a small stretch of five amino acids will do for most of the various interactions. Light and small amino acid series for OCODs are available, and one can use them to redevelop proteins with higher stability and activity. It is a mark of honor for the biological scientists and technologists involved in protein science for a long time.


4.1 Semiconductor and Carbon Value

The percentage of carbon (31.45%) allows a stretch to be semi-conducting. Hydrophilic portions are conductors, and hydrophobic stretches are the least conducting. The valence-band-to-conduction-band transition is the principle behind the hydrophobic-to-COD transition. It is an n-type rather than a p-type conduction band: in n-type material the gap between the valence band and the Fermi level is larger, while the gap between the conduction band and the Fermi level is small. Based on this principle of conduction and the carbon value (31.45%), one can design nano devices for core processors and memory devices; one can even assume a 5 nm node to end Moore's law. The challenge is that the carbon-rich portions are attenuators compared with normal ones, so they can be made to work as switches. The plan is to make the system function electronically, turning on and off in a reliable and operable way (just like silicon). Coming up with molecules able to do this trick is the domain of the carbon value. The fabrication of such a nanotechnology device, an electromagnetic electrical domain that works based on the nature of the carbon value and acts as an electronic device, can be undertaken using viral synthesis. The argument is that quantum effects could eventually set a fundamental limit on silicon devices; tunnelling through silicon devices, however, can be cleared by the carbon value, while at the same time current flow may be allowed without any barrier by the electromagnetic electrical domain. The segregation of hydrophobic groups in a protein can be exploited to divide two electrodes and to avoid the tunnel effect. The protein is thus graded as an electronic material, and using such advances one can develop nanowires a couple of nanometres long. The time to develop such technological advancement is limited by the complexity of the working atmosphere; advanced laboratories, however, can perform well in this line of research, in which India is lacking. One can think of a larger spectrum of work based on the previous research; it does not make sense to do it all in one go, so the decision was to work on the much larger spectrum of research that works better. One day or another, nano devices are going to appear for a better world, and larger wires will improve the quality of the research atmosphere. A perfect fit for nanowires is the structure of the electromagnetic electrical domains: a few nanometres in diameter, they can pass electrons speeding through a molecular circuit. The problem is that the associated carbon-rich regions adjacent to a domain cause disarray; making them into an array form is the classical art of living beings. Fabrication technology has improved a lot: although fabricating complex structures by circuits as in nature seems impossible here, building structures in nano form is now feasible at an affordable price, and turning these simple devices into complex logic circuits and integrating them into computers becomes straightforward. One can fabricate new technology-based devices for betterment in the world of living beings. It is hoped that the nano-derived electronics based on the carbon value will inherently be as defect-free as silicon devices. At the scale of individual nano molecules, the chemistry is strong here, providing enough evidence for the research


work here. Given the statistical fluctuations sometimes seen in other chemical devices, it is hoped to work well as the technology advances. But it is here that the content we have identified is the most important breakthrough, and the predictions are already at the developmental stage of research. A molecular computer based on electromagnetic electrical domains is hoped to work well in the biology of living systems.

4.2 Nanotechnology for a World of Signaling

By releasing these carbon-based domains with such properties, one might develop tiny sensors and actuators that detect and react intelligently to biological and chemical cues. Biochips incorporating sensors and actuators made of electromagnetic electrical domains are hoped to serve the needs of the body and respond by discharging an appropriate agent. For the pioneers of molecular electronics, the true potential of the field could be realized by bringing the world of nanoelectronics together with the world of biomolecules; molecular electronics could be the puzzle piece that finally bridges the material gap between living beings and robots. The intercalation of the porphyrin structure in the inner portion of the protein bridges the gap between the ordered portion and the defective elements: the intercalated macromolecule resists the flow of current, showing the least conductance in an electron-passing portion, a miraculous stopping of the current flow. Molecules with desired properties based on the carbon value are hoped to help make molecular computing an achievable research target. Short devices are yet to be realized; nanoscopic research plans are to come fast in the future world order of nanotechnology, with program devices that can arrange amphipathic domains continuously for the production of biochips.

5 Conclusion

The demonstrations here help in building the nanochip for a better tomorrow. A functional system-development procedure is given for better computer design with a forceful nano device, called the electromagnetic nano world, that attenuates the biochip. The synthesis of molecules of interest for the future technology world, with emphasis on the macromolecular attenuation and conductance present in the protein system demonstrated here, should be given importance for future technological advancement in nanochips. Research to be taken up as an extension of this work will be decided soon, and work needs to be continued across this research ambience. Progress has been made to continue research in this formulation of topics, for advancement in the bluechip companies that will change the world of a better tomorrow.


References

1. Kauzmann, W.: The Three Dimensional Structures of Proteins. Biophys. J. 4, 43–54 (1964)
2. Rajasekaran, E., Jayaram, B., Honig, B.: Electrostatic Interactions in Aliphatic Dicarboxylic Acids: A Computational Route to the Determination of pKa Shifts. J. Amer. Chem. Soc. 116, 8238 (1994)
3. Rajasekaran, E.: CARd: Carbon Distribution Analysis Program for Protein Sequences. Bioinformation 8(11), 508–512 (2012)
4. Rajasekaran, E., Vijayasarathy, M.: CARBANA: Carbon Analysis Program for Protein Sequences. Bioinformation 5(10), 455–457 (2011)
5. Coey, J.M.D., Venkatesan, M., Fitzgerald, C.B., Douvalis, A.P., Sanders, I.S.: Ferromagnetism of a Graphite Nodule from the Canyon Diablo Meteorite. Nature 420, 156–159 (2002)
6. Rajasekaran, E.: Domains Based in Carbon Dictate Here the Possible Arrangement of All Chemistry for Biology. Int. J. Mol. Biol.-Open Access 3(5), 240–243 (2018)
7. Akila, K., Rajendran, K., Rajasekaran, E.: Carbon Distribution to Toxic Effect of Toxin Proteins. Bioinformation 8(15), 720–721 (2012)
8. Akila, K., Sneha, N., Rajasekaran, E.: Study on Carbon Distribution at Protein Regions of Disorder. Int. J. Biosci. Biochem. Bioinfo. 2(2), 58–60 (2012)
9. Amri, E., Mamboya, A.F., Nsimama, P.D., Rajasekaran, E.: Role of Carbon in Crystal Structures of Wild-Type and Mutated Form of Dihydrofolate Reductase-Thymidylate Synthase of P. falciparum. Int. J. Appl. Biol. Pharm. Tech. 3(3), 1–6 (2012)
10. Rajasekaran, E., Jacob, A., Heese, K.: Magnitude of Thymine in Different Frames of Messenger RNAs. Int. J. Bioinfo. Res. 4(3), 273–275 (2012)
11. Anandagopu, P., Suhanya, R., Jayaraj, V., Rajasekaran, E.: Role of Thymine in Protein Coding Frames of mRNA Sequences. Bioinformation 2(7), 304–307 (2008)
12. Jayaraj, V., Suhanya, R., Vijayasarathy, M., Anandagopu, P., Rajasekaran, E.: Role of Large Hydrophobic Residues in Proteins. Bioinformation 3(9), 409–412 (2009)

A Secure Authenticated Bio-cryptosystem Using Face Attribute Based on Fuzzy Extractor

S. Aanjanadevi, V. Palanisamy, S. Aanjankumar, and S. Poonkuntran

1 Introduction

1.1 Biometrics

Biometrics is a recent technology used to identify and authorise a person by their physical (face, fingerprint, iris, retina, etc.) and behavioural (voice, gesture, keystroke, body odour, etc.) traits. A biometric system consists of two phases, as follows.

(a) Enrollment: In the enrollment phase the user enrols their biometric template; the biometric features are captured using the required tools and the captured template is stored in the database for the verification process.
(b) Verification: In the verification process the present biometric sample is compared with the image stored in the dataset; if a match is found between the current and the stored image the user is authorized, otherwise the person is not authorized.
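The two phases can be sketched as follows. The feature vectors, the Euclidean-distance matcher and the acceptance threshold are illustrative assumptions, not the fuzzy-extractor scheme proposed in this paper.

```python
import numpy as np

def enroll(feature_vector, database, user_id):
    """Enrollment phase: store the captured biometric template in the database."""
    database[user_id] = np.asarray(feature_vector, float)

def verify(feature_vector, database, user_id, threshold=0.5):
    """Verification phase: compare the live sample against the stored template."""
    if user_id not in database:
        return False
    dist = np.linalg.norm(np.asarray(feature_vector, float) - database[user_id])
    return bool(dist <= threshold)

db = {}
enroll([0.1, 0.9, 0.3], db, "alice")
ok = verify([0.12, 0.88, 0.31], db, "alice")   # small distance -> accepted
```

The threshold trades off false accepts against false rejects, which is the central tuning problem of any verification stage.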

S. Aanjanadevi () · V. Palanisamy Department of Computer Applications, Alagappa University, Karaikudi, India S. Aanjankumar Department of Computer Science and Engineering, Sri Raaja Raajan College of Engineering and Technology, Amaravathiputhur, India S. Poonkuntran Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_36


(c) Issues in biometrics: The following issues must be considered in biometric systems:
• Non-revocability
• Compromise of privacy

Three types of privacy compromise are to be considered:
• Compromise of biometric data privacy
• Compromise of information privacy
• Compromise of identity privacy

1.2 Cryptography

Cryptography is the algorithmic process of converting a plain (readable) message into a cipher (unreadable) message. Encryption and decryption are the two main techniques used in cryptography.
• Encryption: the process of changing the normal, readable form of a message into an unreadable form.
• Decryption: the process of transforming the unreadable form of a message back into readable form, i.e., the reverse of encryption.

There are two primary approaches to encryption:
1. Symmetric key encryption: the same key is used for encryption and decryption.
2. Asymmetric key (public key) encryption: two different keys are used for encryption and decryption.

(a) Drawbacks of cryptography:
1. Difficult to access even for legitimate users.
2. High availability.
3. Selective access control.
4. Threats that emerge from poor system design.
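The symmetric-key idea can be illustrated with a deliberately simplified sketch: a toy XOR stream cipher, not a real cipher, and the key-derivation step here is an assumption for illustration. The same key, and indeed the same function, both encrypts and decrypts.

```python
import hashlib
from itertools import cycle

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeating keystream from the key (toy construction)."""
    digest = hashlib.sha256(key).digest()
    return bytes(b for b, _ in zip(cycle(digest), range(length)))

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream: applying it twice recovers the input,
    so the same function serves as both encrypt and decrypt."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

ciphertext = xor_cipher(b"shared-key", b"plain readable message")
assert xor_cipher(b"shared-key", ciphertext) == b"plain readable message"
```

A real system would use an established algorithm (e.g., AES for symmetric, RSA or ECC for asymmetric encryption); the sketch only demonstrates the single-shared-key property that distinguishes the symmetric approach.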

1.3 Biometric Cryptosystem

Biometrics and cryptography are two different techniques that are combined to provide more security to the entire system. Fortunately, the two share characteristics that allow an improved, highly secure system to be built. In a bio-cryptosystem, the user's identity is used to generate cryptographic keys for encryption and decryption without compromising the user's privacy.


[Figure: Biometric cryptosystem]

(a) Advantages of a biometric cryptographic system:
1. Enhanced security and privacy.
2. No maintenance of a biometric template/image.
3. Multiple/cancellable/revocable identifiers.
4. Enhanced security in authentication.
5. Improved security of confidential data and communication.
6. Better public confidence, acceptance, data protection, and compliance with privacy law.

2 Related Works

A face detection system for color images uses fuzzy logic to find fine edges in still images by template matching and a skin-color detector. The face image is detected from still images using various edge-detection methods, but fuzzy logic provides better performance and efficiency in detecting the face template in still images [1, 2]. Real-time facial expression recognition with a fuzzy emotion model recognizes the face image in video; a trace method is used for face localization, and the fuzzy emotion model supports recognition of four basic expressions of the face template, combining two or more models to improve the security and confidentiality of the system [3, 4]. A crypto-biometric system using an iris template with a fuzzy extractor allows the user to recover an already stored key from their own biometric feature, providing more security through selection of the parameters' properties and an efficiency analysis. A face recognition system built on a neuro-fuzzy inference system delivers improved precision compared to various pre-processing techniques for biometric feature recognition [5]. A fuzzy vault for face-based cryptographic key generation consists of 2D quantization of the distance vector between face attributes and a pair of random vectors. Here a windowing process is involved to permit variation in the biometric signals; by this process the


second factor, biometrics, and keys can be varied, and zero error rates can be achieved. The facts obtained from biometric attributes are used as cryptographic keys without deviation, providing more security to the entire system when transmitting data through a network [6–8]. A technique has been proposed for mapping the extracted face attributes to bits, the resulting bit streams being used as keys for the encryption and decryption process. A fuzzy commitment-style scheme integrates biometric templates with randomly created keys by an XOR operation; here ECC (Error Correcting Code) methods are used to tolerate differences between biometric attributes. A homogeneous procedure imposed on the iris recognition problem produces exact security, but it is not clear that it yields exactly the same number of key bits as are extracted from an individual's face image [9, 10]. Strong keys can be generated from biometric and other noisy data, providing a secure representation used to reproduce error-prone biometric information without incurring the security threats that storage entails.

3 Existing System

The existing system is designed to provide security through a fuzzy vault mechanism, generating an encryption key from changeable biometric signals. That system used only two-dimensional biometric templates to make cryptographic keys, measuring a 2D quantization distance vector of the iris template to minimize variation in the biometric signals. It also employs a two-factor method that evaluates the biometric templates with user-dependent and user-independent factors to achieve a zero Equal Error Rate, but it does not consider FAR and FRR, and it focuses only on providing security, not on minimizing time and storage requirements. The main drawback of the system is that it takes more processing time, which leads to computational complexity.

4 Proposed System

To overcome the problems of the previous biometric encryption system, the proposed system, "A Secure Authenticated Bio-cryptosystem Using Face Attribute Based on Fuzzy Extractor", is introduced. In this system the biometric face template is used to generate a cryptographic key via biohashing. Two phases are used: enrolment (to enrol the person) and verification (to authenticate the valid user). In the enrolment phase the face image of an individual is captured, facial features are extracted with a fuzzy extractor, and the resulting key serves as the cryptographic key for encryption and decryption of data. After feature extraction, the biohashing mechanism generates a bio code, consisting of bits derived from the biometric template, which is stored in the database. Once the enrolment phase has captured the face and extracted and stored the bits, the process moves to the verification phase. In verification the face image is captured again, facial features are


extracted, and a key is generated; this key then undergoes the matching process. In matching, the key generated at verification time is compared with the key stored in the database at enrolment time. If the two keys coincide, the system authenticates the user and allows access to the key for further processing; otherwise access is denied. Once the key has been accessed by the authenticated user, it can be used to encrypt data transmitted over the internet. This process enhances data security and makes the system more robust and confidential. The system overcomes the problem of computational complexity, and strong keys

[Figure: Enrolment phase: face template → fuzzy extractor → key; bio hashing → bio code → database. Verification phase: face template → fuzzy extractor → key; bio hashing → bio code → match (yes/no) against the stored bio code → authenticate valid user for access, or do not authenticate invalid user; the authenticated key then encodes and decodes the data.]

Fig. 1 Architecture flow diagram of proposed system


are generated from the biometric template that cannot be stolen by third parties other than the user, thus improving the privacy of data on the internet. FRR and FAR are obtained (Fig. 1).
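A minimal sketch of the biohashing step described above, with every detail (feature values, 64-bit code length, the random-projection construction, and the Hamming threshold) assumed for illustration; a real system would pair this with a fuzzy extractor's error correction before deriving the key.

```python
import hashlib
import random

def biohash(features, seed, n_bits=64):
    """Project the feature vector onto user-specific random directions
    and threshold at zero to obtain a bio code (bit string)."""
    rng = random.Random(seed)  # a user-specific token seeds the projections
    code = []
    for _ in range(n_bits):
        direction = [rng.gauss(0, 1) for _ in features]
        dot = sum(f * d for f, d in zip(features, direction))
        code.append(1 if dot >= 0 else 0)
    return code

def matches(code_a, code_b, max_hamming=16):
    """Accept if the codes differ in at most max_hamming bits."""
    return sum(a != b for a, b in zip(code_a, code_b)) <= max_hamming

def key_from_biocode(code):
    """Hash the (error-corrected) bio code into a fixed-length key."""
    return hashlib.sha256(bytes(code)).hexdigest()

enrolled = biohash([0.9, -1.2, 0.3, 2.1], seed=42)
# a slightly noisy re-capture of the same face yields a nearby code
probe = biohash([0.88, -1.25, 0.31, 2.05], seed=42)
assert matches(enrolled, probe)
```

The design choice this illustrates is the one the paper relies on: small variations in the captured features flip few projection signs, so genuine captures land within a small Hamming distance of the enrolled bio code, while the hash of the stabilized code serves as the cryptographic key.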

5 Conclusion

This work forms a methodological structure to overcome the problems faced by the existing system. The proposed system uses a fuzzy extractor tool and the biohashing method to generate strong bio-crypto keys for the cryptographic process (encryption and decryption) from facial biometric features, which undergo the major biometric processes of enrollment and verification. Face-template keys are used to encode and decode data, enhancing the privacy, confidentiality, robustness, and security of the system, and reducing computational complexity while achieving FAR and FRR.

Acknowledgement I would like to express my appreciation to Prof. and HOD Dr. V. Palanisamy for his guidance during the research work; without his valuable assistance this research could not have been completed. I am also indebted to Alagappa University, Department of Computer Applications, where I carried out this research, to my brother S. Aanjankumar, and to my family and the staff for their support and cooperation. Finally, I thank my management for their valuable assistance during the course of my study.

References

1. Rajandeep Kaur, Vijay Dhir: Fuzzy Logic Based Novel Method of Face Detection. International Journal of Latest Research in Science and Technology, Vol. 2, Issue 1, 558–566 (2013)
2. Natascha Esau, Evgenija Wetzel, Lisa Kleinjohann, Bernd Kleinjohann: Real-Time Facial Expression Recognition Using a Fuzzy Emotion Model. IEEE (2009)
3. Alvarez Marino, R., Hernandez Alvarez, F., Hernandez Encinas, L.: A crypto-biometric scheme based on iris-templates with fuzzy extractors. Elsevier Inc. (2012)
4. Shweta Mehta, Shailender Gupta, Bharat Bhushan, Nagpal, C.K.: Face Recognition Using Neuro Fuzzy Inference System. International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 7, 331–344 (2014)
5. Yongjin Wang, Plataniotis, K.N.: Fuzzy Vault for Face Based Cryptographic Key Generation. IEEE Biometrics Symposium (2007)
6. Bodo, A.: Method for producing a digital signature with aid of biometric feature. German Patent DE 42–43 (1994)
7. Chang, Y.J., Zhang, W., Chen, T.: Biometrics-based cryptographic key generation. Proc. of IEEE Int. Conf. on Multimedia and Expo, 2203–2206 (2004)
8. Juels, A., Sudan, M.: A fuzzy vault scheme. Proc. of IEEE Int. Symp. on Info. Theory, 408 (2002)
9. Hao, F., Anderson, R., Daugman, J.: Combining crypto with biometrics effectively. IEEE Trans. on Computers, Vol. 55, 1081–1088 (2006)
10. Yevgeniy Dodis, Leonid Reyzin, Adam Smith: Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data. International Association for Cryptologic Research, 523–540 (2004)

Implementation of Scan Logic and Pattern Generation for RTL Design R. Madhura and M. J. Shantiprasad

1 Introduction

Testing a sequential circuit is more complicated than testing a combinational circuit. The aim of scan design is to make this simpler by adding test logic. A design with internal scan logic operates in one of two modes: scan shift mode and scan capture mode. In scan shift mode (shift mode), the entire scan chain operates as a long shift register: test vector values are shifted in from the external scan-in input, and the contents of the scan flip-flops are shifted out on the external scan-out output. In scan capture mode (capture mode), all flip-flops in a scan chain operate as standard flip-flops. Flip-flops are preloaded in scan shift mode to control the inputs to the combinational logic islands. Once capture mode is selected, a functional test vector is applied to the primary inputs, and the steady-state output response of the combinational logic is captured into the flip-flops by pulsing the system clock. In effect, ATPG can treat scan flip-flop outputs as primary inputs and scan flip-flop inputs as primary outputs. In addition to giving access to the internal states of a circuit, the scan chain may also be considered a means of partitioning a circuit into less complex combinational logic blocks. Thus, the scan design technique enormously simplifies automatic test pattern generation. Generally, ATPG has two phases:

R. Madhura () Department of Electronics and Communication, Dayananda Sagar College of Engineering, Bangalore, India M. J. Shantiprasad Department of Electronics and Communication, Cambridge Institute of Technology, Bangalore, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_37


1. Random generation of patterns to cover easy-to-detect faults efficiently. ATPG fault-simulates each random pattern to determine which faults it detects and calculates the overall fault coverage achieved with each new pattern.
2. Deterministic generation of patterns to cover specific stuck-at faults that are otherwise hard to detect. In this second phase, ATPG analyzes the remaining undetected faults, one at a time, and tries to derive patterns to detect each of them. The D-algorithm is one of the most popular test generation algorithms.

Although random generation of patterns covers easy-to-detect faults efficiently, it often includes patterns that do not contribute to improving overall fault coverage. If the same fault can be detected by test patterns A and B, and pattern A cannot detect any other fault, then pattern A is unnecessary. Thus ATPG can compact the set of test patterns, so that fewer patterns are needed to achieve the same fault coverage and the test time is reduced. Here we have used deterministic ATPG to generate patterns.
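The compaction idea (drop patterns whose detected faults are already covered by others) can be sketched as a simple greedy pass; the pattern and fault names below are invented, mirroring the A/B example in the text.

```python
def compact(patterns):
    """patterns: dict mapping pattern name -> set of faults it detects.
    Greedily keep the pattern covering the most still-undetected faults."""
    remaining = set().union(*patterns.values())
    kept = []
    while remaining:
        # pick the pattern detecting the most still-undetected faults
        best = max(patterns, key=lambda p: len(patterns[p] & remaining))
        if not patterns[best] & remaining:
            break
        kept.append(best)
        remaining -= patterns[best]
    return kept

patterns = {
    "A": {"f1"},         # only detects f1, which B also detects
    "B": {"f1", "f2"},
    "C": {"f3"},
}
# A is unnecessary: B and C together already reach full fault coverage
assert set(compact(patterns)) == {"B", "C"}
```

Commercial ATPG compaction is far more sophisticated (static and dynamic compaction during generation), but the effect is the same: fewer patterns for the same fault coverage, hence shorter test time.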

2 Existing Work

A test pattern generator (TPG) for scan-based BIST with low hardware overhead, achieving high fault coverage and a reduction in transitions, is proposed in [1]. A security-oriented scan architecture for achieving high test quality is proposed in [2]. In [3], dynamic power dissipation and test data volume are reduced using the Illinois scan architecture. Reduction of test data stored on chip using deterministic ATPG is proposed in [4]. Reduction of power consumption using a pixel-block scanning architecture is proposed in [5]. In [6], a low-power scan operation using the EDT technique is proposed to reduce switching activity in all phases of scan test (loading, capture, and unloading) in a test compression environment. In [7], weighted pseudorandom test pattern generation and reseeding are used for low-power scan-based BIST. The paper [8] minimizes toggle activity while scanning test patterns by using a bit-inversion technique, and offers full diagnosability of single stuck-at faults along the scan chain path. In [9], switching activity in scan cells is reduced by disabling idle flip-flops during scanning using a control signal, thereby reducing excessive power consumption. By parallel loading of test vectors into scan chains, a new scan architecture reduces test application time [10]. Scan insertion and compression are implemented on a top module in 28 nm design technology in [11]. By analyzing the RTL description for functional information, new ways of inserting scan structures are generated to improve delay fault coverage or the volume of test data [12].


3 Proposed Method

In this work we have taken three functional RTL designs, with top module names UART, DmaWr, and Idma, which are frequently used in industry.

Design 1 has one clock and 36 memory elements. The proposed UART design is composed of a baud rate generator, a receiver module, and a transmitter module, with UART as the top module name. Of its 36 memory elements, 4 are scan cells and the remaining 32 are non-scan cells. There is no controllability of clocks and resets for the 32 flops, so the design has 32 S1 violations and 32 D5 violations. Test logic insertion is used to resolve this issue: the command "set test logic" inserts test logic to control the set, reset, and clock so that they become scannable when scan chains are inserted. The command used to fix the issue is "set test logic -set on -reset on -clock on". We then generated patterns, which gives the stuck-at coverage, and analyzed the faults for suitable controllability or observability.

Design 2 has one clock and 40 memory elements. The proposed DMA design uses DmaWr as the top module name. Of its 40 memory elements, 36 are scan cells and the remaining 4 are non-scan cells. There is no controllability of clocks and resets for the four flops, so the design has 4 S1 violations and 4 D5 violations. Test logic insertion resolves this issue with the same command, "set test logic -set on -reset on -clock on". In this case we adopted the EDT technique to generate EDT patterns, comprising decompressor and compactor logic, which gives the stuck-at coverage and analyzes the faults with improved compression of scan test data and test time.

Design 3 has 3 clocks and 130 memory elements (one clock is positive-edge, one is negative-edge, and one uses both edges), where we inserted scan logic in 4 steps: (a) use default scan insertion; (b) mix clock domains and insert scan; (c) mix edges and clock domains and insert scan; (d) use a single test clock and insert 4 scan chains. Because there are 3 clocks with clock mixing and edge mixing, lockup latches must be added to avoid hold violations. The tool has no scan-equivalent cell for a latch, which is therefore treated as a non-scan model; such latches reduce the test coverage of the design. This is resolved with the command that adds lockup latches, "add cell models TLATX! -type DLAT G D -no invert Q", and the command for clock mixing and edge mixing, "insert test logic -number 4 -clock merge -edge merge".


4 DFT Flow for Scan Insertion and Compression

RTL design: Register Transfer Level design is a method in which data is transferred from one register to another; a digital design is constructed from combinational and sequential circuits in an HDL such as Verilog or VHDL, which can model logical and hardware operation (Fig. 1).

Gate-level netlist: the gate-level netlist is obtained after synthesis of the RTL design. Here we used a Synopsys tool to synthesize the RTL design into a gate-level netlist.

Scan insertion: first invoke the tool using tessent_shell. The synthesized netlist, library commands, and tool commands are read as inputs for scan insertion. The synthesized netlist has D flip-flops mapped to the DFF standard cell, and all combinational gates are checked against standard cells; the cell library has all combinational and sequential logic gates. Observe DFF flops with cells and SDFF with cells. Tool commands: read the netlist with Verilog, read the library with the .mdt extension, define clocks and resets, and add tool commands. The outputs obtained are the scan-inserted netlist, the ATPG dofile and ATPG test procedure file, the scan def, and reports.

Scan compression: this comprises two steps. The first step generates the EDT RTL using the Tessent tool, with the scan-inserted netlist, library files, and dofiles as inputs, producing EDT RTL files and the EDT and bypass dofiles (including test procedure files), then uses a dc_shell script to run synthesis with the Synopsys tool.

Fig. 1 DFT working flow diagram


Secondly, create a synthesis directory and copy dc_script.tcl into it; with the EDT RTL files, library files, and dc_script.tcl as inputs, produce the EDT netlist and scan-inserted netlist as outputs. Invoke dc_shell and then source dc_script.tcl.

ATPG: ATPG reports stuck-at faults and log files and generates serial and parallel patterns; the dofiles, scan-inserted netlist, and test procedure files are fed as inputs. Test coverage and fault coverage are calculated with the formulas below:

(a) Test coverage = detected faults / total number of testable faults
(b) Fault coverage = detected faults / total number of faults in the design
(c) Total number of testable faults = total number of faults − (RE + UU + BL + TI)
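Plugging counts into these formulas gives the two coverage figures directly; the fault counts below are invented for illustration.

```python
def coverage(detected, total_faults, untestable):
    """untestable = RE + UU + BL + TI, as in formula (c)."""
    testable = total_faults - untestable      # formula (c)
    test_cov = detected / testable            # formula (a)
    fault_cov = detected / total_faults       # formula (b)
    return test_cov, fault_cov

# hypothetical design: 2050 faults, 50 untestable, 1970 detected
test_cov, fault_cov = coverage(detected=1970, total_faults=2050, untestable=50)
assert round(test_cov * 100, 2) == 98.5
assert round(fault_cov * 100, 2) == 96.1
```

Note that test coverage is always at least as high as fault coverage, since the untestable faults are excluded from its denominator.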

5 Results and Discussion

Results for scan insertion follow.

5.1 Design 1 Has 32 S1 Violations and 32 D5 Violations

Figure 2 shows the design with 32 S1 violations and 32 D5 violations. Figure 3 shows that the violations are cleared by adding test logic with the command "set test logic -set on -reset on -clock on", where mux logic is inserted to gain control over the clocks and resets by replacing each storage element with a scan cell when the scan_en pin

Fig. 2 S1 and D5 violations


Fig. 3 Insertion of test logic

Fig. 4 S1 and D5 violations

is made high. The design then switches from normal mode to test mode, where the clock and reset pins are controlled using the test_en pin: when test_en = 1 the design operates in test mode, otherwise in normal mode.

5.2 Design 2 Has 4 S1 Violations and 4 D5 Violations

Figure 4 shows the design with 4 S1 violations and 4 D5 violations, as the Q output is directly connected to the reset pin (SN) of the next register, over which we have no control. Figure 5 shows that the violations are cleared by adding test logic with the command "set test logic -set on -reset on -clock on", so the Q output is connected to a mux to gain control over the clock and reset pins.


Fig. 5 Insertion of test logic

Fig. 6 Decompressor logic

5.3 Scan Compression for Design 2

Tessent TestKompress is the tool that generates the decompressor and compactor logic at the RTL level. The architecture consists of a decompressor and a compactor embedded on the chip: the decompressor drives the scan chain inputs, and the compactor connects to the scan chain outputs when the Tessent TestKompress logic is inserted. In this design we have 8 scan chains and 2 EDT channels, as indicated in Figs. 6, 7, and 8. With the EDT logic we are able to reduce test application time and test data volume, as fewer clock cycles are required, and hence the test cost; here we used 5 clock cycles and achieved a compression ratio of 4 from the formula


Fig. 7 Compactor logic

Fig. 8 Controller logic

Fig. 9 Bypass logic

Compression ratio = Internal scan chains/external channels
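With the 8 internal scan chains and 2 EDT channels of this design, the formula reproduces the ratio of 4 stated above.

```python
def compression_ratio(internal_chains, edt_channels):
    """Compression ratio = internal scan chains / external EDT channels."""
    return internal_chains / edt_channels

# Design 2 as described: 8 scan chains driven through 2 EDT channels
assert compression_ratio(8, 2) == 4
```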

In bypass mode, the design operates with the functional clock rather than the EDT clock, as it does not undergo scan. Cells that are not scannable pass through this mode, as indicated in Fig. 9.


Fig. 10 Statistics report for Design 1

5.4 Result for Obtaining the Coverage Report for Designs 1 and 2

The statistics report of stuck-at faults for Design 1 is shown in Fig. 10, and for Design 2 in Fig. 11. Table 1 gives the test coverage, fault coverage, and ATPG effectiveness for Design 1 and Design 2.


Fig. 11 Statistics report for Design 2

Table 1 Coverage report

Design                 | Test coverage | Fault coverage | ATPG effectiveness
UART                   | 98.5%         | 96.25%         | 99.55%
DmaWr with EDT logic   | 97.14%        | 89.86%         | 100%

5.5 Results for Scan Insertion for Design 3

Figures 12 and 13 show the 3 clocks (Proclk, st81, FastClk) with clock mixing and edge mixing, where we added lockup latches to avoid hold violations, since the tool has no scan-equivalent cell for a latch and treats it as a non-scan model. This is resolved with the command that adds lockup latches, "add cell models TLATX! -type DLAT G D -no invert Q", and the command for clock mixing, edge mixing, and insertion of 4 scan chains, "insert test logic -number 4 -clock merge -edge merge".


Fig. 12 Indicates the addition of lock up latch to avoid hold violations

Fig. 13 Insertion of 4 scan chain

6 Conclusion

From the experimental results, we found that better controllability and observability of each node of the design can be achieved, and DRC violations can be analyzed with scan insertion using the Tessent Scan tool. We were also able to adopt the EDT technique using the Tessent TestKompress tool to reduce test data, test time, and test cost by using fewer clock cycles compared with the design without EDT logic, since it handles a large number of scan chains with a small number of EDT channels and achieves better fault and test coverage. Mentor Graphics and Synopsys tools are used throughout this work. In future work we will try to implement better compression techniques to obtain more coverage with less test time and power.


References

1. Seongmoon Wang, "Generation of Low Power Dissipation and High Fault Coverage Patterns for Scan-Based BIST", ITC International Test Conference.
2. Marcin Gomulkiewicz, Maciej Nikodem, Tadeusz Tomczak, "Low-cost and Universal Secure Scan: a Design-for-Test Architecture for Crypto Chips", Proceedings of the International Conference on Dependability of Computer Systems, IEEE, 0-7695-2565-2, 2006.
3. Anshuman Chandra, Felix Ng and Rohit Kapur, "Low Power Illinois Scan Architecture for Simultaneous Power and Test Data Volume Reduction", Proc. Design Automation Conference, pp. 166–169, Las Vegas, June 2001.
4. Dong Xiang, Yang Zhao, Krishnendu Chakrabarty, and Hideo Fujiwara, "A Reconfigurable Scan Architecture With Weighted Scan-Enable Signals for Deterministic BIST", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 27, No. 6, June 2008.
5. Keita Okazaki, Naomi Nagaoka, Tatsuya Sugahara, Tetsushi Koide and Hans Jürgen Mattausch, "Low Power and Area Efficient Image Segmentation VLSI Architecture Using 2-Dimensional Pixel-Block Scanning", International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS 2008), Bangkok, Thailand.
6. Dariusz Czysz, Mark Kassab, Xijiang Lin, Grzegorz Mrugalski, Janusz Rajski, and Jerzy Tyszer, "Low-Power Scan Operation in Test Compression Environment", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 28, No. 11, November 2009.
7. Dong Xiang, Xiaoqing Wen, and Laung-Terng Wang, "Low-Power Scan-Based Built-In Self-Test Based on Weighted Pseudorandom Test Pattern Generation and Reseeding", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 25, No. 3, March 2017.
8. Binod Kumar, Boda Nehru, Brajesh Pandey and Virendra Singh, "A Technique for Low Power, Stuck-at Fault Diagnosable and Reconfigurable Scan Architecture", IEEE Transactions, 978-1-5090-1422-4, 2016.
9. G. Rajesh Kumar, K. Babulu, "A Novel Architecture for Scan Cell in Low Power Test Circuitry", 2nd International Conference on Nanomaterials and Technologies (CNT), 2014.
10. Ramkumar Balasubramanian, "Parallel Load Scan Architecture—Impact on Test Application Time", International Journal of Research in Electronics and Communication Technology (IJRECT), Vol. 3, Issue 4, Oct.–Dec. 2016.
11. Mohan P.V.S., Rajanna K.M., "Implementation of Scan Insertion and Compression for 28nm Design Technology", IJEDR, Vol. 5, Issue 3, ISSN: 2321–9939, 2017.
12. Ho Fai Ko B., "Functional Scan Design at RTL", 2004.

Optimization Load Balancing over Imbalance Datacenter Topology K. Siva Tharun and K. Kottilingam

1 Introduction

With the rapid growth of the network, the data carried by the internet has become more and more complex, and the traffic load ever higher. Online data centres provide all varieties of services to people, so they play a vital role in today's networks. Earlier work proposed important architectures for the internet data centre, but these architectures suffer from the classic tree problem: the closer a link is to the root of the tree, the higher the network traffic it must carry. A newer approach to this problem uses more than two paths to distribute internet data flows. How to achieve load balancing in the data center is a main issue; minimizing network latency while maximizing throughput is a central problem in data centre networks. This work considers two kinds of approaches, hardware and software. The hardware approach proposes richer network topologies, but is costly, so the software approach applies load balancing methods on the interconnected topology for higher bandwidth utilization in the current network. This requires upgrading the current approaches to scheduling network traffic flows.

Load balancing approaches can be divided into two classes, static and dynamic [1]. In a data centre the network flows change all the time, so the link costs of the network change as well, and the system should learn and update them automatically. Static load balancing approaches have no ability to obtain real-time load information or to decrease the time of dispatching data flows among nodes. So dynamic

K. Siva Tharun · K. Kottilingam () Department of Information Technology, SRM Institute of Science and Technology, Kattankulathur, India e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_38


Fig. 1 SDN network (a controller connected to OpenFlow switches, each serving PCs)

Fig. 2 Fat tree topology

load balancing can overcome the issue, at the cost of more work for monitoring network statistics and scheduling data flows (Fig. 1). We propose the DCLB (Dynamic sub-topology Load Balancing) algorithm over a fat-tree topology. Figure 2 shows a fat-tree topology [2, 3]: a typical k = 4 fat-tree network has three layers and consists of (k/2)² core-layer switches and k pods, each pod containing k k-port switches. In one pod, each ToR (Top-of-Rack) switch is connected to every aggregation switch and to (k/2) hosts, and each aggregation switch connects to (k/2) switches on the core layer. This work presents a new DCLB approach for the fat-tree: the algorithm uses the current link costs and switch status to modify IP headers with the shortest routing path over the connected fat-


tree topology. This approach uses the current link cost and switch status to modify the Dijkstra algorithm [4]. Finally, this work uses packet loss, bandwidth, and jitter to evaluate the emulation.

SDN (Software Defined Networking) is a new network methodology whose key technology is OpenFlow. OpenFlow is a protocol used in SDN, and SDN separates the control plane from the data plane. Unlike traditional switches, the rules that OpenFlow switches use to forward data packets are determined by the controller, the central component in an OpenFlow network. The controller can easily control the switches, using any programming language to generate the forwarding rules.
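As a quick check of the fat-tree sizing given above, the element counts follow directly from k; the small sketch below encodes those formulas (function name assumed).

```python
def fat_tree_counts(k):
    """Element counts for a k-ary fat-tree (k even)."""
    core = (k // 2) ** 2          # core-layer switches
    pods = k
    switches_per_pod = k          # k/2 aggregation + k/2 ToR switches
    hosts = k * (k // 2) ** 2     # (k/2) hosts per ToR, (k/2) ToRs per pod, k pods
    return core, pods, switches_per_pod, hosts

# the typical k = 4 network described in the text
assert fat_tree_counts(4) == (4, 4, 4, 16)
```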

2 Related Work 1. POX: POX [5] is very light weight openflow controller which was implemented in python. It’s a frame work for ferocious implementation and prototype of topology control software using python. And at the bottom level this is widely used infrastructure to implement the open flow controller. The infrastructure for interacting with open flow migrations, it’s on the basis for with part of this outgoing work, which gets to make the emerging disciplinary of SDN in this work. This can be utilized for exploring and prototype with SDN debug, topology virtualization, design of controller and type of programming. POX needs java in this work we use triangular POX controller to fulfill the algorithms. 2. Mininet [6] is a topology for SDN which is having the capability of topology of virtual peers, switches, open flow controllers and links. Mininet hosts run standard operating system software and the belonging switches support open flow for huge feasible created routing SDN. Mininet supports R&D, testing, debugging, testing and other any tasks that could give benefit from creating the complete experimental topology over the system. The Mininet provides a small and less expensive topology testing bed for implementation of the open flow application, which can leads multiple concurrent development environments independently over the same network (topology). This supports even all levels of regression tests, which can be reused and flexible for packaged. Mininet enables more complex topology testing without the need of the physical network. It includes the CLI which is topology aware and openflow aware, for application debugging or to run the topology wide tests. It provides a straight forward and powerful java API for topology API for network framing and also for experimental purpose. Mininet provides an easy way to achieve the current triangular system behaviour and capacity, and to do experiment with networks. 
Mininet networks run real software, including standard OS network


K. Siva Tharun and K. Kottilingam

applications and the real kernel network stack, with kernel extensions where required for compatibility. Because of this, Mininet is used to build the fat-tree network topology for this work. Code developed and tested in this work with Mininet — an OpenFlow controller, a modified switch, or a host — can be migrated to a real system with minimal changes, and can then be used for real-world testing, deployment, and evaluation. 3. Open vSwitch [7] is a production-quality, open-source implementation of a distributed virtual multilayer switch. Its main purpose is to provide a switching stack for hardware virtualization environments while supporting the protocols and standards used in conventional networks. In this work we use Open vSwitch for switching in the fat-tree network. In [8], the authors proposed a solution (named DLB) in which only the current link load is considered and the rest of the path is ignored. As a result, the first link's load may be low while a downstream link's load is high. We therefore propose a new approach, DCLB, to improve on this. DCLB uses a modified Dijkstra algorithm to produce shortest paths, and it better resolves the issues present in DLB. In [9], the authors proposed GLB (Global Load Balancing). This approach has high time complexity: it finds all links between two hosts and computes the cost of each of these links to obtain the least-loaded path, and the controller then installs the route on the switches. Because the algorithm considers all link loads, the controller must store information for many links, and a partial path within a main path is stored repeatedly. Maintaining this link information costs the controller considerable resources. The DCLB approach avoids this issue.
It will only preserve the topology information rather than the duplicate link information in a route.
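As a concrete illustration of the fat-tree structure Mininet is asked to emulate, the following Python sketch enumerates the core, aggregation, and edge switches and their links for a k-ary fat tree. This is not the authors' code; the names and the k-ary layout follow standard fat-tree conventions, and the resulting lists could feed a Mininet Topo subclass.

```python
def fat_tree(k):
    """Enumerate the switches and links of a k-ary fat tree (k even):
    (k/2)^2 core switches, and k pods each holding k/2 aggregation
    and k/2 edge (top-of-rack) switches."""
    half = k // 2
    core = [f"c{i}" for i in range(half * half)]
    agg = [f"a{p}_{i}" for p in range(k) for i in range(half)]
    edge = [f"e{p}_{i}" for p in range(k) for i in range(half)]
    links = []
    for p in range(k):
        # Every edge switch connects to every aggregation switch in its pod.
        for i in range(half):
            for j in range(half):
                links.append((f"e{p}_{i}", f"a{p}_{j}"))
        # Aggregation switch i of each pod connects to core switches
        # i*half .. i*half + half - 1.
        for i in range(half):
            for j in range(half):
                links.append((f"a{p}_{i}", f"c{i * half + j}"))
    return core, agg, edge, links
```

For k = 4 this yields the classic 20-switch topology (4 core, 8 aggregation, 8 edge) with 32 inter-switch links; in Mininet these lists would translate into addSwitch/addLink calls.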

3 System and Algorithm Design DCLB, the new load-balancing solution for the fat-tree network in this work, consists of three modules: the link-cost module, the shortest-path module, and the sub-topology discovery module. The approach dynamically updates the link costs of the entire topology. When a new routing path is needed, the sub-topology discovery module extracts information from the full topology to build a sub-topology, and the modified Dijkstra algorithm is then run on this sub-topology to discover the shortest path with the lowest link cost.

Optimization Load Balancing over Imbalance Datacenter Topology


3.1 The Link-Cost Module In the DCLB approach, the link-cost module dynamically updates the link costs of the full topology. The topology itself is obtained during OpenFlow network initialization. The module runs on a timer to monitor topology changes, and it recomputes the link costs once every threshold interval to keep the link loads up to date.

Algorithm 1: The Link-Cost Module
1. IF full topology is ready THEN
2.   updatecostlink()
3. ELSE
4.   pause
5. END IF
6. updatecostlink():
7. IF flowstatsreceived THEN
8.   cost = (receivebytes - bytelasttime) / 4
9.   update()
10. END IF

Lines 1–5 of Algorithm 1 state that when the network topology is ready, the sub-function updatecostlink() is called; otherwise the algorithm pauses. When the flowstatsreceived event fires, the algorithm computes the link cost and calls update() to refresh the link costs of the entire network topology.
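A minimal Python sketch of the link-cost module, assuming a 4-second polling interval as in Algorithm 1 (the class and field names are illustrative, not the authors'):

```python
POLL_INTERVAL = 4  # seconds between flow-stats polls (the threshold time)

class LinkCostTracker:
    """Tracks per-link byte counters and derives a cost as the
    byte rate observed over the last polling interval."""

    def __init__(self):
        self.last_bytes = {}  # link -> byte counter at the previous poll
        self.cost = {}        # link -> bytes/second over the last interval

    def on_flow_stats(self, link, receive_bytes):
        # cost = (receivebytes - bytelasttime) / poll interval
        prev = self.last_bytes.get(link, receive_bytes)
        self.cost[link] = (receive_bytes - prev) / POLL_INTERVAL
        self.last_bytes[link] = receive_bytes
```

In a real controller, on_flow_stats would be driven by the flowstatsreceived event and the timer would trigger the periodic stats requests.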

3.2 Shortest-Path Module This module runs the enhanced Dijkstra algorithm on the sub-topology — constructed by the sub-topology discovery module — to produce the shortest path.



Algorithm 2: Shortest-Path Algorithm
Input: sub-topology G = (V, E), switch status w[v]
Output: shortest path (list of switches)
1. while G:
2.   u ← next link in G
3.   for v in G:
4.     IF d[v] > d[u] + w[v] THEN
5.       v joins shortest path
6.       d[v] ← d[u] + w[v]
7.     END IF

Lines 1–7 show part of the enhanced Dijkstra algorithm. The w[v] in line 4 denotes the bytes received on all ports of switch v within the threshold interval. When selecting the next switch on the path, the algorithm therefore considers not only the link cost but also the load status of the switches themselves, which makes the enhancement well suited to the DCLB algorithm.
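The enhanced relaxation rule — link cost plus the switch's own load w[v] — can be sketched in Python as follows. This is a hedged reconstruction of Algorithm 2, not the authors' implementation; the adjacency-list format and helper names are assumptions.

```python
import heapq

def modified_dijkstra(adj, w, src, dst):
    """Dijkstra variant that charges both the link cost and the
    next switch's own load w[v] when relaxing an edge.
    adj: node -> list of (neighbour, link_cost); w: node -> load."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, link_cost in adj.get(u, []):
            # Relaxation: d[v] > d[u] + link cost + w[v]
            nd = d + link_cost + w.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path from dst back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]
```

With equal link costs, the path through the less-loaded switch is preferred, which is exactly the behaviour the enhancement is meant to produce.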

3.3 The Sub-Topology Discovery Module The sub-topology discovery module is the most innovative and important component of the DCLB algorithm. The solution of [9] finds all paths between two hosts, but this has a drawback: as the network topology grows larger, the data structures storing all feasible paths grow with it (Fig. 3). In this work, we propose a new approach that avoids this issue. The sub-topology discovery module stores only the switches and closely
Fig. 3 Dynamic sub-topology load balancing

[Fig. 3 depicts a fat-tree topology: a core layer, an aggregation layer, and top-of-rack switches grouped into Pod1–Pod4.]



related links, which form the established paths between the two hosts. At network start-up, the module stores the full topology and then determines which layer each switch belongs to. Once this discovery is complete, the sub-topology is used when finding the shortest path between two hosts. The algorithm works from both ends — source and destination — climbing from lower-layer switches toward upper-layer switches. The module keeps the switches discovered during this process until the two ends reach the same layer.

Algorithm 3: The Sub-Topology Discovery Module
Input: FullTopologyInfo, src, dst
Output: SubTopologyInfo
1. SourceMACSwitch joins src
2. DestinationMACSwitch joins dst
3. SRC ← []
4. DST ← []
5. for switchx in src
6.   for switchy in topo
7.     IF switchxLayer > switchyLayer THEN
8.       switchy joins SubTopology
9.       switchy joins SRC
10.    END IF
11. for switchx in dst
12.   for switchy in topo
13.    IF switchxLayer > switchyLayer THEN
14.      switchy joins SubTopology
15.      switchy joins DST
16.    END IF

Lines 1–16 of the algorithm describe the overall execution flow. If the layer of the current switch on the source side equals the layer of the current switch on the destination side, the algorithm terminates. Otherwise both the source and destination sides continue climbing the fat-tree topology until they reach a common layer where the switches on both sides coincide, at which point the algorithm ends. Figure 3 illustrates the sub-topology: host1 sends traffic to host2, and the switches on the candidate routing paths of this traffic form the sub-topology extracted from the full topology.
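A possible Python sketch of the bidirectional climb, assuming an uplinks map from each switch to its upper-layer neighbours. This is one interpretation of Algorithm 3, not the authors' code.

```python
def sub_topology(uplinks, src_switch, dst_switch):
    """Climb from both ends via uplinks (switch -> set of
    upper-layer switches) until the two frontiers share a
    switch; every switch visited joins the sub-topology."""
    sub = {src_switch, dst_switch}
    src_front, dst_front = {src_switch}, {dst_switch}
    while not (src_front & dst_front):
        # Advance both frontiers one layer upward.
        src_front = set().union(*(uplinks.get(s, set()) for s in src_front))
        dst_front = set().union(*(uplinks.get(s, set()) for s in dst_front))
        sub |= src_front | dst_front
    return sub
```

For two hosts under the same aggregation switch the result is tiny; for hosts in different pods the climb continues to the core layer, so only the switches on candidate paths — never all paths in the network — are retained.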



4 OpenFlow Network Implementation To implement load balancing in the network [10], two important questions must be considered: how to design the system, and how to discover the shortest path with the least link cost so that flows can be dispatched across the network.

4.1 System Design In our implementation, when the system starts up, the approach first discovers the full network topology and stores all the topology information from the switches. Once the topology is discovered, the framework can emulate the fat-tree network. The framework uses a host1 ping host2 command to trigger discovery of the lowest-cost path. The path-discovery process first sends an ARP packet to learn the MAC address of the destination host; the approach then works from both ends (source and destination) in the fat-tree network. When both ends reach the same layer and the switches at both ends coincide, the sub-topology is constructed. Algorithm 2 then uses this sub-topology to find the lowest-cost path from source to destination, and the framework distributes the path to all relevant switches. A timer-based framework serves as the central monitor of all switch ports in the network; the timer uses a system cycle T set to the threshold time (4 s). Three data structures are important here. The full-topology structure stores the complete information of the network topology. The switch-layer structure records which layer each switch belongs to; it is initialized when Mininet brings up the full topology. Switches that connect directly to hosts are treated as first-layer switches, switches that connect to first-layer switches as second-layer switches, and so on. The switch-layer structure is used during the sub-topology discovery process. The sub-topology module is invoked while discovering the shortest path between hosts with the enhanced Dijkstra algorithm.
The discovered result is then stored in the sub-topology structure.

4.2 OpenFlow Flow Dispatch We use the POX controller as our OpenFlow controller. OpenFlow switches use flow tables to match each flow. The match fields can be the source and destination IP addresses, or any combination of



such fields. In this framework, we match flows on the source and destination MAC addresses. When a flow arrives at an OpenFlow switch port, the matching function runs. If the switch finds a match, it forwards the flow to the corresponding port. Otherwise, the switch wraps the packet in a Packet_in message and sends it to the POX controller. The controller unpacks the message to obtain the source and destination MAC addresses, then runs the algorithms to discover the shortest path with the lowest link cost. Finally, the controller sends OFP_FLOW_MOD messages, based on the discovered shortest path, to the relevant switches.
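The dispatch logic can be modelled with a toy flow table in Python; the MAC-pair keys and the controller callback below are illustrative assumptions, not POX's actual API.

```python
class OpenFlowSwitch:
    """Toy model of the dispatch logic: match a flow on
    (src MAC, dst MAC); on a table miss, the packet goes to the
    controller as a packet_in, and the returned port is cached
    as a flow entry (modelling OFP_FLOW_MOD)."""

    def __init__(self, controller):
        self.flow_table = {}      # (src_mac, dst_mac) -> output port
        self.controller = controller

    def handle(self, src_mac, dst_mac):
        key = (src_mac, dst_mac)
        if key not in self.flow_table:
            # Table miss: ask the controller, which computes a path
            # and returns the output port for this switch.
            port = self.controller(src_mac, dst_mac)
            self.flow_table[key] = port
        return self.flow_table[key]
```

The key point this models is that the (expensive) path computation runs only once per MAC pair; subsequent packets of the flow are forwarded purely from the flow table.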

4.3 Emulation Framework for Load Balancing Emulation environment: Mininet runs on a PC server with a 4-core CPU; it is the emulation platform used to construct the custom topology. POX runs on a PC with a 2-core CPU and is written in Python.

4.4 Design of the Datacenter The emulated datacenter network consists of 60 switches and 120 hosts, defined in Python in Mininet. The most difficult issue we faced was generating realistic traffic in the virtual network; the works [11, 12] proposed algorithms that can be used to address this.

4.5 Comparison of Algorithms In the emulation, this work compares the DCLB approach with the DLB approach, since the two are similar in many respects. The main difference is the path-selection strategy: DLB considers only the cost of the current link and ignores the cost of the links nearer the destination, whereas DCLB considers the full path cost and uses the enhanced Dijkstra algorithm to find the lowest-cost path. The emulation evaluates and compares the two approaches by actual bandwidth, packet loss, and jitter.



5 Result Analysis Bandwidth results: the bandwidth is measured on the Mininet server. We vary the offered traffic load from 120M to 170M and measure the actual transmission bandwidth of the DLB and DCLB approaches.

[Figure: bandwidth efficiency — real bandwidth (M/s) versus traffic bandwidth (M/s) for DCLB and DLB.]

The figure above plots the bandwidth efficiency. Packet loss: packet loss is measured on the UDP flows between client and server. Packet loss rises sharply beyond a traffic load of 70M: the loss rates of both DCLB and DLB increase with the growing packet transmission rate and congestion, and both become much higher. However, DCLB's loss rate remains consistently lower than DLB's.

[Figure: loss of packet rate — packet loss (%) versus traffic load for DCLB and DLB.]

6 Conclusion and Future Work In this work we propose a dynamic sub-topology load-balancing approach, DCLB, for scheduling OpenFlow flows in a fat-tree network. The DCLB approach can



discover a shortest path using the enhanced Dijkstra algorithm. We implemented all three modules on the POX controller and used the Mininet network emulator to evaluate the dynamically created sub-topology. The comparison results show that the DCLB approach achieves a higher actual transmission rate as the traffic load increases, and that it has better jitter performance than the DLB approach. The solution is not yet complete, however. Its most important and immediate enhancement would be to consider alternative low-cost load-balancing paths while searching for the shortest path, because we currently search only in the bottom-up direction in the sub-topology constructed from the full fat-tree network topology.

References
1. Tong R, Zhu X. A load balancing strategy based on the combination of static and dynamic. In: 2nd International Workshop on Database Technology and Applications (DBTA), IEEE, 2010: 1–4.
2. Leiserson C E. Fat-trees: universal networks for hardware-efficient supercomputing. IEEE Transactions on Computers, 1985, 34(10): 892–901.
3. Al-Fares M, Loukissas A, Vahdat A. A scalable, commodity data center network architecture. ACM SIGCOMM Computer Communication Review, 2008, 38(4): 63–74.
4. He C, Yeung K L, Jamin S. Packet-based load-balancing in fat-tree based data center networks. In: IEEE International Conference on Communications (ICC), 2014: 4011–40.
5. OpenFlow. http://archive.OpenFlow.org
6. POX OpenFlow controller. http://www.noxrepo.org/pox/about-po
7. Mininet. http://mininet.org/
8. Open vSwitch. http://openvswitch.org
9. Li Y, Pan D. OpenFlow based load balancing for Fat-Tree networks with multipath support. In: 12th IEEE International Conference on Communications (ICC'13), Budapest, Hungary, 2013: 1–5.
10. OpenFlow-based global load balancing in Fat-Tree. Advanced Materials Research, 2014, Vol. 2784(827): 4794–4798.
11. Jiang J-R, Huang H-W. Extending Dijkstra's shortest path algorithm for software defined networking. In: 16th Asia-Pacific Network Operations and Management Symposium (APNOMS), 2014.
12. Benson T, Anand A, Akella A, et al. Understanding data center traffic characteristics. ACM SIGCOMM Computer Communication Review, 2010, 40(1): 92–99.

Text Attentional Character Detection Using Morphological Operations: A Survey S. Arun Kumar, A. Divya, P. Jeeva Dharshni, M. Vedharsh Kishan, and Varun Hariharan

1 Introduction Image processing is the backbone of character recognition and the domain in which morphological operations are implemented. With image-processing operations we can process images from any high-resolution camera source to perform specific functions. As part of character recognition, grayscale morphological operations are used to separate text regions from non-text regions. Morphological operations — a mathematical approach to processing geometrical shapes in an image — help separate characters from diffuse or blurred images with poor backgrounds. Other image-degrading properties such as light flares, blur, and ill-defined background shapes lead to improper processing and can cause character recognition to fail; the basic grayscale morphological operations of opening, closing, dilation, and erosion can remove these irrelevant properties and thereby aid text recognition. In this system the order of the morphological operations matters, since the infimum and supremum taken over each neighbourhood make a substantial difference in detecting edges and in thickening or thinning each element of the image. Because grayscale images have only a single colour channel, morphological operations can easily be applied as filters to the image's bitmap data to arrive at a properly structured bitmap, which can then be used to detect and recognise characters.

S. A. Kumar · A. Divya · P. J. Dharshni · M. V. Kishan () · V. Hariharan SRM Institute of Science and Technology, Kattankulathur, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_39



S. A. Kumar et al.

2 Literature Survey Although many papers have discussed various methods of recognising characters in rough imagery, they all lack a sufficiently robust method. The published papers and their drawbacks are as follows: 1. This paper uses Convolutional Neural Networks to find characters in images, using per-character probabilities of characteristic features to recognise each letter. Although the outputs confirm most of the characters, the CE-MSER approach is useful only when the background and foreground entities are free of disturbances in the image. The foreground fonts cannot be distorted, since the CNN's characteristic probabilities drop in the presence of disturbances. 2. Although a fast Region-Convolutional Neural Network is used, this paper neglects to separate the text regions from the original image, which would speed up processing and accelerate text recognition. It performs no pre-processing to separate text from non-text regions, which can lower the CNN's confidence when extracting the text.

2.1 Proposed System A feasible way to extract text from natural scenes is to extract the characteristic structures of the text present in the image. This can be done with the image-processing method called grayscale morphology, which scans the geometrical shapes in the image and provides operations that extract parts of those shapes, capture their characteristic structure, and separate out differences in shape. In particular, when an image contains text, the structure of that text helps identify each character, and these special structures can be extracted with morphological operations. Morphological operations are used for character extraction because they can scan the image for structures and then transform these geometrical shapes to sharpen the background/foreground distinction. This yields a smoother image in which the text can still be detected even when degraded by an undesired agent such as a light flare or blur. Grayscale operations improve this further by making differently coloured layers easy to distinguish. Non-text regions and unwanted intrusions in the text region can be removed by the grayscale morphological operations of dilation (thickening) and erosion (thinning), since these operations modify the geometrical shapes in the image to make them more recognisable to the neural network used for character recognition. Artificial neural networks, being a fast way of characterising images, help recognise the text by creating several layers over



the image, taking small windows of it, scanning each window, and learning about each one. This trained scanning of the layers yields knowledge about the image that can be used to recognise the text, since the characteristics of each structure in the image are well pruned. Because only the text regions are passed to the neural network from the previous operation, the success rate of finding text with the ANN is high, and it is faster than scanning every layer of the original natural scene. This not only ensures that all the text is extracted but also that all the text in the image is correctly detected and recognised; the likelihood of recovering all the text from any natural image is high.

3 Modules 3.1 Image Pre-processing Pre-processing smooths the image in preparation for text extraction and eliminates unwanted entities. First, the image is resized to allow the subsequent operations. Then the colour image is converted to grayscale, which collapses the three RGB colour channels into a single monochrome channel of grey levels. The RGB-to-grayscale conversion uses luma coding, which takes a weighted sum of the colour components of each pixel as its single-channel value. Let I denote the grayscale value of a pixel; then

I(R, G, B) = 0.299R + 0.587G + 0.114B

Applying this weighted sum to every pixel yields the grayscale image. After conversion, a combination of morphological operations is applied to remove thin spurious parts of the image that could be misinterpreted and to fill voids. This is done with the operations called opening and closing, each of which is a combination of the morphological operations of dilation and erosion. Opening is thinning followed by thickening of shapes, while closing is thickening followed by thinning. Opening removes rough edges and thin parts of the image, which eliminates diffuse edges and thin layers of flares; after applying the opening operation, the thinner parts of the shapes in the image are removed. The formula for grayscale opening is as follows:



Fig. 1 The opening process performed using a disc like structuring element with the centre (red dot) as the origin

Fig. 2 The operation closing of square with a disc like structuring element and the centre of disc as the origin of the element

f ∘ b = (f ⊖ b) ⊕ b

An example of opening is shown in Fig. 1. Similarly, the closing operation is applied to the grayscale image to remove stray voids and holes, so that even a poor background terrain can be smoothed and the ruggedness of the natural image reduced, avoiding noise from the original image. The formula is:

f • b = (f ⊕ b) ⊖ b

Closing is illustrated graphically in Fig. 2. After these operations, the per-pixel difference between the opened image and the closed image is computed; this detects the curves of the text and, with the contrast set correctly, makes the text and non-text regions of the image distinguishable. This step can then be followed by binarization to detect the text.
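The four operations can be sketched in plain Python on a small grayscale array, with a flat structuring element given as a list of (dy, dx) offsets. This is a didactic sketch under those assumptions, not production code.

```python
def erode(img, se):
    """Grayscale erosion with a flat structuring element:
    each output pixel is the minimum (infimum) of the input
    over the in-bounds SE offsets."""
    h, w = len(img), len(img[0])
    return [[min(img[y + dy][x + dx]
                 for dy, dx in se
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Grayscale dilation: the maximum (supremum) over the
    reflected SE offsets."""
    h, w = len(img), len(img[0])
    return [[max(img[y - dy][x - dx]
                 for dy, dx in se
                 if 0 <= y - dy < h and 0 <= x - dx < w)
             for x in range(w)] for y in range(h)]

def opening(img, se):
    return dilate(erode(img, se), se)   # f o b = (f erode b) dilate b

def closing(img, se):
    return erode(dilate(img, se), se)   # f . b = (f dilate b) erode b
```

With a cross-shaped structuring element, opening wipes out an isolated bright speck (a flare dot) while closing fills an isolated dark hole, which is exactly the noise-removal behaviour described above.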

3.2 Text Area Detection This module detects the text area using the images from the previous module. The difference, opening, and closing images are used to create an image that has undergone threshold binarization, converting it into binary form, i.e., 0s and 1s. This makes text-area detection



Fig. 3 Different levels of gradients

easier, since the curves of the text are retained in the difference image and can then be detected in the binarized image to select the specific regions of the image in which text occurs. Threshold binarization creates multiple gradient-based images in order to detect every piece of text present; it can be relied on to capture all the text in the image, since all possible gradients are applied and then processed to find the text areas in the natural image. Different types and levels of gradients are shown in Fig. 3.
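The difference-then-threshold step can be sketched as follows (the helper names are illustrative; the inputs would be the closing- and opening-filtered images from the previous module):

```python
def difference(img_a, img_b):
    """Per-pixel difference between two same-sized grayscale
    images (e.g. closing minus opening); text strokes show up
    as high-difference regions."""
    return [[a - b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def binarize(img, threshold):
    """Threshold binarization: 1 where the pixel exceeds the
    threshold, 0 elsewhere."""
    return [[1 if p > threshold else 0 for p in row] for row in img]
```

Sweeping the threshold over several gradient levels, as the text describes, amounts to calling binarize with a range of threshold values and keeping the regions that persist.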

3.3 Non-text Area Removal This module is the second place where morphological operations are needed; here only the basic grayscale operations — thinning and thickening — are used. With most of the text regions already detected, the non-text areas are now removed to discard unwanted sources. The erosion and dilation processes ensure that the structuring element eliminates all unwanted regions of the image. Erosion removes any area that does not fit the structuring element; this removes holes and thins every curve and edge, so that unwanted regions are pulled away and undesired agents and small background intrusions can be discarded. Thin light flares and the dots produced by binarization are easily removed by this thinning process. Erosion uses the following formula:

(f ⊖ b)(x) = inf_{y∈E} [ f(y) − b(y − x) ]



The infimum is used for this process; the thinning method is illustrated graphically in Fig. 4. Dilation is the thickening process; performed after erosion (i.e., the removal of unwanted regions), it thickens the text regions that may have lost pixels during binarization. The thickening fills small holes in the textual region, which prevents important information in the image from going unrecognised. Dilation uses the following formula:

(f ⊕ b)(x) = sup_{y∈E} [ f(y) + b(x − y) ]

Here the supremum is taken into account. A possible dilation is shown diagrammatically in Fig. 5. After the dilation process, the image contains only the text area, with the curvature and characteristics of the text well highlighted, which makes the subsequent text-recognition step easier.
Fig. 4 Erosion using a disc structuring element with the red dot as the origin; the thinning process reduces the size of the square

Fig. 5 The process of dilation done using a disc structural element and the centres of the disc as the origin of the structure



3.4 Text Recognition Once the image is cleared of non-text regions and unwanted agents that might mislead recognition, an artificial neural network applied to the image recognises sets of characteristics of each structure. These form the network's input layers, which feed multiple hidden layers extracted from those sets and yield the corrected text output from the image. The hidden layers characterise the structures scanned from the image, and each structure contributes to the confidence level for guessing a character. Since the only regions left to classify are text regions, the neural network can reliably find the exact characters in the image and return the correct information in textual form.

4 Conclusions The use of morphological operations across multiple modules and layers — to find and extract the text regions from the whole image, and to remove non-textual regions and irrelevant entities — leads to proper recognition of text from natural scene images. The use of neural networks further ensures that all the text is recognised, and recognised correctly.

References
1. Tong He, Weilin Huang, Yu Qiao, Jian Yao, Text-Attentional Convolutional Neural Network for Scene Text Detection, 2016, IEEE Transactions on Image Processing.
2. Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman, Reading Text in the Wild with Convolutional Neural Networks, 2016, International Journal of Computer Vision.
3. Zheng Zhang, Wei Shen, Cong Yao, Xiang Bai, Symmetry-Based Text Line Detection in Natural Scenes, 2015, IEEE CVPR.
4. Weilin Huang, Yu Qiao, Xiaoou Tang, Robust Scene Text Detection with Convolution Neural Network Induced MSER Trees, 2014, ECCV, Springer International Publishing.
5. Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, Hong-Wei Hao, Robust Text Detection in Natural Scene Images, 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
6. Pan He, Weilin Huang, Yu Qiao, Chen Change Loy, Xiaoou Tang, Reading Scene Text in Deep Convolutional Sequences, 2015, arXiv.org.
7. Max Jaderberg, Andrea Vedaldi, Andrew Zisserman, Deep Features for Text Spotting, 2014, robots.ox.ac.uk.
8. Hailiang Xu, Feng Su, A Robust Hierarchical Detection Method for Scene Text Based on Convolutional Neural Networks, 2015, IEEE ICME.



9. MM Farhad, SM Naful Hossain, Ahmed Shehab Khan, Atiqul Islam, An Efficient Optical Character Recognition Algorithm using Artificial Neural Network by Curvature Properties of Characters, 2014, International Conference on Informatics, Electronics & Vision.
10. Richa Sharma, Arun Jain, Ritika Sharma, Jyoti Wadhwa, CDRAMM: Character and Digit Recognition Aided by Mathematical Morphology, 2013, IJCTA.
11. Sanatan Sukhija, Subhash Panwar, Neeta Nain, CRAMM: Character Recognition Aided by Mathematical Morphology, 2013, IEEE.
12. Ruchi Tuli, Character Recognition in Neural Networks using Back Propagation Method, 2012, IEEE Advance Computing Conference.

IoT Based Environment Monitoring System A. Vidhyavani, S. Guruprasad, M. K. Praveen Keshav, B. Pranay Keremore, and A. Koushik Gupta

1 Introduction A recent report from the World Health Organization, drawing on measurements as of 2016 from air-monitoring stations in 4300 cities, establishes clearly that air pollution is a global problem. A staggering 9 out of 10 people on Earth breathe highly polluted air, and more than 80% of urban residents endure outdoor pollution that exceeds health guidelines, according to the WHO. A wide range of air pollutants can affect health — nitrogen oxides, carbon monoxide, and ozone among them. Carbon monoxide (CO) is a lethal gas that you can neither see nor smell. CO is emitted whenever fuel or other carbon-based materials are burned, and it typically originates from sources in or near the home that are not properly maintained or vented. Everyone is at risk of CO poisoning; unborn children, infants, the elderly, and people with chronic heart disease, anaemia, or respiratory problems are generally at greater risk than others. Breathing CO [6–8] can cause headache, dizziness, vomiting, and nausea. If CO levels are high enough, loss of consciousness or death can follow. Exposure to moderate and high levels of CO over long periods has also been linked with an increased risk of heart disease, and people who survive severe CO poisoning may suffer long-term health problems [1].

A. Vidhyavani () · S. Guruprasad · M. K. Praveen Keshav · B. Pranay Keremore · A. Koushik Gupta
SRM Institute of Science and Technology, Chennai, India
© Springer Nature Switzerland AG 2020
S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_40


The Internet of Things (IoT) [2] is a computing concept describing the idea of everyday physical objects being connected to the Internet and being able to identify themselves to other devices. The term is closely associated with RFID as the method of communication, although it may also include other sensor technologies, wireless technologies, or QR codes [5]. The IoT is significant because an object that can represent itself digitally [3] becomes something greater than the object by itself. The object no longer relates only to its user; it is now connected to surrounding objects and database data. When many objects act in unison, they are said to have "ambient intelligence" [4].

2 Proposed System
Our proposed system uses wireless sensors coupled with a single-board computer. Data are collected by these sensors and sent to a cloud server. The data are then retrieved from the cloud server and viewed in the form of graphs and tables on a personal computer.
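As a minimal Python sketch of this pipeline, a round of sensor readings can be packaged as a JSON record ready for upload to the cloud server. The function name, field layout, and node identifier below are our own illustration, not part of the deployed system:

```python
import json
import time

def build_payload(node_id, readings, timestamp=None):
    """Package one round of sensor readings as a JSON record
    ready to be uploaded to the cloud server."""
    return json.dumps({
        "node": node_id,
        "timestamp": timestamp if timestamp is not None else int(time.time()),
        # e.g. {"co_ppm": ..., "temp_c": ..., "humidity": ..., "pm25": ...}
        "readings": readings,
    })

# Example: one record from a hypothetical node in a dense part of the city
record = build_payload("node-07",
                       {"co_ppm": 4.2, "temp_c": 31.5,
                        "humidity": 68.0, "pm25": 55.1},
                       timestamp=1546300800)
```

On the server side, the same record can be decoded with `json.loads` and inserted into a MySQL table for later retrieval and plotting.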

2.1 System Overview
The data-acquisition system has various sensors deployed in dense parts of the city. These sensors are listed in Table 1.

Table 1 Comparison parameters

Model             Parameter
Analog sensor v2  Sound
DHT22             Temperature and relative humidity
MQ9               Carbon monoxide
Sharp GP2Y10      Particulate matter 2.5


The sensors are connected to a Raspberry Pi 3B+ board. The board has a quad-core processor, 1 GB of RAM, integrated dual-band (2.4 GHz and 5 GHz) Wi-Fi, Ethernet speeds of up to 300 Mbps, and Bluetooth 4.2. The web server was implemented with XAMPP, a software package used to run dynamic servers including MySQL and PHP.

2.2 Flow Chart

Fig. 1 Flow chart


2.3 System Architecture

Fig. 2 System architecture

2.4 System Requirements
Figure 1 shows the flow chart and Fig. 2 shows the system architecture of the proposed system; the system requirements are explained below:
1. Server: In order to receive and upload data to the website, a computer with a Core i5 processor, 4 GB of RAM, and 120 GB of hard-disk space is required.
2. Nodes: A Raspberry Pi 3B+ is used to connect the sensors to the board and to measure and send data to the remote server.
3. Sensors: Multiple sensors are used to measure each monitored parameter. The sensors used are:
(a) Analog sensor v2
(b) DHT22
(c) MQ9
(d) Sharp GP2Y10


4. Apache: Apache is an open-source HTTP web server for Unix platforms (BSD, GNU/Linux, and so on), Windows, Macintosh, and others, which implements the HTTP/1.1 protocol and the notion of virtual hosts.
5. MySQL: MySQL is the most popular open-source database and a common choice for web-based applications.
6. Python: Python is an open-source programming language used to code the sensors connected to the Raspberry Pi.

3 Implementation
1. Install Raspbian OS on the Raspberry Pi and configure it.
2. Connect the sensors to the GPIO pins of the Raspberry Pi.
3. Write the necessary Python code to measure and read data from the sensors.
4. Send the data received from the sensors to the cloud server for storage.
5. Use a desktop computer to evaluate the data and plot it in the form of tables or graphs.
6. Using the acquired data, identify where pollutant levels are high in the city.
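Steps 5 and 6 above can be sketched in a few lines of Python. The record format and location names below are illustrative only; the actual records would come from the cloud database:

```python
from collections import defaultdict

def average_by_location(records, pollutant):
    """Average a pollutant's readings per location (step 5), so the
    most polluted areas of the city can be identified (step 6)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for rec in records:
        sums[rec["location"]] += rec[pollutant]
        counts[rec["location"]] += 1
    return {loc: sums[loc] / counts[loc] for loc in sums}

# Hypothetical records retrieved from the cloud server
records = [
    {"location": "market", "pm25": 80.0},
    {"location": "market", "pm25": 90.0},
    {"location": "park",   "pm25": 30.0},
]
averages = average_by_location(records, "pm25")
worst = max(averages, key=averages.get)  # location with the highest PM2.5
```

The per-location averages can then be plotted as the tables and graphs described in step 5.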

4 Result and Conclusion
With this project we can create a cost-effective model that measures the pollutants present in the environment. With this model we can track the pollutants at various locations across the city.

4.1 Advantages
This project promotes open access and low-cost, commercially available technologies for monitoring environmental factors related to pollution and quality of life in a given geographical area, at the local level.
• Each node can measure, store, transmit, and display over the Internet: carbon monoxide (CO), temperature, relative humidity, PM2.5, noise, and UV radiation, from nodes distributed across the city.
• Both the node and the platform allow additional sensors to be attached easily, enabling different applications as the community needs.


• Nodes are easy to use, which allows them to become a citizen training tool for local government and a complement to public environmental-monitoring policies.
• Each node produces 8640 records. From this large volume of data, meaningful information was generated. That information prompts the community at the local level to be aware of its environment and act accordingly. It also allows decision-makers to make better choices and plan strategies to manage pollution at several geographical scales.

References
1. Velásquez, Pablo, Lorenzo Vásquez, Christian Correa, and Diego Rivera. "A low-cost IoT based environmental monitoring system. A citizen approach to pollution awareness." In Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), 2017 CHILEAN Conference on, pp. 1–6. IEEE, 2017.
2. Ibrahim, Mohannad, Abdelghafor Elgamri, Sharief Babiker, and Ahmed Mohamed. "Internet of things based smart environmental monitoring using the Raspberry-Pi computer." In Digital Information Processing and Communications (ICDIPC), 2015 Fifth International Conference on, pp. 159–164. IEEE, 2015.
3. Kiruthika, R., and A. Umamakeswari. "Low cost pollution control and air quality monitoring system using Raspberry Pi for Internet of Things." In 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 2319–2326. IEEE, 2017.
4. Shah, Jalpa, and Biswajit Mishra. "IoT enabled environmental monitoring system for smart cities." In Internet of Things and Applications (IOTA), International Conference on, pp. 383–388. IEEE, 2016.
5. Piersanti, Antonio, Gaia Righini, Felicita Russo, Giuseppe Cremona, Lina Vitali, and Luisella Ciancarella. "Spatial representativeness of air quality monitoring stations in Italy." In Proceedings of 'The 15th International Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes', San José R, Pérez JL, editors. Madrid, Spain, pp. 108–112. 2013.
6. Piersanti, Antonio, Gaia Righini, Felicita Russo, Giuseppe Cremona, Lina Vitali, and Luisella Ciancarella. "H15-66: Spatial representativeness of air quality monitoring stations in Italy." In Proceedings of the 15th International Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes, HARMO 2013. Environmental Software and Modelling Group, 2013.
7. Zanella, Andrea, Nicola Bui, Angelo Castellani, Lorenzo Vangelista, and Michele Zorzi. "Internet of things for smart cities." IEEE Internet of Things Journal 1, no. 1 (2014): 22–32.
8. Skeledzija, Niksa, Josip Cesic, Edin Koco, Vladimir Bachler, Hrvoje Nikola Vucemilo, and Hrvoje Dzapo. "Smart home automation system for energy efficient housing." In Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp. 166–171. IEEE, 2014.

Design and Development of Algorithms for Detection of Glaucoma Using Water Shed Algorithm Fazlulla Khan, Ashok Kusagur, and T. C. Manjunath

1 Introduction
In this section, the design and development of the algorithms for the automatic detection of glaucoma using the watershed algorithm is presented. The methodology used in this research work to detect glaucoma is shown in the flow chart in Fig. 1 [1].

1.1 Fundus Image Database
A fundus image database was obtained from Friedrich-Alexander University. The database was captured using a Canon CR-1 fundus camera, which has a 45° field of view. The database consists of 60 images [2].

F. Khan EEE Department, VTU Research Centre, Government BDT College of Engineering, Davanagere, Karnataka, India ECE Department, HMS Institute of Technology, Tumakuru, Karnataka, India A. Kusagur Electrical and Electronics Engineering (EEE) Department, Government BDT College of Engineering, Davanagere, Karnataka, India T. C. Manjunath () ECE Department, DSCE, Bangalore, Karnataka, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_41


Fig. 1 Flowchart of the process

start → fundus image database → convert image to grayscale & resize → fundus image as pixel stream → pre-process the image → apply watershed algorithm to segment OC/OD → find area of OC & OD → find CDR → determine if image is glaucoma or normal → end

Fig. 2 Sample image

1.2 Convert Image to Grayscale and Resize
The fundus image is in RGB and very large in size (Fig. 2). The watershed algorithm requires a grayscale image, so the fundus image is converted to grayscale and resized to 512 × 512 for easier image processing [3].
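The conversion step can be sketched in NumPy as follows. This is a minimal illustration; the actual flow used MATLAB for the conversion, and the luminosity weights and nearest-neighbour resizing below are standard choices assumed by us, not stated in the text:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image of shape (H, W, 3) to grayscale using the
    standard luminosity weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def resize_nearest(img, size=(512, 512)):
    """Nearest-neighbour resize of a 2-D image to the target size."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

# Stand-in for a large RGB fundus image
rgb = np.random.rand(1000, 1500, 3)
gray = resize_nearest(to_grayscale(rgb))   # 512 x 512 grayscale
```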

1.3 Sending the Image as a Pixel Stream
Due to the limited number of input pins on an FPGA, the fundus image, a very large 2-D matrix, must be sent as a serialised dataset. We send the fundus-image pixel data as a serial stream together with control signals, which indicate the location of each pixel within the image frame. The image is sent over a pixel control bus containing five signals, which are Boolean in nature. To illustrate the streaming-pixel protocol developed in our work, a 2-by-6-pixel grayscale image is shown with its pixel values. The image is serialised one line at a time, left to right. The resulting timing diagram shows the control signals and pixel data corresponding to the image in Fig. 2 [4, 9].
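The streaming protocol described above can be sketched as a Python generator. The five control-signal names used here (hStart, hEnd, vStart, vEnd, valid) are our assumption about the bus layout, since the text only says the bus carries five Boolean signals:

```python
def pixel_stream(frame):
    """Serialise a 2-D image line by line, left to right, pairing each
    pixel with five Boolean control signals: hStart/hEnd mark line
    boundaries, vStart/vEnd frame boundaries, valid flags an active pixel."""
    rows, cols = len(frame), len(frame[0])
    for r in range(rows):
        for c in range(cols):
            ctrl = {
                "hStart": c == 0,
                "hEnd": c == cols - 1,
                "vStart": r == 0 and c == 0,
                "vEnd": r == rows - 1 and c == cols - 1,
                "valid": True,
            }
            yield frame[r][c], ctrl

# A 2-by-6 grayscale image, as in the text's example
frame = [[10, 20, 30, 40, 50, 60],
         [15, 25, 35, 45, 55, 65]]
stream = list(pixel_stream(frame))
```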

1.4 Preprocessing the Image
The received pixel stream must then be preprocessed: some noise is removed, and morphological erosion and dilation are applied. In dilation, pixels are added to object boundaries in an image, while in erosion pixels are removed from object boundaries. The size and shape of the structuring element used to process the image determine the number of pixels added to or removed from the objects [5, 9] (Fig. 3).
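A direct (non-optimised) Python sketch of grayscale erosion and dilation with a flat 3 × 3 structuring element illustrates the two operations; the hardware uses the optimised architectures described in the following subsections:

```python
def morph(img, op, k=1):
    """Grayscale erosion (op=min) or dilation (op=max) with a flat
    (2k+1) x (2k+1) structuring element; borders handled by clipping."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            window = [img[i][j]
                      for i in range(max(0, r - k), min(h, r + k + 1))
                      for j in range(max(0, c - k), min(w, c + k + 1))]
            out[r][c] = op(window)
    return out

img = [[5, 5, 5],
       [5, 9, 5],
       [5, 5, 5]]
eroded = morph(img, min)    # erosion removes the bright centre pixel
dilated = morph(img, max)   # dilation spreads the bright pixel outward
```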

1.5 Grayscale Erosion
The morphological erosion of the grayscale image operates on intensity values; this block uses the van Herk algorithm to find the minimum, as shown in Fig. 4. The algorithm uses only 3 comparators to find the minima of all the rows, then uses a comparison tree to find the minimum of the row results. For structuring elements such as lines, rectangles, or squares wider than 8 pixels, the van Herk algorithm is used. All pixels in the structuring element must be set to one. The structuring element is decomposed into rows, and the minimum of each row is found serially using the van Herk algorithm. The line memory also adds horizontal padding up to a multiple of m if the input frame size is not a multiple of m pixels. Only 3 comparators in total across all rows are used in this implementation. If there is more than a single row, the minimum of the row results is then computed using a comparison tree. The van Herk kernel computes a minimum by running backwards and forwards over the neighbourhood of each row, so the pixel order in the row has to be reversed and buffered. This adds latency relative to the comparison-tree implementation [6, 9].
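The 1-D running minimum at the heart of this block can be sketched as follows. This is a software illustration of the van Herk (Gil-Werman) scheme under our own naming; the padding mirrors the line memory's horizontal padding to a multiple of the window width described above:

```python
def van_herk_min(signal, w):
    """1-D running-minimum filter of window w (van Herk / Gil-Werman).
    Costs about 3 comparisons per sample regardless of window size."""
    n = len(signal)
    # pad with +inf so the length is a multiple of w and every window fits
    pad = (-n) % w
    if pad < w - 1:
        pad += w
    x = list(signal) + [float("inf")] * pad
    g = [0.0] * len(x)  # forward running minima within each w-sized block
    h = [0.0] * len(x)  # backward running minima within each block
    for b in range(0, len(x), w):
        g[b] = x[b]
        for i in range(b + 1, b + w):
            g[i] = min(g[i - 1], x[i])
        h[b + w - 1] = x[b + w - 1]
        for i in range(b + w - 2, b - 1, -1):
            h[i] = min(h[i + 1], x[i])
    # window [i, i+w-1]: combine the backward and forward running minima
    return [min(h[i], g[i + w - 1]) for i in range(n)]

row_min = van_herk_min([5, 3, 8, 2, 7, 6, 4], 3)
```

Each output needs only one `min` of two precomputed values, which is why the hardware needs a fixed, small number of comparators per row regardless of the structuring-element width.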

Fig. 3 Van Herk implementation

Fig. 4 Van Herk implementation

Fig. 5 Grayscale dilation

1.6 Grayscale Dilation
Similar to grayscale erosion, when the structuring element is wider than 8 pixels the van Herk algorithm is used; a comparison tree is used when the structuring element is narrower than 8 pixels. The architecture of the dilation operation is shown in Fig. 5, with the comparison tree in Fig. 6. The maxima of the horizontal lines are found in parallel, and the row maxima are then combined using another comparison tree. If the selected neighbourhood is a rectangle of width m pixels, the comparison tree uses m − 1 comparators over log2(m) clock cycles. For example, for a rectangular neighbourhood 7 pixels wide, the comparison tree contains 6 comparators over 3 clock cycles [9].

2 Watershed Algorithm
The watershed algorithm is used for image segmentation by finding the watershed lines. Imagine holes at each regional minimum, with water flooding in from the bottom through these holes at a constant rate; the water level rises uniformly over the topographic surface. When the rising water in different catchment basins is about to merge

Fig. 6 Comparison tree

Fig. 7 Watershed algorithm (a)–(e)

with nearby catchment basins, a dam is built to prevent the water from merging. Flooding continues until only the tops of the dams are visible above the water line. These continuous dam boundaries are the watershed lines, and this is the concept used for segmentation in our research work [7, 9]. Watershed algorithms based on the watershed transformation fall into two main classes: flooding-based algorithms, the traditional approach, and rain-falling-based algorithms. Many algorithms have been proposed in both classes, but the connected-components-based watershed algorithm, which belongs to the rain-falling class, shows very good performance compared with the others. It gives very good segmentation results and meets the low-computational-complexity requirement for hardware implementation [8, 9]. With reference to Fig. 7a–e, the following concepts are used in our work:
• Input image: usually a grayscale image.
• Marker image: an image of the same dimensions as the input, containing the seed points, each with a different label. They usually correspond to the local minima of the input image but can be set arbitrarily.
• Optional mask image: a binary image used to restrict the area where the algorithm is applied. Set to "None" to run the method on the whole input image.
• Output: an image containing the catchment basins and the watershed lines (dams).


The traditional watershed-based algorithm uses a hierarchical queue for the flooding process. This queue requires non-uniform memory-access sequences, and the complexity of the traditional algorithm becomes very high because of the need to manage the hierarchical queue. An efficient watershed algorithm based on connected components was developed by Bieniek and Moga; it does not require a hierarchical queue as the traditional implementation does. This algorithm gives the same segmentation results as the traditional watershed algorithm while offering lower complexity, a simple data structure, and short execution time. It connects each pixel to its lowest neighbour pixel, and all pixels connected to the same lowest neighbour form a segment. It uses a FIFO queue and a stack to perform the same functionality as the hierarchical queue of the traditional algorithm. The disadvantage of this algorithm for hardware implementation is the extra accesses to the FIFO queue and stack, which are difficult and inefficient to implement in an FPGA. The algorithm was slightly modified by Maruyama and Trieu to simplify the memory access, and that variant is used in this research work. The concept of the connected-components-based algorithm is as follows: the original 6 × 6 example image has 3 local minima, indicated by gray boxes. If a pixel is not a local minimum, it is connected to its lowest neighbour, as shown by arrows, where m indicates a local minimum. All pixels directed towards the same local minimum form a segment and are given the same label value [9].
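A simplified Python sketch conveys the connected-components ("rain falling") idea: each pixel follows its lowest neighbour downhill until it reaches a local minimum, and all pixels draining to the same minimum share a label. This sketch is our own simplification; it omits the plateau and distance handling (steps 2 and 3 of the full algorithm described next) and uses 4-connectivity:

```python
def rainfall_labels(img):
    """Label each pixel by the local minimum its steepest-descent
    path reaches; pixels sharing a minimum form one segment."""
    h, w = len(img), len(img[0])

    def lowest_neighbour(r, c):
        best = (r, c)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and img[nr][nc] < img[best[0]][best[1]]:
                best = (nr, nc)
        return best

    labels = [[0] * w for _ in range(h)]
    minima = {}
    for r in range(h):
        for c in range(w):
            p = (r, c)
            while True:                  # descend to a local minimum
                q = lowest_neighbour(*p)
                if q == p:
                    break
                p = q
            if p not in minima:          # first time this minimum is seen
                minima[p] = len(minima) + 1
            labels[r][c] = minima[p]
    return labels

# Small example with two local minima (the two '1' pixels)
img = [[3, 2, 3, 8, 3, 2],
       [4, 1, 4, 9, 4, 1],
       [5, 6, 7, 9, 6, 5]]
labels = rainfall_labels(img)
```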

2.1 Details of the Algorithm
The pseudo-code of the connected-components-based watershed algorithm is presented below. In this algorithm, p represents a pixel, f is the input pre-processed image (processed by filtering and morphological gradient), l is the segmented label image, f(p) is the gray-level value of p, n is a neighbour pixel of p, and f(n) is the gray-level value of that neighbour. The array l[p] is used to store the labels, and v[p] stores the distance from the lowest plateau or pixels. LMAX and VMAX denote the maximum label value and the maximum distance in the system, respectively; VMAX is the distance from the first pixel of the first row to the last pixel of the last row. Scan Step 2 and Scan Step 3 decide whether to continue or stop the image scan for step 2 and step 3, respectively [9].

2.1.1 Step 1
The lowest neighbourhood of each pixel is found in step 1. Initially, the array v[p] is zero for all elements. The input image is scanned from top left to bottom right, and v[p] is set to '0' if the pixel is lower than or equal to its neighbourhood values; otherwise it is set to '1'. As indicated by the gray box, the gray-level value '2' is the lowest in its whole neighbourhood, so v[p] is set to '0' for it. The gray value '5' has an equal neighbourhood, so v[p] is set to '0' for it as well. The small plateau with three values of '8' is coloured light gray [9].

2.1.2 Step 2
The principle of step 2 is that if a pixel is on a plateau and one of its neighbours points to a local minimum, then the pixel points to that neighbour. To realize this, all pixels with v[p] not equal to 1 that have neighbour pixels on the same plateau with v[p] set to 1 in step 1 are considered; then the shortest distance of each plateau pixel to a non-zero v[p] neighbour is calculated. All gray boxes are on the same plateau. Here, the value '2' is directed to its left neighbour '1', '3' to its left neighbour '2', '4' to its neighbour '3', and so on. During the upward scan, the value '11' is directed to '10', '12' to '11', and so on. The downward and upward scans continue until all the shortest plateau distances have been calculated; only two scans are needed to finish step 2 in this example. The distance value d assigned during the downward scan may be overwritten in the upward scan when the distance d from a previous scan is higher than that of the lowest plateau located in the downward region [9].

2.1.3 Step 3
The labels are assigned to the array l[p] in this step. All elements of l[p] are initialized to zero. Labels are first given to the pixels on local-minimum plateaus whose v[p] is zero, if their neighbouring pixels with the same gray value have not yet been assigned labels. These labels are propagated to neighbouring pixels according to the values of v[p], growing a region around the centre of each local minimum. Similar regions are created for all other local minima by the same procedure. Different labels may be assigned on the same local-minimum plateau, but they are overwritten in subsequent scans. For example, one figure refers to the first downward scan, and gray boxes indicate newly assigned labels. New labels propagate to neighbouring elements according to the values of v[p]. As shown by the gray boxes, labels are delivered in the upward direction. The label '3' (in bold) denotes a location that was assigned label '5' in a previous scan but is overwritten by the correct value in this scan. The figures also show label propagation by gray boxes. When no labels change compared with the previous scan, no more scans are needed. A total of six scans are required in step 3 for the given sample input image [9].
Find the area of the cup and disc and their ratio: the areas of the optic cup and optic disc are calculated from the segmented data using different thresholds. Finally, the cup-to-disc ratio is calculated from the obtained areas [9].
Determine the result: if the obtained ratio is less than 0.4, the fundus image is of a normal eye; if the ratio is higher, the fundus image is determined to be of a patient suffering from glaucoma. After this, the data are sent back to the PC [9].
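The final two steps (area, ratio, decision) can be sketched as follows. The 0.4 threshold comes from the text; the binary-mask representation and function names are illustrative only:

```python
def cup_to_disc_ratio(cup_mask, disc_mask):
    """Compute the cup-to-disc ratio (CDR) from binary segmentation
    masks: the optic-cup area divided by the optic-disc area."""
    cup_area = sum(sum(row) for row in cup_mask)
    disc_area = sum(sum(row) for row in disc_mask)
    return cup_area / disc_area

def diagnose(cdr, threshold=0.4):
    """A CDR below the threshold indicates a normal eye; a higher
    CDR indicates suspected glaucoma."""
    return "normal" if cdr < threshold else "glaucoma suspect"

# Toy segmented masks: disc area = 12 pixels, cup area = 3 pixels
disc = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]
cup = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0]]
cdr = cup_to_disc_ratio(cup, disc)   # 3 / 12 = 0.25
```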


3 Simulation Results for the Watershed Algorithm
The image was converted to grayscale, sent through the pixel bus, and processed by the image-processing algorithm, and the simulation results were obtained. The targeted FPGA device was from the Virtex-7 family, part xc7vx485t-ffg1761-2. MATLAB 2017b was used to convert the images to grayscale and to send each image as a pixel stream. Vivado 2018.1 was used to synthesize the design, and ModelSim SE-64 10.5 was used to view the generated waveforms. The fundus image database was obtained from Friedrich-Alexander University; it was captured with a Canon CR-1 fundus camera with a 45° field of view and consists of 60 fundus images, of which 30 are of normal eyes and 30 of eyes suffering from glaucoma [9]. The grayscale image is sent to the targeted design through the pixel bus, the image processing is done on the targeted device, and the final images along with the results are sent back. The pre-processing, segmentation, and calculation were done on the targeted device, and the results were returned through the pixel bus. In this section we present the images received at the early and final stages of image processing, the waveforms from ModelSim, and the elaborated and schematic designs generated by Vivado. The synthesized design is also shown, and the final results are observed in MATLAB [9].

3.1 Normal Eyes

3.1.1 Case 1: Normal Eye
For image 6, image 1, and image 2. Similarly, from the following sets of images, which show the different stages when the method is applied to the fundus image of a glaucomatous eye, the cupping can be observed more easily.

3.1.2 Case 2: Moderate Glaucoma
For image 1, image 6, and image 4.

3.1.3 Case 3: Glaucoma Eye
For image 2, image 9, and image 13.

3.1.4 Input and Output Images

Simulation was carried out for a set of 60 images taken from the fundus database. The developed code was run and the following results were observed. Only a few images are shown here for the sake of illustration; all input images taken from the database were likewise processed and their results obtained. The following images illustrate the different stages of the process. Three cases were considered, normal, moderate, and severe, and one set of results is shown for each. For the normal eye, we can observe from Figs. 8, 9, 10, 11, 12, 13, 14, 15 and 16 that dilation of the cup is not occurring [9].

4 Schematic Design
Finally, the performance obtained over the total database is shown (Fig. 17).

Fig. 8 (a) Original fundus image of normal eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup


Fig. 9 (a) Original fundus image of normal eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup

Fig. 10 (a) Original fundus image of normal eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup

Fig. 11 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup

Fig. 12 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup

Fig. 13 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup


Fig. 14 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, and (d) after segmentation of optic cup

Fig. 15 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, (d) after segmentation of optic cup

Fig. 16 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of optic disc, (d) after segmentation of optic cup

Fig. 17 Schematic design


Table 1 Performance characteristics using watershed algo-1

Type of images   No. of images   Correct diagnosis   Percentage
Normal eye       30              29                  96.67
Glaucoma eye     30              29                  96.67
Total            60              58                  96.67

Fig. 18 ModelSim signals

4.1 ModelSim Signals
Table 1 summarizes the results obtained and gives the comparative statistics of the final result analysis. Further tables (not shown) give the results obtained for the glaucoma images and for the healthy images. Simulations were carried out for 60 images from the database; only 30 are shown here for convenience [9] (Fig. 18).

5 Conclusions
Research was carried out on the detection of glaucoma using the watershed algorithm implemented in FPGA HDL, with the modelling done in the ModelSim and Xilinx environments. The results show the efficiency of the developed methodology, and the comparative results further demonstrate its efficacy (Figs. 19 and 20).

Fig. 19 On-chip power statistics

On-chip power: dynamic 5.110 W (95%), comprising signals 0.271 W, logic 1.950 W, BRAM 0.238 W, and I/O 0.851 W; device static 0.284 W (5%).

Fig. 20 Synthesis results

References
1. O. Sheeba and A. Sukeshkumar: Neural Networks in the diagnosis of Diabetic Retinopathy. Int. Conf. on Mod. & Simu. (MS'09), Trivandrum, pp. 256–259 (2009).
2. C. Angulo, L. González, A. Catalá, and F. Velasco: Multi-classification with tri-class support vector machines: A review. Springer's Proc. 9th Int. Work Conf. Artif. Neural Netw., IWANN 2007, San Sebastián, Spain, pp. 276–283, 20–22 (2007).
3. N. Cristianini and C. Campbell: Dynamically adapting kernels in support vector machines. Proc. Adv. Neural Inf. Process. Syst., pp. 204–210 (1998).
4. E.H. Galilea, G. Santos-García, I.F. Suárez-Bárcena: Identification of glaucoma stages with artificial neural networks using retinal nerve fibre layer analysis and visual field parameters. Innovations in Hybrid Intelligent Syst. (Part of Advances in Soft Computing), Springer, Berlin, Heidelberg, Vol. 44, pp. 418–424 (2007).
5. Sheeba O., Jithin George, Rajin P. K., Nisha Thomas, and Sherin George: Glaucoma Detection Using Artificial Neural Network. IACSIT International Journal of Engineering and Technology, Vol. 6, No. 2, pp. 158–161 (April 2014).


6. Aliaa Abdel-Haleim Abdel-Razik Youssif, Atef Zaki Ghalwash, Amr Ahmed Sabry Abdel-Rahman Ghoneim: Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter. IEEE Trans. on Medical Imaging, Vol. 27, Issue 1, pp. 11–18 (Jan. 2008).
7. Meindert Niemeijer, Michael D. Abramoff, Bram van Ginneken: Fast detection of the optic disc and fovea in colour fundus photographs. Med. Image Anal., Vol. 13, No. 6, pp. 859–870 (Dec. 2009).
8. Nilanjan Dey, Anamitra Bardhan Roy, Moumita Pal, Achintya Das: FCM Based Blood Vessel Segmentation Method for Retinal Images. Int. Jour. of Comp. Sci. & Network, IJCSNS, ISSN 2277-5420, Vol. 1, Issue 3, pp. 109–116 (2012).
9. http://www.1hoosh.com

A Novel Development of Glaucoma Detection Technique Using the Water Shed Algorithm Fazlulla Khan and Ashok Kusagur

1 Introductory Part
In this section, the design and development of the algorithms for the automatic detection of the eye disease (glaucoma) using the watershed algorithm is presented. The methodology used in this research work to detect glaucoma is shown in the flow chart in Fig. 1 [1].

1.1 Fundus Image Database
A fundus image database was obtained from Friedrich-Alexander University. The database was captured using a Canon CR-1 fundus camera, which has a 45° field of view. The database consists of 60 images [2].

F. Khan () EEE Department, VTU Research Centre, Government BDT College of Engineering, Davanagere, Karnataka, India ECE Department, HMS Institute of Technology, Tumakuru, Karnataka, India A. Kusagur Electrical and Electronics Engineering (EEE) Department, Government BDT College of Engineering, Davanagere, Karnataka, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_42


Fig. 1 Flowchart of the process

start → fundus image database → convert image to grayscale & resize → fundus image as pixel stream → pre-process the image → apply watershed algorithm to segment OC/OD → find area of OC & OD → find CDR → determine if image is glaucoma or normal → end

Fig. 2 Sample image

1.2 Convert Image to Grayscale and Resize
The fundus image is in RGB and very large in size (Fig. 2). The watershed algorithm requires a grayscale image, so the input image is converted to grayscale and the grayscale image is resized to 512 × 512 for easier image processing [3].

1.3 Sending the Image as a Pixel Stream
Due to the limited number of input pins on an FPGA, the fundus image, a very large 2-D matrix, must be sent as a serialised dataset. We send the fundus-image pixel data as a serial stream together with control signals, which indicate the location of each pixel within the image frame. The image is sent over a pixel control bus containing five signals, which are Boolean in nature. To illustrate the streaming-pixel protocol developed in our work, a 2-by-6-pixel grayscale image is shown with its pixel values. The image is serialised one line at a time, left to right. The resulting timing diagram shows the control signals and pixel data corresponding to the image in Fig. 2 [4, 9].

1.4 Preprocessing the Image
The received pixel stream must then be preprocessed: some noise is removed, and morphological erosion and dilation are applied. In dilation, pixels are added to object boundaries in an image, while in erosion pixels are removed from object boundaries. The size and shape of the structuring element used to process the image determine the number of pixels added to or removed from the objects [5, 9] (Fig. 3).

1.5 Grayscale Erosion When morphological erosion of the grayscale image is performed on intensity values, this block uses the Van Herk algorithm, shown in Fig. 4. The algorithm uses only 3 comparators to find the extrema of all the rows, and then uses a comparison tree to find the extremum of the row results. For structuring elements such as a line, rectangle, or square wider than 8 pixels, the Van Herk algorithm is used. All pixels in the structuring element must be set to one. The structuring element is divided into rows and then, using the Van Herk algorithm, the minimum of each row is found serially. The line memory also

Fig. 3 Van Herk implementation: a row line memory extracts 1-by-m row pixels, a Van Herk minimum/maximum stage processes them, line-memory buffers hold n-by-1 column pixels, and a comparison tree of depth log2(n) combines the results.
Fig. 4 Van Herk implementation: the pixel control bus feeds a mod-m counter, running min/max units, a pixel-data mirror buffer, and a FIFO delay balance before the merge and final min/max stage.


(Figure content: a line memory extracts an n-by-m pixel neighbourhood; rows 1 to n each feed a comparison tree of depth log2(m), and the row results feed a final comparison tree of depth log2(n))

Fig. 5 Grayscale dilation

adds zero padding up to a multiple of m if the size of the input frame is not a multiple of m pixels. Only 3 comparators in total for all rows are used in this implementation. It then computes the minimum of the row results using a comparison tree, if there is more than a single row. The running-backward and running-forward minima over the neighbourhood of each row are computed using the Van Herk kernel, so the pixel order in each row has to be reversed and buffered. This adds latency relative to the comparison-tree implementation [6, 9].
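The row-wise kernel described above can be illustrated in software. This is a sketch of the classic 1-D Van Herk/Gil–Werman max filter (the erosion case replaces max with min); the function name and test data are ours:

```python
def van_herk_max(f, m):
    """1-D Van Herk max filter: the maximum over each window f[x..x+m-1]
    using about 3 comparisons per pixel regardless of window size m."""
    n = len(f)
    g = [0] * n  # running max from the start of each m-block (forward scan)
    h = [0] * n  # running max toward the end of each m-block (backward scan)
    for i in range(n):
        g[i] = f[i] if i % m == 0 else max(g[i - 1], f[i])
    for i in range(n - 1, -1, -1):
        last_in_block = (i % m == m - 1) or (i == n - 1)
        h[i] = f[i] if last_in_block else max(h[i + 1], f[i])
    # Each window straddles at most two blocks: merge the two partial maxima.
    return [max(h[x], g[min(x + m - 1, n - 1)]) for x in range(n - m + 1)]

print(van_herk_max([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [4, 4, 5, 9, 9, 9]
```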

1.6 Grayscale Dilation As with grayscale erosion, when the structuring element is wider than 8 pixels the Van Herk algorithm is used; a comparison tree is used when the structuring element is narrower than 8 pixels. The architecture of the dilation operation is shown in Fig. 5, with the comparison tree in Fig. 6. The maxima of the horizontal lines are found by the algorithm in parallel; another comparison tree then calculates the maximum over the rows. If the selected neighbourhood is a rectangle m pixels wide, the comparison tree uses m − 1 comparators over log2(m) clock cycles. For example, for a rectangular neighbourhood 7 pixels wide, the comparison tree contains six comparators over three clock cycles [9].

2 Watershed Algorithm The watershed algorithm is used for image segmentation by finding the watershed lines. Imagine holes at each regional minimum, with water flooding from the bottom into these holes at a constant rate. The water level will rise

Fig. 6 Comparison tree


Fig. 7 Watershed algorithm (a)–(e)

uniformly over the topographic surface. When the rising water in one catchment basin is about to merge with a neighbouring catchment basin, a dam is built to prevent the merging. Flooding continues until only the tops of the dams are visible above the water line. The continuous dam boundaries are then called the watershed lines. These are the concepts used for segmentation in our research work [7, 9]. Watershed algorithms based on the watershed transform have mainly two variants. The first class contains the flooding-based watershed algorithms and is the traditional approach, while the second class contains the rainfall-based watershed algorithms. Many algorithms have been formulated in the two classes, but the connected-components based watershed algorithm shows very good performance compared with all the other algorithms. It belongs to the rainfall-based approach. It gives excellent segmentation results and meets the criterion of low computational complexity for hardware implementation, i.e., for real-time use [8, 9]. With reference to Fig. 7a–e, the following concepts are used in our work:

• The input image: usually the grayscale image
• The marker image
• Optional mask image
• Gradient image
• Output (final result)


The conventional watershed algorithm uses a hierarchical queue for the flooding process. This queue requires non-uniform memory accesses, and the complexity of the conventional algorithm becomes very high due to the need to handle a hierarchical queue. An efficient watershed algorithm based on connected components was developed by Bieniek and Moga. It does not require a hierarchical queue as the conventional implementation does. This algorithm gives the same segmentation results as the conventional watershed algorithm and has the advantages of lower complexity, a simple data structure, and short execution time. It connects every pixel to its lowest neighbour pixel, and all pixels connected to the same lowest neighbour form a segment. It uses a FIFO queue and a stack to perform the same function as the hierarchical queue of the conventional algorithm. The drawbacks of this algorithm for hardware implementation are the extra accesses to the FIFO queue and stack, which are difficult and inefficient to implement in an FPGA. The algorithm was slightly modified by Maruyama and Trieu to simplify the memory accesses, and that version is used in this research work. The concept of the connected-components based algorithm is as follows. A sample 6 × 6 image has 3 local minimum values indicated by dark boxes. If a pixel is not a local minimum, it is connected to its lowest neighbour as shown by arrows, where m indicates a local minimum. All pixels directed towards the same local minimum form a segment and are given the same label value [9].
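The "connect each pixel to its lowest neighbour" idea can be sketched as follows. This is a toy Python version (names ours) that ignores plateau handling, which the algorithm's Steps 1–3 address:

```python
import numpy as np

def lowest_neighbour_segments(img):
    """Each pixel points to its lowest 8-neighbour; pixels that drain to the
    same local minimum form one segment (plateaus of equal minima are not
    merged in this toy version)."""
    h, w = img.shape
    parent = {}
    for r in range(h):
        for c in range(w):
            best, best_v = (r, c), img[r, c]
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w and img[nr, nc] < best_v:
                        best, best_v = (nr, nc), img[nr, nc]
            parent[(r, c)] = best  # a local minimum points to itself

    def root(p):
        # Follow the arrows downhill until the local minimum is reached.
        while parent[p] != p:
            p = parent[p]
        return p

    labels, seg = {}, np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            m = root((r, c))
            seg[r, c] = labels.setdefault(m, len(labels) + 1)
    return seg

# Two basins separated by a ridge: expect two segments.
basins = lowest_neighbour_segments(np.array([[0, 1, 2, 1, 0],
                                             [1, 2, 3, 2, 1]]))
print(len(np.unique(basins)))  # 2
```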

2.1 The Algorithm Details The pseudo-code of the connected-components based watershed algorithm is presented below. In this algorithm, p represents a pixel, f is the input pre-processed (filtered and morphological-gradient) image, l is the segmented label image, f(p) denotes the gray-level value of p, n is a neighbour pixel of p, and f(n) denotes the gray-level value of that neighbour pixel. The array l[p] is used to store the labels, and v[p] stores the distance from the lowest plateau level or pixel. LMAX and VMAX denote the maximum label value and the maximum distance in the system, respectively. VMAX is the distance from the first pixel of the first row to the last pixel of the last row. Scan Step 2 and Scan Step 3 are used to decide whether to continue or stop the image scan for Step 2 and Step 3, respectively [9].

2.1.1 Step 1

The lowest neighbourhood of every pixel is found in Step 1. Initially, the array v[p] has the value zero for all elements. The input image is scanned from upper left to bottom right, and v[p] is set to '0' if the pixel is lower than or equal to its neighbourhood values; otherwise it is set to '1'. As shown by the dark box, the gray-level value '2' is the lowest in its whole neighbourhood, so v[p] is set to '0' for it. The gray value '5' has an equal neighbourhood, so v[p] is also set to '0' for it. The small plateau with 3 values of '8' is shaded light gray [9].

2.1.2 Step 2

The principle of Step 2 is that if a pixel is on a plateau and its neighbours point to one of the local minima, then the pixel points to that neighbour. To realise this principle, all pixels with v[p] not equal to 1 that have neighbour pixels on the same plateau with v[p] set to 1 in Step 1 are considered; then the shortest distance for every pixel on the plateau with respect to a non-zero v[p] neighbour of the plateau is calculated. All dark boxes lie on the same plateau. Here, value '2' is directed to its left neighbour '1', '3' is directed to its left neighbour '2', '4' is directed to its neighbour '3', and so on. During the upward scan, value '11' is directed to '10', '12' is directed to '11', and so on. The downward and upward scans continue until all the shortest plateau distances are calculated. Only 2 scans are required for this example to complete Step 2. The value of the distance d assigned during the downward scan may be overwritten in the upward scan when the distance d from a previous scan is higher than that of the lowest level located in the downward region [9].

2.1.3 Step 3

The labels are assigned to the array l[p] in this step. All elements of l[p] are initialised to zero. Labels are first given to the pixels on a local minimum plateau whose v[p] is zero, if their neighbourhood pixels with the same gray value have not yet been assigned any labels. These labels are then propagated to their neighbourhood pixels according to the values of v[p] to grow a region, until the region contains the centre of the local minimum. Similar regions are created for all the other local minima by the same technique. Different labels may be assigned on the same local minimum plateau, but they are overwritten in subsequent scans. For example, one figure refers to the first downward scan, with dark boxes indicating new label assignments. New labels propagate to the region elements according to the values of v[p]. As shown by the dark boxes, labels are carried in the upward direction. Label '3' (in bold) denotes that label '5' was assigned to that region in a previous scan but is overwritten by the correct value in this scan. The figures also show the label propagation by dark boxes. When no labels change compared with the previous scan, no more scans are needed to obtain the labelled image. A total of 6 scans are required in Step 3 for the given sample input image [9].


Find the area of the cup and disc and their ratio: Here, the areas of the optic cup and optic disc are calculated using the segmented data; the cup and disc areas are obtained using different thresholds. Then, using the obtained areas, the cup-to-disc ratio is calculated [9]. Determine the result: If the obtained ratio is less than 0.4, the fundus image is of a normal eye; if the ratio is higher, the fundus image is determined to be of a patient suffering from glaucoma. After this, the data is sent back to the PC [9].
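The area and ratio computation reduces to pixel counting on the segmented binary masks; a minimal sketch (the mask format and function names are ours, and the 0.4 threshold is the one stated above):

```python
def cup_to_disc_ratio(cup_mask, disc_mask):
    # Areas are simply pixel counts of the segmented binary masks.
    cup_area = sum(sum(row) for row in cup_mask)
    disc_area = sum(sum(row) for row in disc_mask)
    return cup_area / disc_area

def classify(cdr, threshold=0.4):
    # A CDR below the threshold indicates a normal eye; above, suspected glaucoma.
    return "normal" if cdr < threshold else "glaucoma"

cup = [[1, 1], [0, 0]]
disc = [[1, 1], [1, 1]]
print(classify(cup_to_disc_ratio(cup, disc)))  # glaucoma (CDR = 0.5)
```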

3 Simulation Results for the Watershed Algorithm The image was sent through the pixel bus after converting it into a grayscale image; the image-processing algorithm was then applied and the simulation results were obtained. The targeted FPGA device was of the Virtex-7 product family, part xc7vx485t ffg1761-2. MATLAB 2017b was used to convert the images to grayscale and send each image as a pixel stream. Vivado 2018.1 was used to synthesize the design, and ModelSim SE-64 10.5 was used to view the generated waveforms. The fundus image database was obtained from Friedrich-Alexander University; it was captured using a Canon CR-1 fundus camera with a 45° field of view. The database consists of 60 fundus images, of which 30 are of normal eyes and the other 30 of eyes suffering from glaucoma [9]. The grayscale image is sent to the targeted design using the pixel bus, the image processing is done on the targeted device, and the final images along with the results are sent back. The pre-processing, segmentation, and calculation were done in the targeted device and the results sent back through the pixel bus. In this section, we go through the images received at the early and final stages of image processing, the waveforms from ModelSim, and the elaborated and schematic designs generated by Vivado. The synthesized design is also obtained, and the final results are observed in MATLAB [9].

3.1 Normal Eyes

3.1.1 Case 1: For Normal Eye

(Results are shown for images 6, 1, and 2.)


Similarly, from the following set of images, which show the different stages when the method is applied to the fundus image of a glaucoma eye, we can observe the cupping more easily.

3.1.2 Case 2: For Moderate Glaucoma

(Results are shown for images 1, 6, and 4.)

3.1.3 Case 3: Glaucoma Eye

(Results are shown for images 2, 9, and 13.)

3.1.4 Input and Output Images

Simulation was carried out for the set of 60 images taken from the fundus database. The developed code was run and the following results were observed. Only a few images are shown here for the sake of illustration; similarly, all the input images taken from the database were considered, run, and their results obtained. The following images illustrate the different stages of the process. Three cases were considered, normal, moderate, and severe, and for each one a set of results is shown. These show the results obtained for the normal eye. We can observe from Figs. 8, 9, 10, 11, 12, 13, 14, 15 and 16 that dilation of the cup does not occur [9].

Fig. 8 (a) Original fundus image of normal eye, (b) grayscale Image, (c) after segmentation of OD, and (d) after segmentation of OC


Fig. 9 (a) Original fundus image of normal eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 10 (a) Original fundus image of normal eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 11 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 12 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 13 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC


Fig. 14 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 15 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

Fig. 16 (a) Original fundus image of glaucoma eye, (b) grayscale image, (c) after segmentation of OD, and (d) after segmentation of OC

4 Schematic Design Finally, for the total database, the following performance was obtained (Fig. 17).

4.1 ModelSim Signals Table 1 indicates the results obtained; it gives the comparative statistics of the final result analysis. A further table gives the results obtained for the glaucoma images (not shown), and another the results obtained for the healthy images (not shown). Simulations were carried out for 60 images from the database; only 30 are shown here for convenience [9] (Fig. 18).


Fig. 17 Schematic design

Table 1 Performance characteristics using watershed algorithm-1

Type of images   No. of images   Correct diagnosis   Percentage
Normal eye       30              29                  96.67
Glaucoma eye     30              29                  96.67
Total            60              58                  96.67

Fig. 18 ModelSim signals


5 Conclusions Research work was carried out on the detection of glaucoma using the watershed algorithm implemented in FPGA-HDL, with the modelling done in the ModelSim and Xilinx environments. The results show the efficiency of the developed methods and concepts. Further, the simulation results were compared and show better performance, proving the efficacy of the developed methodology (Figs. 19 and 20). Acknowledgement We would like to thank the VTU Research Centre, Government BDT College of Engineering, Davanagere, Karnataka for providing the dataset for this research work. The authors state that there is no conflict of interest.

Fig. 19 On-chip power statistics (dynamic power 5.110 W, 95%; device static power 0.284 W, 5%)

Fig. 20 Synthesis results (dynamic power breakdown among signals, logic 1.950 W, BRAM 0.238 W, and I/O 0.851 W)


References
1. O. Sheeba and A. Sukeshkumar: Neural Networks in the Diagnosis of Diabetic Retinopathy. Int. Conf. on Modelling & Simulation (MS'09), Trivandrum, pp. 256–259 (2009).
2. C. Angulo, L. González, A. Catalá, and F. Velasco: Multi-classification with tri-class support vector machines: A review. Proc. 9th Int. Work-Conf. Artif. Neural Netw. (IWANN 2007), San Sebastián, Spain, pp. 276–283, 20–22 (2007).
3. N. Cristianini and C. Campbell: Dynamically adapting kernels in support vector machines. Proc. Adv. Neural Inf. Process. Syst., pp. 204–210 (1998).
4. E.H. Galilea, G. Santos-García, I.F. Suárez-Bárcena: Identification of glaucoma stages with artificial neural networks using retinal nerve fibre layer analysis and visual field parameters. Innovations in Hybrid Intelligent Systems (Advances in Soft Computing), Springer, Berlin, Heidelberg, Vol. 44, pp. 418–424 (2007).
5. O. Sheeba, Jithin George, P.K. Rajin, Nisha Thomas, and Sherin George: Glaucoma Detection Using Artificial Neural Network. IACSIT International Journal of Engineering and Technology, Vol. 6, No. 2, pp. 158–161 (April 2014).
6. Aliaa Abdel-Haleim Abdel-Razik Youssif, Atef Zaki Ghalwash, Amr Ahmed Sabry Abdel-Rahman Ghoneim: Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter. IEEE Trans. on Medical Imaging, Vol. 27, Issue 1, pp. 11–18 (Jan. 2008).
7. Meindert Niemeijer, Michael D. Abramoff, Bram van Ginneken: Fast detection of the optic disc and fovea in colour fundus photographs. Med. Image Anal., Vol. 13, No. 6, pp. 859–870 (Dec. 2009).
8. Nilanjan Dey, Anamitra Bardhan Roy, Moumita Pal, Achintya Das: FCM Based Blood Vessel Segmentation Method for Retinal Images. Int. Jour. of Comp. Sci. & Network (IJCSN), ISSN 2277-5420, Vol. 1, Issue 3, pp. 109–116 (2012).
9. http://www.1hoosh.com

Solutions of Viral Dynamics in Hepatitis B Virus Infection Using HPM S. Balamuralitharan and S. Vigneshwari

1 Introduction HBV was discovered in 1965 by Dr. Baruch Blumberg, who won a Nobel Prize for the discovery in 1976 [1]. Hepatitis B is an infectious disease that affects the liver. HBV belongs to the Hepadnaviridae family and is a DNA virus. Malaise, anorexia, and nausea are symptoms of hepatitis. Hepatitis B is a contagious disease [2]; the virus is mainly transmitted through organ transplants and contact with infected blood. Chronic HBV has affected 350 million people [3]. Testing for hepatitis requires one blood sample. The hepatitis B panel has three parts, and all three test results are necessary to determine whether a person is infected. If the hepatitis B surface antigen result is positive, the person is infected with HBV [4]. The hepatitis B surface antibody test checks whether the person is immune and able to fight the virus. The hepatitis B core antibody result shows past or current hepatitis B infection [5]. Acute infection has no specific treatment, and some infected people show no symptoms [6]. Sometimes acute infection damages the liver and causes death, though in the early stage the infection can resolve by itself. One of the most dangerous aspects of hepatitis B is that infected persons may be unaware of the disease while still being at risk of spreading it. Chronic infection may develop into hepatocellular carcinoma and liver cirrhosis [7]. Vaccination is mainly targeted at people at risk of developing chronic infection; infants receive their first dose within 24 h and two boosters during childhood [8]. People who work in health care and people in prisons have higher chances of

S. Balamuralitharan () · S. Vigneshwari Department of Mathematics, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_43


S. Balamuralitharan and S. Vigneshwari

getting infected. Lamivudine, interferon alfa, entecavir, and adefovir are the treatment options for hepatitis B virus [9].

2 Mathematical Modelling The study of virus dynamics is very helpful for explaining experimental results and for understanding the biological mechanisms [10]. The in-host model of the interaction between the infected and uninfected liver cells and the virus is derived from Fig. 1, where T denotes the target uninfected cells, I the infected cells, and V the hepatitis B virus. β is the rate at which target uninfected cells become infected, which happens when the target uninfected cells and the virus interact; δ is the death rate of infected liver cells; α is the production rate of the virus; and c is the death rate of the virus. New target cells are produced at the constant rate s, and target cells die before getting infected at the natural death rate dT [11]. Under these assumptions the model becomes [12]

dT/dt = s − βV T − dT T
dI/dt = βV T − δI                    (1)
dV/dt = αI − cV

The initial and boundary conditions are T(0) = 1.1e6, I(0) = 0.2, V(0) = 0.5, and T1 = 0, I1 = 0, V1 = 0.
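The system (1) can also be integrated numerically as a cross-check on the analytical work; a minimal pure-Python Runge–Kutta sketch, where the parameter values are illustrative placeholders and not fitted values from this work:

```python
def hbv_rhs(T, I, V, s=2.6e5, beta=1.2e-10, dT=0.01, delta=0.053, alpha=150.0, c=0.67):
    # Right-hand side of system (1); parameter values are illustrative only.
    return (s - beta * V * T - dT * T,
            beta * V * T - delta * I,
            alpha * I - c * V)

def rk4(T, I, V, dt=0.01, steps=1000):
    # Classical 4th-order Runge-Kutta integration of the three coupled ODEs.
    for _ in range(steps):
        k1 = hbv_rhs(T, I, V)
        k2 = hbv_rhs(T + dt/2*k1[0], I + dt/2*k1[1], V + dt/2*k1[2])
        k3 = hbv_rhs(T + dt/2*k2[0], I + dt/2*k2[1], V + dt/2*k2[2])
        k4 = hbv_rhs(T + dt*k3[0], I + dt*k3[1], V + dt*k3[2])
        T += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        I += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        V += dt/6*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return T, I, V

# Start from the stated initial conditions T(0)=1.1e6, I(0)=0.2, V(0)=0.5
T, I, V = rk4(1.1e6, 0.2, 0.5)
```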

Fig. 1 Compartmental diagram for basic virus dynamics model

2.1 Basic Concepts of the Homotopy Perturbation Method
Let LT be the linear terms of the system and NT the nonlinear terms, respectively. The general homotopy is constructed as follows:

(1 − p)LT + p(LT + NT) = 0          (2)

Hence we collect like terms of the powers p^0, p^1, p^2, .... This gives linear and nonlinear equations; solving them yields the solution. For solving nonlinear differential equations using the homotopy perturbation method [13], the three variables are expanded in power series:

T = T0 + pT1 + p^2 T2 + ...
I = I0 + pI1 + p^2 I2 + ...          (3)
V = V0 + pV1 + p^2 V2 + ...

Then setting p = 1:

T = lim_{p→1} T = T0 + T1 + ...
I = lim_{p→1} I = I0 + I1 + ...      (4)
V = lim_{p→1} V = V0 + V1 + ...

Then the approximate solution is obtained. Here the series is convergent, so it is enough to take the first two terms of the series for this nonlinear system. This is the basic procedure of the homotopy perturbation method [14].

2.2 Analytical Solution of HBV In order to obtain the solutions of Eq. (1), we first construct a homotopy as follows:

(1 − p)(dT/dt + dT T) + p(dT/dt − s + βV T + dT T) = 0
(1 − p)(dI/dt + δI) + p(dI/dt − βV T + δI) = 0          (5)
(1 − p)(dV/dt + cV) + p(dV/dt − αI + cV) = 0


The power series solutions of the system of Eq. (1) are

T = T0 + pT1 + p^2 T2 + ...
I = I0 + pI1 + p^2 I2 + ...          (6)
V = V0 + pV1 + p^2 V2 + ...

Substituting Eq. (6) into Eq. (5), we get

(1 − p)[d(T0 + pT1 + p^2 T2 + ...)/dt + dT (T0 + pT1 + p^2 T2 + ...)]
  + p[d(T0 + pT1 + p^2 T2 + ...)/dt − s + β(V0 + pV1 + p^2 V2 + ...)(T0 + pT1 + p^2 T2 + ...)
  + dT (T0 + pT1 + p^2 T2 + ...)] = 0

(1 − p)[d(I0 + pI1 + p^2 I2 + ...)/dt + δ(I0 + pI1 + p^2 I2 + ...)]
  + p[d(I0 + pI1 + p^2 I2 + ...)/dt − β(V0 + pV1 + p^2 V2 + ...)(T0 + pT1 + p^2 T2 + ...)
  + δ(I0 + pI1 + p^2 I2 + ...)] = 0                       (7)

(1 − p)[d(V0 + pV1 + p^2 V2 + ...)/dt + c(V0 + pV1 + p^2 V2 + ...)]
  + p[d(V0 + pV1 + p^2 V2 + ...)/dt − α(I0 + pI1 + p^2 I2 + ...)
  + c(V0 + pV1 + p^2 V2 + ...)] = 0

From Eq. (7), comparing the coefficients of like powers of p:

p^0: dT0/dt + dT T0 = 0
p^0: dI0/dt + δI0 = 0
p^0: dV0/dt + cV0 = 0
p^1: dT1/dt + dT T1 = s − βV0 T0


p^1: dI1/dt + δI1 = βV0 T0
p^1: dV1/dt + cV1 = αI0

The approximate solutions of (1) are:

T(t) = lim_{p→1} T = T0 + T1
I(t) = lim_{p→1} I = I0 + I1
V(t) = lim_{p→1} V = V0 + V1

The analytical solutions of the nonlinear system for the given model are:

T(t) = 1.1e6 e^(−dT t) + s/dT + (β(0.5)(1.1e6)/c) e^(−(c+dT)t) − (s/dT + β(0.5)(1.1e6)/c) e^(−dT t)

I(t) = 0.2 e^(−δt) + (β(0.5)(1.1e6)/(−c − dT + δ)) e^(−(c+dT)t) − (β(0.5)(1.1e6)/(−c − dT + δ)) e^(−δt)

V(t) = 0.5 e^(−ct) + (α(0.2)/(−δ + c)) e^(−δt) − (α(0.2)/(−δ + c)) e^(−ct)
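The first-order HPM approximations can be evaluated directly; a sketch that checks they reproduce the initial conditions at t = 0 (the parameter values here are illustrative, drawn from the ranges used in Sect. 3):

```python
import math

def hpm_solution(t, s=1.0, beta=0.01, dT=0.1, delta=0.06, alpha=45.0, c=0.5):
    """First-order HPM approximations T0+T1, I0+I1, V0+V1 obtained by
    solving the p^0 and p^1 equations; parameter values are illustrative."""
    T0, I0, V0 = 1.1e6, 0.2, 0.5
    K = beta * V0 * T0  # constant coefficient of the coupling term beta*V0(t)*T0(t)
    T = (T0 * math.exp(-dT * t) + s / dT
         + (K / c) * math.exp(-(c + dT) * t)
         - (s / dT + K / c) * math.exp(-dT * t))
    I = (I0 * math.exp(-delta * t)
         + K / (delta - c - dT) * (math.exp(-(c + dT) * t) - math.exp(-delta * t)))
    V = (V0 * math.exp(-c * t)
         + alpha * I0 / (c - delta) * (math.exp(-delta * t) - math.exp(-c * t)))
    return T, I, V

print(hpm_solution(0.0))  # recovers the initial conditions (1.1e6, 0.2, 0.5)
```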

3 Results and Discussion From Fig. 2, as the infection rate decreases, the target uninfected cells increase, for the values β = 0.01, 0.05 and 0.1. We have taken new parameter values to compare with the literature [6]. As a result, the virus infects the liver cells at a lower rate, and the hepatitis B virus decreases gradually. From Fig. 2, as the infection rate increases, the infected cells increase for β = 0.01, 0.05 and 0.1. As a result, the liver cells become damaged, the hepatitis B virus increases, and the infection persists. From Fig. 3, as the target uninfected cells decrease, the death rate also decreases, for dT = 0.1, 0.06 and 0.01. From Fig. 3, as the death rate increases, the infected cells also increase for dT = 0.1, 0.06 and 0.01.


Fig. 2 Parameter estimation of target uninfected cells and infected cells for infected rate

From Fig. 4, as the death rate of infected cells increases, the target infected cells increase, for δ = 0.06, 0.10 and 0.25. As a result, liver cells are damaged and the hepatitis B virus persists. From Fig. 4, as the death rate of infected cells increases, the hepatitis B virus increases for δ = 0.06, 0.10 and 0.25. We have taken new parameter values. As a result, the infected cells start infecting the liver cells spontaneously and the hepatitis B virus increases.


Fig. 3 Parameter estimation of target uninfected cells and infected cells for death rate

From Fig. 5, as long as there is virus production, the hepatitis B virus persists, for α = 45, 100 and 164. As a result, the hepatitis B virus increases, damaging the liver cells. From Fig. 5, as the infected cells increase, the virus death rate also increases, for c = 0.5, 0.28 and 1. From Fig. 6, the hepatitis B virus decreases and increases for c = 0.5, 0.28 and 1. From Fig. 6, as the virus death rate increases, the target uninfected cells increase, for c = 0.5, 0.28 and 1.


Fig. 4 Parameter estimation for target infected cells and hepatitis B virus death rate of infected cells


Fig. 5 Parameter estimation for hepatitis B virus for virus production rate and parameter estimation for infected cells for virus death rate


Fig. 6 Parameter estimation for hepatitis B virus for death rate and parameter estimation for target uninfected cells of virus death rate

4 Conclusions In this work, the nonlinear differential equations of a hepatitis B virus model are solved using HPM. The analytical solution obtained by this method agrees satisfactorily with the exact results for these models. For solving nonlinear problems, HPM is very helpful and gives quickly convergent approximations that lead to the exact solution. Future work would solve systems of more than three equations using other methods such as the homotopy analysis method, the variational iteration method, and the modified variational method.


References
1. M. Aniji, N. Kavitha and S. Balamuralitharan: Approximate solutions for HBV infection with stability analysis using LHAM during antiviral therapy. Boundary Value Problems 2020:80 (2020). https://doi.org/10.1186/s13661-020-01373-w
2. S. Geethamalini, S. Balamuralitharan: Dynamical analysis of EIAV infection with cytotoxic T-lymphocyte immune response delay. Results in Applied Mathematics 2, 100025 (2019).
3. V. Geetha, S. Balamuralitharan, S. Geethamalini, M. Radha, and A. Rathinasamy: Analytic solutions of the deterministic SEIA worm model by homotopy perturbation method. AIP Conference Proceedings 2112, 020100 (2019).
4. M. Radha, S. Balamuralitharan, S. Geethamalini, V. Geetha, and A. Rathinasamy: Analytic solutions of the stochastic SEIA worm model by homotopy perturbation method. AIP Conference Proceedings 2112, 020050 (2019).
5. M. Aniji, N. Kavitha, and S. Balamuralitharan: Analytical solution of SEICR model for Hepatitis B virus using HPM. AIP Conference Proceedings 2112, 020024 (2019).
6. S. Mayilvaganan and S. Balamuralitharan: Analytical solutions of influenza diseases model by HPM. AIP Conference Proceedings 2112, 020008 (2019).
7. S. Mayilvaganan and S. Balamuralitharan: A Mathematical Modelling and Analytical Solutions of Nonlinear Differential Equations Model with using Homotopy Perturbation Method. ARPN Journal of Engineering and Applied Sciences 13(18), 4966–4970 (September 2018).
8. S. Geethamalini and S. Balamuralitharan: Semianalytical solutions by homotopy analysis method for EIAV infection with stability analysis. Advances in Difference Equations 2018:356 (2018).
9. M. Radha and S. Balamuralitharan: Sensitivity and Stability Analysis of Reproduction Number on the Epidemic Model of Ebola Virus in comparing Environmental Changes with the Critical Immunity Levels in Population Size. Research Journal of Biotechnology 13(10) (October 2018).
10. S. Geethamalini and S. Balamuralitharan: Equine infectious anaemia virus dynamics and stability analysis: The role of agar gel immunodiffusion test and enzyme immunoabsorbent assay. Research Journal of Biotechnology 13(5), 28–33 (May 2018).
11. S. Balamuralitharan and S. Geethamalini: Solutions of the Epidemic of EIAV Infection by HPM. IOP Conf. Series: Journal of Physics: Conf. Series 1000, 012023 (2018). https://doi.org/10.1088/1742-6596/1000/1/012023
12. A. Govindarajan, S. Balamuralitharan, T. Sundaresan: HPM of Estrogen Model on the Dynamics of Breast Cancer. IOP Conf. Series: Journal of Physics: Conf. Series 1000, 012095 (2018). https://doi.org/10.1088/1742-6596/1000/1/012095
13. S. Balamuralitharan and S. Geethamalini: Parameter Estimation of Model for EIAV Infection Using HPM. International Journal of Pure and Applied Mathematics 113(12), 196–204 (2017).
14. S. Geethamalini and S. Balamuralitharan: Homotopy Perturbation Method for Solving a Model for EIAV Infection. International Journal of Control Theory and Applications (IJCTA) 9(28), 439–446 (2016).

A Mathematical Modeling of Dengue Fever for the Dynamics System Using HAM S. Balamuralitharan and Manjusree Gopal

1 Introduction Dengue is a painful mosquito-borne viral infection that occurs mostly in tropical and subtropical areas around the world. DENV-1, DENV-2, DENV-3, and DENV-4 are the four serotypes of the disease, distinguished by their antigenicity. The virus is an RNA virus of the family Flaviviridae and is transmitted by Aedes aegypti mosquitoes. It is also known as break-bone disease because the patient feels as if his bones are about to break [1–4]. The diagnosis of dengue fever is based on symptoms such as nausea, pain behind the eyes, emesis, rashes, severe body pain, and a decrease in white blood cell count. Recovery from the disease may take up to 7 days. Some cases of the disease become deadly, known either as dengue hemorrhagic fever [7], which leads to heavy bleeding, a decrease in blood platelet count, and leakage of blood plasma, or as dengue shock syndrome, in which hazardous hypotension occurs [8–10]. Female Aedes mosquitoes are the primary carriers of the virus: once infected, a mosquito acts as a mediator carrying the virus to humans for the rest of its lifetime. Affected humans are mostly the carriers. Since these mosquitoes are daytime feeders, early morning and the evening before sundown are their peak biting periods. Aedes albopictus is another type of mosquito that can act as a carrier of the virus. Microbiological laboratory tests are available to confirm dengue fever. A vaccine for dengue fever is produced by Sanofi under the name Dengvaxia, and 15 June each year is celebrated as Anti-Dengue Day. Prevention depends on control of, and protection from, the mosquitoes that spread the dengue virus. The simplest way to stay

S. Balamuralitharan () · M. Gopal Department of Mathematics, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_44


safe from this disease is to ensure that mosquitoes do not breed in our surroundings. Proper detection of, and care for, affected patients can reduce the mortality rate to a great extent [13]. It has been estimated that about 400 million dengue fever cases are reported all over the world.

2 Mathematical Modeling

The SIR mathematical model mimics the spread of the dengue virus between host and carrier. Here X denotes the susceptible human population, Y the infected human population, and Z the recovered human population. We take α as the mortality rate of the affected human community, b as the human birth rate, γ as the cure rate of affected humans, β as the probability of an uninfected human becoming infected, and δ as the death rate of vectors (Fig. 1). The SIR model used by Syafruddin and Noorani [1] simplifies to

dx/dt = b(1 − x) − αxz
dy/dt = αxz − βy                              (1)
dz/dt = γ(1 − z)y − δz
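Before applying HAM, the dynamics of system (1) can be explored numerically. The sketch below integrates the system with a classical fourth-order Runge–Kutta scheme over 150 days (the horizon considered later in the text); the parameter values and initial proportions are illustrative assumptions, not values from the paper.

```python
def dengue_rhs(state, b, alpha, beta, gamma, delta):
    """Right-hand side of system (1)."""
    x, y, z = state
    return [b * (1 - x) - alpha * x * z,
            alpha * x * z - beta * y,
            gamma * (1 - z) * y - delta * z]

def rk4_step(f, state, dt):
    """One classical Runge-Kutta 4 step for a list-valued state."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + dt * 0.5 * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * p + 2 * q + r)
            for s, a, p, q, r in zip(state, k1, k2, k3, k4)]

# Illustrative (assumed) parameters and initial proportions
params = dict(b=0.03, alpha=0.5, beta=0.14, gamma=0.3, delta=0.1)
state = [0.99, 0.01, 0.0]   # x (susceptible), y (infected), z (recovered)
dt, days = 0.1, 150
for _ in range(int(days / dt)):
    state = rk4_step(lambda s: dengue_rhs(s, **params), state, dt)
print(state)
```

With these assumed rates the populations remain bounded and non-negative over the simulated window, which is a useful sanity check on any series approximation derived later.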

2.1 Basic Concept of HAM

Consider R[M(t)] = 0, where R is any operator. Let M0(t) denote the initial estimate of the exact solution M(t). Let k ∈ [0, 1] be the embedding parameter, h ≠ 0 the auxiliary parameter, and H(t) ≠ 0 the auxiliary function [5, 6, 11, 12]. For k ∈ [0, 1] we can establish

Fig. 1 Compartmental diagram for dengue fever


(1 − k) LT[ψ(t; k) − M0(t)] − khH(t) R[ψ(t; k)] = Ĥ[ψ(t; k); M0(t), H(t), h, k].   (2)

We are free to choose the auxiliary linear operator LT, the auxiliary parameter h, the auxiliary function H(t), and the initial guess M0(t). Setting (2) equal to zero gives

(1 − k) LT[ψ(t; k) − M0(t)] = khH(t) R[ψ(t; k)],   (3)

called the zero-order deformation equation. When k = 0, Eq. (3) becomes

ψ(t; 0) = M0(t),   (4)

and when k = 1,

ψ(t; 1) = M(t).   (5)

Using a Taylor series, we can expand ψ(t; k) as

ψ(t; k) = M0(t) + Σ_{n=1}^{∞} Mn(t) k^n,   (6)

where

Mn(t) = (1/n!) ∂^n ψ(t; k)/∂k^n |_{k=0}.   (7)



For n = 1, 2, . . ., we assume that ∂^n ψ(t; k)/∂k^n |_{k=0} exists.

Suppose Eq. (6) converges at k = 1. Under these assumptions, the series becomes

ψ(t; 1) = M0(t) + Σ_{n=1}^{∞} Mn(t).   (8)

For the higher-order deformation equation, define the vector

M⃗_r(t) = {M0(t), M1(t), M2(t), . . ., Mr(t)}.   (9)

Finally, the deformation equation of nth order is

LT[Mn(t) − χn Mn−1(t)] = hH(t) ℜn(M⃗_{n−1}(t)),   (10)


where

ℜn(M⃗_{n−1}(t)) = (1/(n − 1)!) ∂^{n−1} R[ψ(t; k)]/∂k^{n−1} |_{k=0}   (11)

and

χn = 0 if n ≤ 1,  χn = 1 if n > 1.
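The recursion (10)–(11) can be exercised on a toy problem. The sketch below applies the higher-order deformation equation with the choices LT = d/dt, H(t) = 1, and the common value h = −1 to the test equation R[M] = M′ + M = 0 with M0(t) = M(0) = 1, storing each term as a polynomial coefficient list. Under these choices the summed terms reproduce the Taylor series of the exact solution e^{−t}. The operator choices and the test equation are illustrative assumptions, not taken from the paper.

```python
import math

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_deriv(p):
    return [i * p[i] for i in range(1, len(p))] or [0.0]

def poly_integ(p):
    # integral from 0 to t of the polynomial
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def poly_eval(p, t):
    return sum(c * t ** i for i, c in enumerate(p))

def ham_series(m0, h, n_terms):
    """Sum the HAM terms M_n for R[M] = M' + M = 0 with LT = d/dt, H(t) = 1.

    With these choices Eq. (10) reduces to
    M_n(t) = chi_n * M_{n-1}(t) + h * Integral_0^t (M'_{n-1} + M_{n-1}) d tau.
    """
    terms = [list(m0)]
    for n in range(1, n_terms):
        prev = terms[-1]
        r_n = poly_add(poly_deriv(prev), prev)   # R applied to M_{n-1}
        chi = 0.0 if n <= 1 else 1.0             # chi_n as defined above
        m_n = poly_add([chi * c for c in prev],
                       [h * c for c in poly_integ(r_n)])
        terms.append(m_n)
    total = [0.0]
    for p in terms:
        total = poly_add(total, p)
    return total

series = ham_series([1.0], h=-1.0, n_terms=10)
print(abs(poly_eval(series, 0.5) - math.exp(-0.5)))  # small truncation error
```

Choosing h = −1 makes each M_n coincide with a Taylor term of the exact solution; other admissible h values change how fast the series converges, which is exactly the convergence-control property of HAM.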

2.2 Analytic Solution of the Dengue Fever Model

To obtain the analytic solutions of (1), we construct the homotopy as follows:

ẋ = b(1 − x) − αxz
ẏ = αxz − βy                                  (12)
ż = γ(1 − z)y − δz

This is the Syafruddin and Noorani model given above. Let

Xm(t) = χm Xm−1(t) − ∫_0^t [X′_{m−1}(τ) + bX_{m−1}(τ) + α Σ_{k=0}^{m−1} Xk(τ) Z_{m−1−k}(τ) − (1 − χm) b] dτ.   (13)

If m = 1, we have

X1(t) = χ1 X0(t) − ∫_0^t [X′_0(τ) + bX0(τ) + αX0 Z0(τ) − (1 − χ1) b] dτ

X1(t) = (b − bX0 − αX0 Z0) t.

Now,

Ym(t) = χm Ym−1(t) − ∫_0^t [Y′_{m−1}(τ) − α Σ_{k=0}^{m−1} Xk(τ) Z_{m−1−k}(τ) + βY_{m−1}(τ)] dτ.   (14)


If m = 1,

Y1(t) = χ1 Y0(t) − ∫_0^t [Y′_0(τ) − αX0 Z0(τ) + βY0(τ)] dτ

Y1(t) = (αX0 Z0 + βY0) t.

Similarly,

Zm(t) = χm Zm−1(t) − ∫_0^t [Z′_{m−1}(τ) − γY_{m−1}(τ) + γ Σ_{k=0}^{m−1} Zk(τ) Y_{m−1−k}(τ) + δZ_{m−1}(τ)] dτ.   (15)

If m = 1,

Z1(t) = χ1 Z0(t) − ∫_0^t [Z′_0(τ) − γY0(τ) + γZ0 Y0(τ) + δZ0(τ)] dτ

Z1(t) = (γY0 + γZ0 Y0 + δZ0) t.

If m = 2, substituting the values of X1, Y1, and Z1 into the equations, we get

X2(t) = χ2 X1(t) − ∫_0^t [X′_1(τ) + bX1(τ) + α Σ_{k=0}^{1} Xk(τ) Z_{1−k}(τ) − (1 − χ2) b] dτ

X2(t) = X1(t) − ∫_0^t [(b − bX0 − αX0 Z0) + b(b − bX0 − αX0 Z0) τ + α(X0 Z1 + X1 Z0) − (1 − 1) b] dτ

X2(t) = −[b(b − bX0 − αX0 Z0) + αX0(γY0 + γZ0 Y0 + δZ0) + αZ0(b − bX0 − αX0 Z0)] t²/2


Y2(t) = χ2 Y1(t) − ∫_0^t [Y′_1(τ) − α Σ_{k=0}^{1} Xk(τ) Z_{1−k}(τ) + βY1(τ)] dτ

Y2(t) = Y1(t) − ∫_0^t [(αX0 Z0 + βY0) − α(X0 Z1 + X1 Z0) + β(αX0 Z0 + βY0) τ] dτ

Y2(t) = [αX0(γY0 + γZ0 Y0 + δZ0) − αZ0(b − bX0 − αX0 Z0) − β(αX0 Z0 + βY0)] t²/2

Z2(t) = χ2 Z1(t) − ∫_0^t [Z′_1(τ) − γY1(τ) + γ Σ_{k=0}^{1} Zk(τ) Y_{1−k}(τ) + δZ1(τ)] dτ

Z2(t) = Z1(t) − ∫_0^t [(γY0 + γZ0 Y0 + δZ0) − γ(αX0 Z0 + βY0) τ + γ(Z0 Y1 + Z1 Y0) + δ(γY0 + γZ0 Y0 + δZ0) τ] dτ

Z2(t) = [γ(αX0 Z0 + βY0) − γZ0(αX0 Z0 + βY0) − γY0(γY0 + γZ0 Y0 + δZ0) − δ(γY0 + γZ0 Y0 + δZ0)] t²/2

where

X = X0 + X1, Y = Y0 + Y1, Z = Z0 + Z1.


Analytical solutions of the given nonlinear system:

X = (b − bX0 − αX0 Z0) t − [b(b − bX0 − αX0 Z0) + αX0(γY0 + γZ0 Y0 + δZ0) + αZ0(b − bX0 − αX0 Z0) − β(αX0 Z0 + βY0)] t²/2

Y = (αX0 Z0 + βY0) t + [αX0(γY0 + γZ0 Y0 + δZ0) − αZ0(b − bX0 − αX0 Z0) − β(αX0 Z0 + βY0)] t²/2

Z = (γY0 + γZ0 Y0 + δZ0) t + [γ(αX0 Z0 + βY0) − γZ0(αX0 Z0 + βY0) − γY0(γY0 + γZ0 Y0 + δZ0) − δ(γY0 + γZ0 Y0 + δZ0)] t²/2
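To make the series concrete, the sketch below evaluates the first-order truncation X ≈ X0 + X1·t, Y ≈ Y0 + Y1·t, Z ≈ Z0 + Z1·t, with the coefficients X1, Y1, Z1 taken exactly as printed in the derivation above. The numeric parameter values and initial conditions are illustrative assumptions, not values used in the paper.

```python
def first_order_series(x0, y0, z0, b, alpha, beta, gamma, delta, t):
    """First-order truncation of the series solution, coefficients as printed."""
    x1 = b - b * x0 - alpha * x0 * z0              # X1 coefficient
    y1 = alpha * x0 * z0 + beta * y0               # Y1 coefficient
    z1 = gamma * y0 + gamma * z0 * y0 + delta * z0  # Z1 coefficient
    return (x0 + x1 * t, y0 + y1 * t, z0 + z1 * t)

# Illustrative values only
x, y, z = first_order_series(0.9, 0.08, 0.02,
                             b=0.05, alpha=0.75, beta=0.1,
                             gamma=0.4, delta=0.3, t=0.1)
print(x, y, z)
```

Because the series is a truncation about t = 0, such an evaluation is meaningful only for small t; for longer horizons the higher-order terms (or a numerical integration) are needed.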

3 Result and Discussion

In Fig. 2, b represents the human birth rate, and we consider data for 150 days. As the value of b increases, the susceptible human population decreases. Here α represents the death rate of the infected human community; as α increases, the susceptible human population decreases. Figure 3 gives the parameter estimation of the susceptible human population for the mortality rate of carriers, where δ denotes the mortality rate of vectors. As the vector death rate increases, the susceptible human population decreases. The susceptible population does not change as the recovery rate of the infected population changes. The parameter estimation of the infected human population for the human birth rate is also shown: the infected human population decreases as the birth rate increases. From Fig. 4 it is clear that as the affected human death rate increases, the affected human population decreases. The parameter estimation of the affected human population for the death rate of vectors is also given; there is a slight increase in the affected human community as the mortality rate of vectors increases. Figure 5 shows the parameter estimation of the infected human population for the probability of an uninfected person becoming infected, denoted by β; the infected human population is not affected by changes in the recovery rate of the infected human population. In Fig. 6, as the mortality rate of the affected human community, α, increases, the recovered human population decreases. As the cure rate of the infected human community, γ, increases, the recovered human population also increases. As the vector death rate δ increases, the recovered human population decreases. Figure 7 shows the parameter estimation of the recovered human population for the probability of an uninfected person becoming infected: as β increases, the recovered human population also increases. The homotopy analysis method is very useful for finding solutions of nonlinear differential equations.
The analytic approximations given by this method are more accurate and useful than those of other methods. The parameter values we have taken

470

S. Balamuralitharan and M. Gopal

Fig. 2 Parameter estimation for susceptible human population for birth rate and for death rate of infected human population

Fig. 3 Parameter estimation for susceptible human population for death rate of vectors and infected human population for human birth rate


Fig. 4 Parameter estimation for affected human population for mortality rate of affected human population and infected human population for death rate of vectors

Fig. 5 Parameter estimation for infected population for probability of uninfected being infected and recovered human community for mortality rate of affected human population


Fig. 6 Parameter estimation for recovered human population for recovery rate of infected human population and for death rate of vectors

Fig. 7 Parameter estimation for recovered human population for probability of uninfected becoming infected


yield a more precise solution; the values we considered differ from those used in other methods. For comparing the susceptible, affected, and cured human populations with respect to each parameter value, this method works effectively. The main aim of such mathematical models is to gain an idea of the spread of the disease and to take sufficient precautions and preventive measures against the virus. Thus, the HAM approach to this mathematical model will be useful for further studies.

4 Conclusion

In this paper, we solve the nonlinear differential equations of the dengue fever model using HAM. The danger of dengue fever is explained using SIR models. The analytical approximations of the dengue fever model are consistent and confirm the potential and power of the homotopy analysis method. By selecting suitable values of the auxiliary parameters, HAM gives an easy way to control and direct the convergence of the series. In the future, this method can be applied to nonlinear differential equations in many fields, and we will try to find semianalytic solutions with stability analysis for systems of more than three equations.

References

1. Side Syafruddin and Salmi Md. Noorani, A SIR Model for Spread of Dengue Fever Disease (Simulation for South Sulawesi, Indonesia, and Selangor, Malaysia), World Journal of Modelling and Simulation, Vol. 9(2), 96–105, 2013.
2. Side Syafruddin and Salmi Md. Noorani, SEIR Model for Transmission of Dengue Fever in Selangor Malaysia, International Journal of Molecular Science, Vol. 9, 380–389, 2012.
3. Zhilan Feng and Jorge X. Velasco-Hernández, Competitive exclusion in a vector-host model for the Dengue fever, Journal of Mathematical Biology, Vol. 35, 523–544, 1997.
4. M. Derouich, A. Boutayeb, and E.H. Twizell, A model of dengue fever, Biomedical Engineering Online, 1–10, 2003.
5. S. Liao, Beyond Perturbation: Introduction to the Homotopy Analysis Method, Boca Raton: Chapman & Hall/CRC, 2004.
6. Fadi Awawdeh, A. Adawi, Z. Mustafa, Solutions of the SIR models of epidemics using HAM, Chaos, Solitons & Fractals, Vol. 42, 3047–3052, 2009.
7. N. Nuraini, E. Soewong, and K. A. Sidarto, Mathematical Model of Dengue Disease Transmission with Severe DHF Compartment, Bulletin of the Malaysian Mathematical Sciences Society, Vol. 30(2), 143–157, 2007.
8. Agusto FB, Marcus N, Okosun KO, Application of optimal control to the epidemiology of malaria, Electronic Journal of Differential Equations, Vol. 81, 1–22, 2012.
9. M. Derouich, A. Boutayeb, Dengue fever: mathematical modelling and computer simulation, Applied Mathematics and Computation, Vol. 177(2), 528–544, 2006.
10. L. Esteva, C. Vargas, Analysis of dengue disease transmission model, Mathematical Biosciences, Vol. 150, 131–135, 1998.


11. S. Geethamalini and S. Balamuralitharan, Semianalytical solutions by homotopy analysis method for EIAV infection with stability analysis, Advances in Difference Equations, 2018:356, 2018.
12. G. Arul Joseph and S. Balamuralitharan, A Nonlinear Differential Equation Model of Asthma Effect of Environmental Pollution using LHAM, IOP Conf. Series: Journal of Physics: Conf. Series 1000 (2018) 012043. https://doi.org/10.1088/1742-6596/1000/1/012043
13. Cummings DAT, Lessler J, Infectious Disease Dynamics, in: Nelson KE, Masters Williams C (eds), Infectious Disease Epidemiology, Burlington, MA: Jones and Bartlett Learning, 830–831, 2013.

Vision-Based Robot for Boiler Tube Inspection

Md. Hazrat Ali, Shaheidula Batai, and Anuar Akynov

1 Introduction

Because they can take on hazardous work in dangerous, unreachable, and narrow environments, wall-climbing robots are very helpful in the petroleum industry and have played a crucial role in robotics. Many wall-climbing robots have been designed for maintenance or inspection [1]. Many types of pipes are used in the construction of essential lifelines in contemporary society. When a defect in a large pipe is caused by rust or natural calamity, it is difficult to locate; hence, scheduled inspection is essential. Moreover, high temperatures and large pipe sizes are key obstacles to human inspection. If a robot can climb up a pipe and inspect its outer surface, a fast and accurate inspection result can be obtained at low cost, making inspection a labor-saving process. In-line crack-detection robots have been a focus of research in recent years. Inspection mechanisms can be divided into two categories based on their operation and installation: out-pipe robots and in-pipe robots [2]. Out-pipe robots are more complicated to design and develop than in-pipe robots. An out-pipe robot may face various obstacles, whereas an in-pipe robot faces only fixed-type obstacles inside the pipe. The only requirement for an in-pipe robot is that, during the inspection period, the pipe must be empty of chemicals and liquids. An out-pipe robot, in contrast, needs to get over obstacles such as flanges and fixtures; even so, it has the advantage of easy installation. The key point in designing a wall-climbing robot is choosing an appropriate adhesion mechanism. Four types of adhesion mechanisms are used in designing tube inspection robots: the suction cup mechanism, the magnetic adhesion mechanism, adhesive materials,

Md. H. Ali () · S. Batai · A. Akynov Department of Mechanical Engineering, Nazarbayev University, Nur-Sultan, Kazakhstan e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_45



and grasping mechanisms. Each of them has advantages and disadvantages compared to the others. For instance, the suction cup can conduct a nondestructive inspection task, but it needs an energy source to provide the adhesion force during the task. The magnetic adhesion mechanism can supply a large amount of adhesion force without damaging the surface; nevertheless, this method applies only to ferromagnetic surfaces. Adhesive materials as an adhesion mechanism need surfaces that are as smooth as possible, since they can hardly work normally on rough surfaces. Grasping methods can be used on rather complicated wall surfaces, but they may harm the surface. Overall, the air suction adhesion method gives the robot more flexibility [3].

2 Past Work

Several research teams have designed climbing robots based on adhesion mechanisms [4, 5]. An inchworm-type inspection robot for the pressure tubes of a heavy-water reactor in a nuclear power plant was designed and developed by Lim et al. [5]; however, this robot was developed for narrow pipes and had a climbing speed of 1.5 mm/s. Suzuki et al. designed a magnetic-wheeled out-pipe robot capable of avoiding the flange on the pipe, although it failed to climb along curved pipes [6]. Li et al. presented the development of a robot that applies to plants in operation and can climb both flat and curved wall-surface pipes of various diameters using an independent differential-drive mechanism [7]. Wall-climbing robots need to operate on slopes, vertical walls, and horizontal ceilings; thus, stability during movement is critical and challenging in comparison with traditional mobile robots. A shift in the center of gravity may lead to adhesion failure and increased energy consumption. In comparison with other adhesion mechanisms [7], electrostatic adhesion has advantages such as strong adaptability, light weight, low power consumption, noiseless operation, reasonably simple structure [8], and no damage to the surface on which it climbs. Robots with the electrostatic adhesion mechanism designed by Chen et al. can be used on different types of wall surfaces and can implement various kinds of movement, including straight-line and turning movements [8]. Inspired by the climbing gaits of geckos, this wall-climbing robot has kinematics similar to a gecko's moving principle. Unlike the gecko, which sticks to the wall surface by Van der Waals forces, the robot adheres by electrostatic force with the help of an electrostatic adhesive footpad.
Some plant buildings have steel walls; in particular, chemical plants, oil tanks, and nuclear plants have steel walls. In these plants, regular inspection and maintenance are essential, and a robot that can move smoothly and steadily on the steel wall is a suitable option; for such cases, the magnetic adhesion mechanism is preferable. The magnetic adhesion mechanism is popular because of its stability as well as its flexibility of movement.


A robot in direct contact with the surface material through magnetic wheels was designed by Ishihara [9]. This robot has six wheels, including two driving wheels and four passive wheels, with suspension and caster mechanisms as well as magnetic tires. The robot can get over a gap of 10 mm, which still needs improvement. Boonyaprapasorn et al. developed a robot that carries an electromagnetic acoustic transducer probe and cameras to inspect the actual location [10]. Moreover, this robot has a mechanism that can switch it from vertical movement to horizontal movement without a steering process. Adhesion is provided by the magnetic wheels and magnetic bars located beneath the robot, and the robot moves up and down vertically on belts driven by the magnetic wheels. Pneumatic adhesion robots can climb a variety of materials, both ferromagnetic and non-ferromagnetic [11]. The primary pneumatic mechanisms for obtaining adhesion force are suction cups and negative pressure thrust. Suction cups are relatively popular due to their simple design and principle of operation. However, their main drawback is the continuous need to attach to and detach from the surface to gain further locomotion, which has an adverse effect and limits their moving speed. They also suffer from vacuum leakage over time, as well as sensitivity to cracks and abnormalities in the surface material, which lead to loss of adhesion force. A wall-climbing robot designed by Ge et al. is based on the air suction cup method. It mainly consists of a track belt, motor, passive suction cups, pulley, guide rail, and tail, as shown in Fig. 1 [12]. The passive suction cups are positioned at equal intervals on the exterior surface of the track belt.
Because the track belt is flexible, a guide rail is necessary to ensure that the suction cups stay in the desired positions; a specific guide rail is also needed to distribute the suction force of the cups evenly.

Fig. 1 Model of a wall-climbing robot equipped with the air suction method

It


is the most crucial component of a wall-climbing robot. Besides, the track belt must be driven by the motor through a pulley. A pump has to be installed on the passive air-suction robot to provide the constant energy supply needed to sustain attachment; moreover, pumps usually introduce a considerable payload problem for wall-climbing robots. A wall-climbing robot developed by Xu et al. presents a new adhesion mechanism based on biomimetic grasping claws for inspecting the wall surfaces of rough concrete buildings [13]. Two types of models for the interaction between the grip claws and surface micro-embossing are discussed.

3 Design and Development

The water wall in a boiler plant is made up of many tubes joined by rib panels [14]. Abrasion and corrosion may reduce the boiler tube thickness [15]. Low carbon content, overheating, creep, and hydrogen embrittlement are regarded as secondary, and at times primary, damage factors. It is broadly known that creep may appear in carbon steels at temperatures over 400–440 °C, which is the standard temperature range in water-wall boiler tubes. As a preventive measure against boiler tube failure, regular surface inspection is of utmost importance. A climbing robot was designed and developed to carry out this inspection task visually with the help of a camera; the robot conducts the inspection remotely via wireless communication from the ground. To conduct the operation efficiently, the robot is designed to climb up vertically to reach the target position. A toggle mechanism is designed to switch the robot from vertical to horizontal movement when it needs to change the direction of motion. Although the water wall tube is made of metal, we first chose the air suction mechanism as the adhesion method for the robot, as shown in Fig. 2 [4]. The robot uses the air suction force to stick to the wall surface. An Android app was developed to navigate the robot on the boiler tube.

Fig. 2 3D model of the pneumatic adhesive robot


As discussed earlier, the first robot, developed on the pneumatic adhesion principle, had some disadvantages. The second robot was developed on the magnetic adhesion principle and gives far better performance than the pneumatic adhesion robot. Since the weight of the robot is the crucial factor in moving vertically, plywood is used for the robot frame; consequently, the weight of the frame is small and can be neglected. The dimensions of the frame are 160 × 200 mm with a tolerance of 10 mm. A hole was drilled in the middle of the frame to install a magnet, and two DC motors are connected to the bottom of the robot. All other components, such as the controller, sensors, and batteries, are fixed on top of the frame. The robot is equipped with an HC-05 Bluetooth module for remote control of the prototype. This critical component can be connected in Master or Slave configuration: the Master configuration lets the robot initiate the connection, while the Slave configuration can only accept the initialization. The SoftwareSerial.h library is used on the Arduino platform to develop the communication. After activating Bluetooth, a smartphone can control the robot through the app. According to the manufacturer, the robot can be operated within a range of 30 m. Additionally, information about the surface conditions is obtained by an image-processing algorithm using a camera. The camera saves images in JPEG format, and defects can be determined from the recorded images. An Android app drives the robot, and it can easily be controlled from a remote distance.

4 Preliminary Inspection Results and Discussion

Figure 3 shows the developed prototype for inspecting long pipes and tubes. A camera is attached to the robot for detecting cracks on the external surface of pipes and tubes. This is the second version of the robot, which can climb the wall with a belt-type wheel driven by a pair of DC motors.

Fig. 3 The developed robot with a camera


Fig. 4 The robot is climbing the pipes and recording images

Fig. 5 Saved images by the magnetic adhesive robot

Figure 4 shows the robot climbing tubes with a diameter of approximately 500 mm. The robot is controlled by a remote-controller app developed on the Android platform, which navigates the robot in four directions: upward, downward, left, and right. The test pipe has a diameter of 500 mm and a height of about 10 m. From the figure, it can be seen that the robot can climb both vertical and inclined surfaces with enough adhesion force. Figure 5 shows the images taken by the camera, which records images of the pipes and tubes while climbing on them. Initially, the camera is positioned at a distance of 45 mm from the wall. As this position is very close, the recorded


images are not very clear, but the robot can still inspect the wall to find cracks on the outer surface. To inspect the inner surface of a pipe or tube, the robot would need to travel inside it, which is a very time-consuming and costly process because the plant must be stopped and restarted. Normally, outer-surface inspection gives an overall idea of the structural health of pipes and tubes. However, a crack may sometimes originate from the inside and be invisible from the outer surface; to solve this problem, a new method should be developed.

5 Conclusions

Based on the review of various types of adhesion and locomotion mechanisms, the boiler tube inspection robot has been developed successfully. The switching device enables the robot to change its motion from vertical to horizontal. With the help of the sensor, inspection of the pipe and tube wall is simpler and more appropriate than with conventional methods. The proposed robot can be used to detect and locate cracks on the outer surfaces of pipes and tubes in the petroleum industry. Other areas of application are structures that are vulnerable to failure, such as long bridges, big tunnels, tall buildings, and large vehicles such as ships and airplanes. Currently, the robot carries an inspection camera and is able to inspect wall surfaces with a range of surface profiles to determine creep, cracks, decay, and defects. The robot can be used in places that a human being is not capable of reaching or that pose a high risk to life, such as high-temperature boiler tubes. Thus, the robot increases the safety of the plants and reduces the risk to human life.

References

1. D. Ge, C. Ren, T. Matsuno, and S. Ma, “Guide rail design for a passive suction cup-based wall-climbing robot,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 5776–5781.
2. Y. Tamura, I. Kanai, K. Yamada, and H.O. Lim, “Development of pipe inspection robot using ring-type laser,” 2016 16th International Conference on Control, Automation and Systems (ICCAS), Gyeongju, 2016, pp. 211–214.
3. S. C. Han, J. An, and H. Moon, “A remotely controlled out-pipe climbing robot,” 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, 2013, p. 126.
4. Md. Hazrat Ali, et al., “Development of a Robot for Boiler Tube Inspection,” 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2018), Portugal.
5. J. Lim, H. Park, S. Moon and B. Kim, “Pneumatic robot based on inchworm motion for small diameter pipe inspection,” 2007 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, pp. 330–335, 2007.
6. Suzuki, M., Yukawa, T., Satoh, Y. and Okano, H., “Mechanisms of Autonomous Pipe-Surface Inspection Robot with Magnetic Elements,” Proc. of the 2006 IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, pp. 3286–3291, 2006.


7. Peilin Li, Sang-heon Lee, Hung-Yao Hsu, Review on fruit harvesting method for potential use of automatic fruit harvesting systems, Procedia Engineering, Vol. 23, pp. 351–366, 2011.
8. R. Chen, R. Liu, J. Chen, and J. Zhang, “A gecko-inspired wall-climbing robot based on electrostatic adhesion mechanism,” 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, 2013, pp. 396–401.
9. H. Ishihara, “Basic study on the wall-climbing robot with passive magnetic wheels,” 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, 2017, pp. 1964–1969.
10. A. Boonyaprapasorn, T. Maneewarn, and K. Thung-Od, “A prototype of inspection robot for water wall tubes in boiler,” Proceedings of the 2014 3rd International Conference on Applied Robotics for the Power Industry, Foz do Iguassu, 2014, pp. 1–6.
11. A. Brusell, G. Andrikopoulos, and G. Nikolakopoulos, “A survey on pneumatic wall-climbing robots for inspection,” 2016 24th Mediterranean Conference on Control and Automation (MED), Athens, 2016, pp. 220–225.
12. D. Ge, C. Ren, T. Matsuno, and S. Ma, “Guide rail design for a passive suction cup-based wall-climbing robot,” 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, 2016, pp. 5776–5781.
13. F. Y. Xu, X. S. Wang, G. P. Jiang and Q. Gao, “Initial design and analysis of a rough concrete building-climbing robot based on biomimetic grip claws,” 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, 2013, pp. 408–413.
14. X. Gao, D. Xu, Y. Wang, H. Pan, and W. Shen, “Multifunctional robot to maintain boiler water-cooling tubes,” Robotica, Vol. 27, pp. 941–948, 2009.
15. S. Park, H.D. Jeong, and Z.S. Lim, “Development of mobile robot systems for automatic diagnosis of boiler tubes in fossil power plants and large size pipelines,” in Proc. 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, EPFL, Lausanne, Switzerland, 2002, pp. 1880–1885.

Qualitative Study on Data Mining Algorithms for Classification of Mammogram Images

N. Arivazhagan and S. Govindarajan

1 Introduction

Breast cancer is a critical disease for women. Uncontrolled growth of breast tissue leads to breast cancer; the cancer is identified by the large number of cysts present in the breast tissue [1]. Malignant tumors that develop in the cells of the breast are termed breast cancer. Breast cancer deaths could be prevented if the disease were detected early. Mammograms are widely used as a screening tool to detect breast cancer in women with no symptoms. Mammography helps achieve a 30% reduction in breast cancer deaths [2, 3]. It is considered the most effective option because it is a low-cost, highly sensitive technique that detects even small lesions.

2 Method

The work adopted in this study encompasses the following steps: (1) digitization of mammogram images; (2) identification of the region of interest (ROI), i.e., recognizing suspicious areas; (3) extraction of features from the ROI, i.e., processing essential features of breast cancer; and (4) classification of the images, i.e., categorizing the tumor as benign or malignant with the use of data mining algorithms, as shown in Fig. 1.

N. Arivazhagan () School of Computing, SRM University, Chennai, Tamil Nadu, India S. Govindarajan School of Public Health, SRM University, Chennai, Tamil Nadu, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_46


Fig. 1 Algorithm for cancer detection

The Digital Database for Screening Mammography (DDSM) is utilized for this research. For every case, the cranio-caudal view of the left breast is considered. Figure 2 shows two cases: (1) samples of mammogram images without any trace of cancer and (2) samples of mammogram images with cancer.

2.1 Data Mining Algorithms Digitalized mammography involves a database of large magnitude. In this context, data mining, the means of extracting information from such a database, assumes greater importance [4–6]. Data mining algorithms are of great use in the medical field, as they aid automated mammogram classification through accurate prediction of cancer in the breast. This paper makes a qualitative study of the data mining algorithms used in classification performance analysis for breast cancer.


Fig. 2 Left portion of breast Cranial-Caudal view

2.2 Classification Algorithm Classification is the procedure of assigning a class label to new samples based on a set of labeled training samples. This paper qualitatively analyses K-Nearest Neighbor, the Naive Bayesian classifier, statistical Linear Discriminant Analysis, ensemble-based Boosting, the Support Vector Machine, and ensemble Bagging, all of which are popularly used in classifying a mammogram as cancerous or non-cancerous. The paper applies these supervised learning methods to check the efficiency of SVM with Bagging classification [7–11].

2.2.1 K-Nearest Neighbor (KNN)

The k-Nearest Neighbors algorithm is an efficient, classical data mining algorithm used for classification. The value of k specifies how many of the closest training samples are consulted when classifying a query: the input is the object to be classified, and its k nearest training samples act as the voting neighbors [12].
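As a concrete illustration (a from-scratch sketch, not the paper's implementation), k counts the neighbours consulted, and the class is decided by majority vote among them:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of (feature_vector, label) pairs; k is the
    number of neighbours consulted, not the number of classes."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "feature vectors" (e.g. two texture statistics per ROI).
train = [((1.0, 1.0), "benign"), ((1.2, 0.9), "benign"),
         ((4.0, 4.2), "malignant"), ((4.1, 3.9), "malignant")]
print(knn_predict(train, (1.1, 1.0)))  # nearest neighbours are benign
```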

2.2.2 The Naive Bayesian (NB)

The Naive Bayesian classifier is among the most popular and widely used classification algorithms. It implements classification easily and efficiently on diversified sets of data. The algorithm uses Bayes' theorem as its basic working principle, combining probabilistic values, and it provides excellent performance when the features are conditionally independent given the class. The Naive


Bayes classifier works well when the features are not dependent among themselves, and it can estimate its parameters even from a small dataset.
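The working principle can be shown with a worked Bayes-rule calculation; the prior and likelihood numbers below are invented purely for illustration:

```python
# Hypothetical prior probabilities for the two classes.
p_mal = 0.3
p_ben = 0.7

# Likelihoods P(feature | class) for two binary features, assumed
# conditionally independent given the class, as Naive Bayes requires.
p_f1_given = {"mal": 0.8, "ben": 0.2}  # e.g. "mass present"
p_f2_given = {"mal": 0.6, "ben": 0.3}  # e.g. "high density"

# Unnormalised posterior: prior times the product of the likelihoods.
score_mal = p_mal * p_f1_given["mal"] * p_f2_given["mal"]
score_ben = p_ben * p_f1_given["ben"] * p_f2_given["ben"]

# Normalise so the two posteriors sum to 1.
total = score_mal + score_ben
print("P(malignant | f1, f2) =", round(score_mal / total, 3))
```

Both features observed together push the posterior for "malignant" well above its 0.3 prior, which is exactly the multiplication-of-independent-evidence behaviour described above.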

2.2.3 Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA) basically performs dimensionality reduction: a large data or feature set is reduced to a small one to reduce the computational complexity. It is one of the most widely used preprocessing techniques in pattern matching applications.
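A minimal sketch of LDA as a dimensionality reducer, assuming scikit-learn is available; the built-in iris data merely stands in for a mammogram feature set:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)      # 150 samples, 4 features
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)    # project 4 features down to 2
print(X.shape, "->", X_reduced.shape)
```

Unlike unsupervised reducers such as PCA, LDA uses the class labels, choosing the projection that best separates the classes.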

2.2.4 Boosting

The tuning of machine learning algorithms for better performance is efficiently done with the help of the AdaBoost algorithm. It is an iterative algorithm in which performance improves gradually, step by step, in every iteration. This improvement is achieved by selecting the sample weights optimally, in such a way that by the end of each iteration there is some improvement in the performance of the learner. The algorithm combines a number of different classifiers to achieve the target performance.
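A hedged sketch with scikit-learn's AdaBoostClassifier (whose default weak learner is a depth-1 decision stump); the library's built-in breast-cancer dataset stands in for the DDSM features used in the paper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each boosting round re-weights the training samples so that the next
# weak learner focuses on the examples the previous rounds misclassified.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print("test accuracy:", round(acc, 3))
```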

2.2.5 Support Vector Machine (SVM)

SVM is a supervised machine learning algorithm used for classification. An N-dimensional hyperplane is constructed to optimally divide the data set into two categories; constructing this hyperplane to optimally separate the clusters of vectors is the primary objective of SVM. Independent variables of one category fall on one side of the plane and those of the other category on the other side. The vectors that lie nearest to the hyperplane are the support vectors, and the SVM model identifies the hyperplane that maximizes the margin between them. A support vector machine classifies data that are linearly separable [9]; if the data are not linearly separable, the kernel trick can be used, although a linear kernel is usually the better option for text classification. In comparison with newer algorithms such as neural networks, SVM has two main advantages: higher speed and better performance with a limited number of samples (in the thousands). This makes the algorithm suitable for text classification problems, where the dataset quite commonly has only a couple of thousand tagged samples.
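A minimal scikit-learn sketch comparing a linear and an RBF kernel, again with the built-in breast-cancer data as a stand-in for the paper's features:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The linear kernel searches for a separating hyperplane directly;
# the RBF kernel is the usual "kernel trick" for non-separable data.
scores = {}
for kernel in ("linear", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(X_tr, y_tr)
    scores[kernel] = clf.score(X_te, y_te)
    print(kernel, "accuracy:", round(scores[kernel], 3))
```

Feature scaling matters here: SVM margins are distance-based, so unscaled features with large ranges would dominate the hyperplane.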


2.2.6 Bagging

Bootstrap aggregating, also called Bagging, is a widely known and popularly used classification technique. It makes repeated samplings (with replacement) from a data set in accordance with a uniform probability distribution; each bootstrap sample has the same size as the original data set. Bagging creates k bootstrap samples and trains one classifier on each. A new instance is then classified by taking a weighted majority vote of the k learned classifiers.
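The sampling and voting steps can be illustrated in a few lines of plain Python (the vote list at the end is hypothetical):

```python
import random
from collections import Counter

random.seed(0)
data = list(range(10))  # stand-in for a 10-row training set

# Each bootstrap sample is drawn WITH replacement and has the same size
# as the original data, so some rows repeat and some are left out.
samples = [random.choices(data, k=len(data)) for _ in range(5)]
for s in samples:
    print(sorted(s))

# One classifier is trained per sample; the final prediction is a
# majority vote over their outputs (hypothetical votes shown here).
votes = ["cancer", "normal", "cancer", "cancer", "normal"]
print(Counter(votes).most_common(1)[0][0])  # -> cancer
```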

3 Efficiency of SVM with Bagging The DDSM (Digital Database for Screening Mammography) dataset is used in this research. The research community considers DDSM a standardised open source for analysis and research, and it is one of the datasets most widely used for testing the performance of pattern recognition algorithms for breast cancer detection. The database is a collection of 2500 standard cases. Each case carries the information required for classification: the age of the person, the breast density rating of the sample image, the subtlety rating of the abnormalities, and abnormality information for two images of each breast. The classification of the mammogram images is evaluated by performance metrics: accuracy, which accounts for the proportion of correct classifications; recall, which accounts for the matching capability of the algorithm; precision; specificity; and root mean square error. These values are derived from the confusion matrix, which is also used to evaluate the overall (average) accuracy of the classifier and serves as the foundation of measurement throughout the research. The comparison in Table 1 shows the results by which the best classification algorithm is identified.
Fig. 3 Original data
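The metrics named above all derive from the 2x2 confusion matrix; a sketch with hypothetical counts (RMSE also needs the raw predictions, so it is omitted here):

```python
# Hypothetical confusion matrix: rows = actual class, cols = predicted.
tp, fn = 48, 2   # actual cancer:  48 caught, 2 missed
fp, tn = 5, 45   # actual normal:  5 false alarms, 45 correctly cleared

accuracy    = (tp + tn) / (tp + tn + fp + fn)
recall      = tp / (tp + fn)   # sensitivity: fraction of cancers found
precision   = tp / (tp + fp)   # fraction of alarms that were real
specificity = tn / (tn + fp)   # fraction of normals correctly cleared

print("accuracy:", accuracy, "recall:", recall,
      "precision:", round(precision, 4), "specificity:", specificity)
```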


Fig. 4 SVM classification: the separating hyperplane and its margin, with support vectors on the margin boundaries between the Target = Yes and Target = No classes

4 Findings The paper evaluated the Digital Database for Screening Mammography (DDSM) by extracting GLCM features and physical features from the ROI, which were then taken as input for the classification algorithms. To compare the classification results, we used six classification algorithms. When analyzed against the other classification techniques, the detection accuracy of the SVM-with-Bagging technique improved to 98.667% from 83.33%, and thus we conclude that SVM with Bagging provides the better performance.

5 Conclusion The Bagging results show a great extent of improvement in the SVM classifier for the given cancer classification application. The results also show that the performance of an algorithm depends strongly on the dataset. The analysis of all the algorithms on the DDSM dataset shows that the SVM algorithm outperforms the others, and the results support our assumption that a hybrid algorithm will outperform a single classification algorithm, even though it is somewhat more computationally complex. The efficiency of the algorithm

Table 1 Performance metrics of the classification algorithms

Classifiers  Accuracy (%)  Recall  Precision  Specificity  RMSE
SVM          83.45         0.858   0.805      0.49         0.1667
KNN          77.471        0.841   0.75       0.4536       0.2143
LDA          72.923        0.652   0.6442     0.61         0.3521
NB           63.989        0.98    0.5747     0.2311       0.2571
Bagging      99.477        0.976   0.9789     0.4664       0.0134
Boosting     93.667        0.828   0.9286     0.54         0.0714


is further improved with the help of the bagging technique. The reported performance is, of course, valid only under the assumptions we made and for the given dataset.

References
1. Jehlol, H. B., Abdalrdha, Z. K., & Oleiwi, A. S. Classification of mammography images using machine learning classifiers and texture features. International Journal of Innovative Research in Advanced Engineering. 2015; 8.
2. Arpana, M. A., & Kiran, P. Feature extraction values for digital mammograms. International Journal of Soft Computing and Engineering. 2014; 4(2), 183–187.
3. Ganesan, K., Acharya, U. R., Chua, C. K., Min, L. C., Abraham, K. T., & Ng, K.-H. Computer-aided breast cancer detection using mammograms: A review. IEEE Reviews in Biomedical Engineering. 2013; 6.
4. Mohanaiah, P., Sathyanarayana, P., & GuruKumar, L. Image texture feature extraction. International Journal of Scientific and Research Publications. 2013; 3(5).
5. Adegoke, B., Ola, B., & Omotayo, M. Review of feature selection methods in medical image processing. IOSR Journal of Engineering. 2014; 4(1), 01–05.
6. Chadha, A., Mallik, S., & Johar, R. Comparative study and optimization of feature-extraction techniques for content based image retrieval. International Journal of Computer Applications. 2012; 52(20), 0975–8887.
7. Gebejes, A., & Huertas, R. Texture characterization based on gray level co-occurrence matrix. Conference of Informatics and Management Sciences. 2013; 25–29.
8. Jog, N. V., & Mahadik, S. R. Implementation of segmentation and classification techniques for mammogram images. 2015; 4(2).
9. Pushpalata, P., & Jyoti, B. G. Improving classification accuracy by using feature selection and ensemble model. International Journal of Soft Computing and Engineering. 2012; 2(2), 380–386.
10. Tang, J., Rangayyan, R. M., Xu, J., Naqa, I. E., & Yang, Y. Computer-aided detection and diagnosis of breast cancer with mammography: Recent advances. IEEE Transactions on Information Technology in Biomedicine. 2009; 13(2), 236–251.
11. Ganesan, K., Acharya, U., Chua, C. K., Min, L. C., Abraham, K., & Ng, K. Computer-aided breast cancer detection using mammograms: A review. IEEE Reviews in Biomedical Engineering. 2013; 6, 77–98.
12. Rajagopal, R. D., Murugan, S., Kottursamy, K., & Raju, V. Cluster based effective prediction approach for improving the curable rate of lymphatic filariasis affected patients. Springer Cluster Computing. 2018; 1–9.

Designing a Framework for Data Migration of Odoo ERP PostgreSQL Database into NoSQL Database Krina Shah and Hetal Bhavsar

1 Introduction Nowadays the size of data is increasing because of advances in technologies such as web applications, data collection tools, audio, video, chat rooms, bodies of email, etc. It is difficult for a traditional relational database to store and manage the huge amount and wide variety (structured, semi-structured and unstructured) of data an application produces. This creates a need for huge storage capacity with high processing capability. Big data technologies provide that storage capacity through the NoSQL data model and, at the same time, fast processing of data using distributed databases [1]. Odoo is Enterprise Resource Planning (ERP) software. It manages all kinds of business applications, which makes it a complete package for enterprise management; it includes modules for billing, accounting, customer relationship management, e-commerce websites, manufacturing, warehousing, project management, and inventory management. Odoo stores its data in a PostgreSQL relational database. In Odoo, an audit log entry is generated for every single change, whether a change in input from the client or data updated by the client. These entries create a very large database for an Odoo application and cause a data-handling problem. Also, if a problem occurs in the system, identifying it requires reading all the log entries; this consumes a lot of processing, and the system may time out or be unable to handle and process the larger data. To solve this problem with Odoo's relational database, the data needs to be migrated into a NoSQL database so it can be processed efficiently and archived for later use. To organize

K. Shah · H. Bhavsar () Department of Computer Science and Engineering, The Maharaja Sayajirao University of Baroda, Vadodara, Gujarat, India e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_47


and manage all these large amounts of data, there is a need to identify a set of guidelines that help a database specialist perform a first migration from an RDBMS to a NoSQL database. There are four types of NoSQL database: key-value stores, document-oriented databases, column-oriented databases and graph-oriented databases. Cassandra, a column-oriented database, is being rapidly adopted by a large number of companies to implement their big databases [2]. This research uses the Cassandra database as the target for migrating Odoo's PostgreSQL database to NoSQL. The rest of the paper is organized as follows. Section 2 briefly reviews related work in the same direction. Section 3 describes NoSQL databases. The proposed framework is described in Sect. 4, and the subsequent Sect. 5 presents the results on various Odoo databases. Finally, Sect. 6 provides the conclusion of the paper.

2 Related Work NoSQL databases are used when an RDBMS reaches its bottleneck. Several reasons to use NoSQL databases are listed below:
1. A relational database stores data in the relational model, whereas NoSQL takes a model-less approach: a relational database has a pre-defined schema, while a NoSQL database has a dynamic schema for unstructured data.
2. A relational database cannot handle huge amounts of data, as it requires more memory and CPU usage. NoSQL surpasses the storage of a relational database by providing distributed databases to handle structured and unstructured data [3].
3. A relational database has the limitation of cache-dependent read and write operations, whereas a NoSQL database can perform real-time read operations that do not conflict with write operations.
4. Relational databases support the ACID properties, whereas NoSQL databases follow Brewer's CAP theorem [4].
5. Relational databases scale vertically, by increasing the resources of the system, while NoSQL databases scale horizontally, by creating clusters of commodity machines, which is one of the best reasons to use a NoSQL database [4].
A lot of work has been done on data migration from SQL to NoSQL databases. There are several motivations for schema migration tools that automate SQL-to-NoSQL schema denormalization and migration; the mid-model design method is used to integrate the data in an identical way for any NoSQL database [1]. The hybrid architecture proposed by Ying-Ti Liao et al. in [5] expands the existing system service to support increasing data size by migrating it to a NoSQL database while still handling the relational database. A comparison of NoSQL and relational databases such as MySQL, Cassandra and HBase under heavy write operations showed that Cassandra scaled up the


most among the three databases and also increased system performance [2]. MySQL data has been migrated into MongoDB by designing a model using the two-model approach [6]. SQL databases are limited in horizontal scalability; for flexibility and viability of parallel processing, [7] used a NoSQL database, which increased access performance by 45% after schema migration of CMS software. Analysis has been done on the practicality of using NoSQL databases for database federation, and a federated system has been achieved among MongoDB, CouchDB and Cassandra [8]. Due to rapid data growth, relational databases need to be translated into NoSQL column-family data stores for better scalability. In [9], the authors solve the problem of partitioning the non-normalized columns of a vast data table into column families for better query handling by proposing a heuristic algorithm, GPA (Graph-based Partitioning Algorithm). For better effectiveness of distributed and parallel computing, a correlation-aware technique on top of Sqoop for data migration from a relational database into NoSQL is proposed in [10]; it doubles query proficiency compared with Apache Sqoop. These studies show that there are several techniques to migrate data from relational to NoSQL databases, but there is no migration technique for the Odoo software application. This work focuses on the migration of the Odoo PostgreSQL database into a NoSQL database and designs a framework for the same.

3 NoSQL Databases NoSQL stands for Not Only SQL. NoSQL databases are non-relational, open source, distributed databases. They are widely used for huge amounts of data storage and for real-time web applications, such as analyzing logs, social networking feeds and time-based data, that are not easily handled in an RDBMS, and they can deal with a rich variety of data, i.e., structured, semi-structured and unstructured [4]. NoSQL databases are classified into four main domains: (1) key-value data stores, (2) document-oriented databases, (3) column-oriented databases and (4) graph databases [3, 11].
1. Key-value data stores: the simplest and most efficient databases, which store data in key and value format. Keys are unique and are required for addressing the data, which are uninterpreted arrays of bytes not visible to the system; the keys are the only means of restoring or fetching the stored data. This data store favors higher scalability over consistency [11].
2. Document-oriented databases: they store data as documents. A document consists of key-value pairs, and keys are mandated to be unique. In this database, tables are represented as collections, every document has a unique id, and data are stored in JSON or XML format [3].
3. Column-oriented databases: data are stored in column families, with a row key and the columns making up a row. The structure is similar to SQL at the physical level. The


data are stored in distributed fashion, column by column, which gives high scalability for loading and retrieving data in huge databases [3].
4. Graph databases: they represent entities and the associations between them. Entities are represented as nodes and associations as directed edges. Graph databases are gaining importance as they are now deployed in organizations for managing data within applications like social networking [11].
Examples of the above-mentioned databases are shown in Table 1. Cassandra and HBase are the two most famous column-oriented databases; a comparison between them on different parameters is shown in Table 2. Cassandra is a free and open-source NoSQL database management system. It is designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure [12]. The Apache Cassandra database is the right choice when you require scalability and high availability without compromising performance. Compared with HBase and MySQL, Cassandra scaled up for heavy write operations and takes less time for query execution [2]. The Cassandra database is used by several companies with large data sets, such as Facebook, Netflix, CERN, Comcast, GitHub and Instagram. So, for migrating the data of the Odoo ERP software into a NoSQL database, Cassandra is the natural choice.

Table 1 Examples of the above-mentioned databases

Type of database            Example
Key-value store             Amazon S3, Redis, BerkeleyDB, Voldemort, etc.
Column-family database      C-Store, Hypertable, Cassandra, HBase, Kudu, etc.
Document-oriented database  OrientDB, MongoDB, CouchDB, RavenDB, SisoDB, etc.
Graph database              Neo4J, AllegroGraph, InfiniteGraph, etc.
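The four models can be contrasted by expressing one hypothetical Odoo audit-log record in each, using plain Python structures as stand-ins for the actual stores:

```python
import json

# One hypothetical ERP audit-log record.
record = {"log_id": 42, "user": "alice", "action": "update", "table": "invoice"}

# 1. Key-value: an opaque byte value addressed only by its key.
kv_store = {"log:42": json.dumps(record).encode()}

# 2. Document: the same record as a self-describing JSON document.
document = json.dumps(record)

# 3. Column family: row key -> {column name: value}, stored column-wise.
column_family = {42: {"user": "alice", "action": "update", "table": "invoice"}}

# 4. Graph: entities as nodes, associations as directed labelled edges.
nodes = ["user:alice", "invoice"]
edges = [("user:alice", "updated", "invoice")]

print(json.loads(kv_store["log:42"]) == record)  # record round-trips intact
```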

Table 2 Cassandra vs. HBase

Parameter             Cassandra                                   HBase
NoSQL classification  Column-family database                      Column-family database on the Hadoop Distributed File System
Architecture          Peer-to-peer architecture model             Master-slave architecture model
Data model            Keyspace / column family                    Regions / column family
Writes performance    Write operation is very fast on account of  Writes slower than Cassandra when pipelined
                      the peer-to-peer architecture and the       (synchronous) writes are used; asynchronous
                      Cassandra data model                        writes are configurable
Usage                 Apache Cassandra is mostly used for online  HBase is intended to support data warehouse
                      web and mobile applications                 and lake-styled use cases


4 Proposed Framework The framework proposed to migrate the Odoo PostgreSQL relational database into the NoSQL Cassandra database is shown in Fig. 1. The steps of the proposed framework are described below:
Step 1: Import the libraries required for establishing a connection between the PostgreSQL and Cassandra databases.
Step 2: Establish a connection between the PostgreSQL and Cassandra databases.
Step 3: If the connection is successful
          Check whether the relational tables of the PostgreSQL database already exist in Cassandra.
          If not
            Run the script to create the tables in the Cassandra database.
            Map the data types of the relational database onto Cassandra data types.
          End If
        End If
Step 4: For each table in the relational database
          For each field of the table
            If the field name matches a keyword of the Cassandra database
              Change the field name to table name.field name in Cassandra.
            End If
          End loop

Fig. 1 Proposed framework


          Add an id column as the primary key for tables that do not have any unique id.
        End loop
Step 5: For each table of the relational database
          For each row
            Fetch the record and insert it into the Cassandra database.
          End loop
        End loop
Step 6: Exit
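Steps 3 and 4 can be sketched as pure helper functions. The type table and reserved-word list below are illustrative, not Cassandra's full set; the paper renames clashing fields as "table name.field name", but a dot is not legal in a CQL identifier, so this sketch uses an underscore instead. The actual psycopg2/cassandra-driver connection code of Steps 1, 2 and 5 is omitted:

```python
# Illustrative PostgreSQL-to-CQL type map (not exhaustive).
PG_TO_CQL = {
    "integer": "int", "bigint": "bigint", "numeric": "decimal",
    "character varying": "text", "text": "text",
    "boolean": "boolean", "timestamp without time zone": "timestamp",
}
# A small sample of CQL reserved words (illustrative only).
CQL_RESERVED = {"order", "select", "table", "set", "token"}

def map_type(pg_type):
    """Step 3: map a PostgreSQL column type to a CQL type (default text)."""
    return PG_TO_CQL.get(pg_type, "text")

def safe_name(table, column):
    """Step 4: rename a column that clashes with a CQL keyword."""
    return f"{table}_{column}" if column in CQL_RESERVED else column

# Hypothetical column metadata: (table, column, pg_type).
columns = [("sale_order", "order", "integer"),
           ("sale_order", "amount", "numeric")]
cql = ", ".join(f"{safe_name(t, c)} {map_type(ty)}" for t, c, ty in columns)
print(f"CREATE TABLE sale_order (id uuid PRIMARY KEY, {cql});")
```

The surrogate `id uuid PRIMARY KEY` mirrors the framework's rule of adding an id to tables that lack a unique key.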

5 Results 5.1 Experimental Setup The experiments have been performed on data from the Odoo software, which stores its data in a PostgreSQL database. The proposed framework comprises two parts: (1) table migration and (2) data migration. For data migration, the NoSQL database Cassandra is employed. The script was developed in Eclipse Mars with the PyDev plugin. The results were taken on an Intel Core i5-3337U CPU with 6 GB RAM, running the 64-bit Linux Ubuntu operating system.

5.2 Result and Description To show the successful implementation of the proposed framework, this section describes the results on two different Odoo databases, namely testdb@ and sample_db, of about 35 MB and 673 MB respectively. Table 3 shows the table creation and data migration times for these databases; this can be visualized from Figs. 2, 3 and 4. After successful data migration into the NoSQL database, companies can use these data for:
• archiving for later use,
• decision making,
• data analysis by applying data mining algorithms, and
• generating reports.

Table 3 Time analysis for different Odoo databases

Name of database     Testdb@     Sample_db
Size                 35 MB       673 MB
Table creation time  509.860 s   800.181 s
Data migration time  19.517 s    154.385 s


Fig. 2 PostgreSQL database having Size 35 MB

Fig. 3 Time for table creation of 35 MB data size

The results have been tested by generating a report from both the relational and the NoSQL database, and they demonstrate that Cassandra requires considerably less time than the relational database. Producing a report from the Odoo software took


Fig. 4 Time for data migration of 35 MB size

Fig. 5 The report generation time for Cassandra database

2.75 s to print, while creating the same report from Cassandra, using the Jasper Reports tool, took 0.476 s. This can be visualized from Figs. 5 and 6.


Fig. 6 The report generation time for relational database

6 Conclusion This paper proposed and implemented a framework for data migration from the Odoo PostgreSQL relational database to the column-oriented Cassandra database, and the migration was carried out successfully. Cassandra scaled up for heavy write operations and takes less time for query execution, and the results proved that Cassandra generates reports more efficiently than the Odoo PostgreSQL relational database.

References
1. Dongzhao Liang, Yunzhen Lin, Guiguang Ding: Mid-model design used in model transition and data migration between relational databases and NoSQL databases (2016)
2. Vishal Dilipbhai Jogi, Ashay Sinha: Performance evaluation of MySQL, Cassandra and HBase for heavy write operation, in: 3rd Int'l Conf. on Recent Advances in Information Technology (2016)
3. Jagdev Bhogal, Imran Choksi: Handling big data using NoSQL, in: 29th International Conference on Advanced Information Networking and Applications Workshops (2015)
4. Rashid Zafar, Eiad Yafi, Megat F. Zuhairi, Hassan Dao: Big data: the NoSQL and RDBMS review, in: International Conference on Information and Communication Technology (ICICTM) (2016)
5. Ying-Ti Liao, Jiazheng Zhou, Chia-Hung Lu, Shih-Chang Chen, Ching-Hsien Hsu, Wenguang Chen, Mon-Fong Jiang, Yeh-Ching Chung: Data adapter for querying and transformation between SQL and NoSQL database, in press (2016)
6. Leonardo Rocha, Fernando Vale, Elder Cirilo, Dárlinton Barbosa, Fernando Mourão: A framework for migrating relational datasets to NoSQL, in: IEEE International Conference on Systems, Man, and Cybernetics, Volume 51, pages 2593–2602 (2015)


7. Chao-Hsien Lee, Yu-Lin Zheng: SQL-to-NoSQL schema denormalization and migration: a study on content management (2016)
8. Dharmasiri, H. M. L., Goonetillake, M. D. J. S.: A federated approach on heterogeneous NoSQL data stores, in: International Conference on Advances in ICT for Emerging Regions (2013)
9. Li-Yung Ho, Meng-Ju Hsieh, Jan-Jan Wu, Pangfeng Liu: Data partition optimization for column-family NoSQL databases (2014)
10. Jen-Chun Hsu, Ching-Hsien Hsu, Shih-Chang Chen, Yeh-Ching Chung: Correlation aware technique for SQL to NoSQL transformation, in: 7th International Conference on Ubi-Media Computing and Workshops (2014)
11. Koshy George, Tessy Mathew: Big database stores: a review on various big data datastores, in: International Conference on Green Computing and Internet of Things (ICGCIoT) (2016)
12. Avinash Lakshman, Prashant Malik: Cassandra: a decentralized structured storage system.

Juxtaposition on Classifiers in Modeling Hepatitis Diagnosis Data Preetham Ganesh, Harsha Vardhini Vasu, Keerthanna Govindarajan Santhakumar, Raakheshsubhash Arumuga Rajan, and K. R. Bindu

1 Introduction Machine learning, an advancing field of computer science, plays a crucial role in predicting unforeseeable parameters in domains such as medical diagnosis, weather forecasting and sports, a task that has always been very complicated for humans. A machine learning model is trained by letting the algorithm inspect data and develop mathematical equations that support better future decisions based on the observed trends; the preliminary objective is to make computers learn and decide without human intervention. A recent trend in the medical field is the application of machine learning techniques to diagnose the presence of an infection or disease. Since medical datasets carry loads of information, data mining also has a significant role in extracting the features necessary for prediction, so it is fundamental to use both machine learning and data mining techniques to model and predict from hepatitis data. Hepatitis C damages the liver by causing inflammation and infection in it; the condition aggravates after infection with the Hepatitis C Virus (HCV). Identifying the presence of hepatitis is one of the significant challenges faced by health organisations [1]. Worldwide, around 130–170 million people have been infected by HCV [2]; approximately 71 million of them have chronic hepatitis C, and 399,000 people die each year of hepatitis C [3]. Accurate diagnosis and precise prediction at an early stage can help save the patient's life with minimum damage to the patient's health. This study intends to analyse hepatitis data, classify it based on the observed patterns using different classifiers, and identify the best classifier based on the performance measures.

P. Ganesh · H. V. Vasu · K. G. Santhakumar · R. A. Rajan · K. R. Bindu () Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_48


This study is organised as follows: Section 2 explores the literature in the areas related to data mining and machine learning. Section 3 discusses the dataset used, the machine learning models used for classification, and the performance measures used for evaluation. Section 4 presents the results obtained by the conducted study, and Sect. 5 concludes the paper based on those results.

2 Related Works The authors in [1] tested different decision tree algorithms on the hepatitis dataset from the UCI repository and evaluated the classification models using measures such as accuracy, precision, recall, and F1-measure; based on the results, it was concluded that the random forest classifier performed best, with an accuracy of 87.5%. Rosalina et al. [4] performed feature selection using the wrapper method on the same dataset. The authors used Support Vector Machines (SVM) on both the feature-selected data and the original data to compare performance, with accuracy as the measure, and concluded that SVM produced better results on the feature-selected data than on the original data. Ekiz et al. [5] used the heart diagnosis dataset from the UCI repository for analysis, applying decision tree, SVM and ensemble subspace classifiers on MATLAB and WEKA; based on the accuracy values, it was concluded that subspace discriminant performs better than the others, and that among the SVMs, SVM with a linear kernel surpasses the rest. This paper primarily anchors on finding the best classification model for the chosen dataset. The study applies five classification algorithms, namely the Random Forest Classifier, SVM, Logistic Regression, the Naive Bayes Classifier and the Decision Tree Classifier, to the hepatitis dataset and selects the best by comparing performance metrics such as accuracy, recall, specificity, precision, F1-measure, the Matthews Correlation Coefficient and many more.

3 Methodology 3.1 Dataset Description The dataset, collected from the UCI Repository [6], has 155 tuples, 19 independent attributes, and a label named 'Class' for prediction. The column-wise details of the dataset are given in Table 1.

Table 1 Dataset description

S. No  Attribute        Type         Values
1      Age              Numerical    31, 34, 39, 32
2      Bilirubin        Numerical    0.7, 0.9, 1, 1.3
3      Alk. phosphate   Numerical    46, 95, 78, 59
4      SGOT             Numerical    52, 28, 30, 249
5      Albumin          Numerical    4, 4, 4.4, 3.7
6      Protime          Numerical    80, 75, 85, 54
7      Sex              Categorical  Male/female
8      Steroid          Categorical  Yes/no
9      Antivirals       Categorical  Yes/no
10     Fatigue          Categorical  Yes/no
11     Malaise          Categorical  Yes/no
12     Anorexia         Categorical  Yes/no
13     Liver Big        Categorical  Yes/no
14     Liver Firm       Categorical  Yes/no
15     Spleen Palpable  Categorical  Yes/no
16     Spiders          Categorical  Yes/no
17     Ascites          Categorical  Yes/no
18     Varices          Categorical  Yes/no
19     Histology        Categorical  Yes/no
20     Class            Categorical  Live/die

3.2 Process Flow Figure 1 shows the process flow used in this study.

3.3 Classification Algorithms

Logistic Regression (LR) A logistic function is used to model the binary class variable, which must be coded numerically as 0 or 1. The predictors can be any combination of binary and continuous variables. The estimated probability for the label '1' ranges from 0.5 to 1, and for '0' from 0 to 0.5 [7].

Naive Bayes Classifier (NBC) These are a family of simple probabilistic classifiers based on Bayes' theorem, working under the assumption that the attributes are independent of each other [8]. Of the six types of Naive Bayes classifiers, three are used in this study: Gaussian, Multinomial, and Bernoulli.

Support Vector Machine (SVM) SVM finds the optimal separating hyperplane between the classes using statistical learning theory [9]. Overfitting

P. Ganesh et al.

Fig. 1 Process flow

can be avoided by choosing an appropriate size of the margin separating the hyperplane from the positively and negatively classified instances [4].

Decision Tree Classifier (DTC) A decision tree resembles a flowchart: each interior node represents a test on a feature, each branch represents the outcome of that test, and each leaf node represents one of the class labels [10].

Random Forest Classifier (RFC) Random forest is an ensemble learning method that builds a collection of decision trees during the training stage and outputs the mode of their predicted classes during testing [11]. A single decision tree is usually overfitted to the training data; the ensemble reduces this effect.
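The study itself was carried out in R, but the five-model comparison can be sketched in Python with scikit-learn. The synthetic stand-in dataset and all parameter choices below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of the five-classifier comparison; the paper used R, and
# the dataset here is a synthetic stand-in for the 155-sample hepatitis data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced binary data, roughly mirroring the paper's 155 tuples x 19 attributes.
X, y = make_classification(n_samples=155, n_features=19, weights=[0.2, 0.8],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NBC": GaussianNB(),
    "SVM": SVC(),
    "DTC": DecisionTreeClassifier(random_state=0),
    "RFC": RandomForestClassifier(random_state=0),
}
# Fit each model and record its test-set accuracy.
scores = {name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
          for name, m in models.items()}
```

On the real data, each model would be fit the same way after the preprocessing described in Sect. 4.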

3.4 Performance Measures The performance of a classifier is judged by the instances it classifies correctly in the test set after being trained on the training set. The confusion matrix plays a vital role in computing the performance of a classifier [12]; its representation is given in Table 2. The performance measures used in this study are listed in Table 3, along with their definitions and formulae [13].

MCC = (TP × TN − FP × FN) / √((TP + FP) × (FN + TP) × (TN + FP) × (TN + FN))   (1)

Table 2 Sample representation of confusion matrix

               | Predicted Class A                                        | Predicted Class B
Actual Class A | True Positive (TP): correctly classified as positive     | False Negative (FN): incorrectly classified as negative
Actual Class B | False Positive (FP): incorrectly classified as positive  | True Negative (TN): correctly classified as negative
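As a minimal illustration of Table 2, the four cells can be tallied directly from paired actual/predicted labels. Treating 'live' as the positive class is an assumption made here for the example, and the label lists are invented.

```python
# Tally TP, FP, TN, FN from actual vs. predicted labels (Table 2's four cells).
# Treating "live" as the positive class is an illustrative assumption.
def confusion_counts(actual, predicted, positive="live"):
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    return tp, fp, tn, fn

counts = confusion_counts(["live", "live", "die", "die"],
                          ["live", "die", "die", "live"])
# counts == (1, 1, 1, 1): one example in each cell
```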

Table 3 Performance measures, descriptions, and formulae

S. No | Performance measure                        | Definition                                                       | Formula
1     | Accuracy                                   | Fraction of tuples the model classified correctly                | (TP + TN) / (TP + FP + TN + FN)
2     | Balanced Accuracy                          | Average of the correctly classified fraction for each class      | (TP/P + TN/N) / 2
3     | Recall (R) / Sensitivity (SN)              | Fraction of actual positives correctly classified as positive    | TP / (TP + FN)
4     | Specificity (SP)                           | Fraction of actual negatives correctly classified as negative    | TN / (FP + TN)
5     | Precision (Pr)                             | Fraction of predicted positives that are correct                 | TP / (TP + FP)
6     | Negative Predictive Value (NPV)            | Fraction of predicted negatives that are correct                 | TN / (TN + FN)
7     | Fall-out                                   | Fraction of actual negatives incorrectly classified as positive  | FP / (FP + TN)
8     | False Discovery Rate                       | Fraction of predicted positives that are incorrect               | FP / (TP + FP)
9     | False Negative Rate                        | Fraction of actual positives incorrectly classified as negative  | FN / (TP + FN)
10    | F1-Measure                                 | Harmonic mean of precision and recall                            | 2 × Pr × R / (Pr + R)
11    | Matthews Correlation Coefficient (MCC) [14]| Correlation between observed and predicted tuples                | Eq. (1)
12    | Informedness [15]                          | How informed the model is about the specified condition          | SP + SN − 1
13    | Markedness [15]                            | How marked a condition is for the model                          | Pr + NPV − 1
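The Table 3 formulae can be computed straight from the four confusion-matrix counts; the helper below is a sketch, and the counts passed in at the end are made-up example numbers.

```python
import math

# Compute the Table 3 measures from confusion-matrix counts (example values only).
def measures(tp, fp, tn, fn):
    p, n = tp + fn, tn + fp                       # actual positives / negatives
    pr, r = tp / (tp + fp), tp / p                # precision, recall
    sp, npv = tn / n, tn / (tn + fn)              # specificity, neg. pred. value
    return {
        "accuracy": (tp + tn) / (p + n),
        "balanced_accuracy": (r + sp) / 2,
        "recall": r, "specificity": sp, "precision": pr, "npv": npv,
        "fall_out": fp / n,
        "fdr": fp / (tp + fp),
        "fnr": fn / p,
        "f1": 2 * pr * r / (pr + r),
        "mcc": (tp * tn - fp * fn) /
               math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),  # Eq. (1)
        "informedness": sp + r - 1,
        "markedness": pr + npv - 1,
    }

m = measures(tp=20, fp=5, tn=25, fn=4)
```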

4 Results and Discussion This section discusses in detail the outcomes of the five classifier models used in the study, based on the measures listed in Table 3. The analysis was implemented in the R programming language using RStudio.


Table 4 Performance measure values based on the formulae in Table 3

Performance measure              | NBC   | DTC   | LR    | SVM   | RFC
Accuracy                         | 0.823 | 0.86  | 0.867 | 0.873 | 0.907
Balanced Accuracy                | 0.749 | 0.843 | 0.836 | 0.842 | 0.885
Recall                           | 0.536 | 0.808 | 0.702 | 0.788 | 0.845
Specificity                      | 0.962 | 0.878 | 0.904 | 0.896 | 0.926
Precision                        | 0.867 | 0.45  | 0.717 | 0.55  | 0.683
Negative Predictive Value        | 0.813 | 0.963 | 0.904 | 0.954 | 0.963
Fall-Out                         | 0.038 | 0.122 | 0.096 | 0.104 | 0.074
False Discovery Rate             | 0.133 | 0.55  | 0.283 | 0.45  | 0.317
False Negative Rate              | 0.464 | 0.192 | 0.298 | 0.212 | 0.155
F1-Measure                       | 0.66  | 0.535 | 0.68  | 0.622 | 0.734
Matthews Correlation Coefficient | 0.116 | 0.206 | 0.158 | 0.189 | 0.189
Informedness                     | 0.499 | 0.687 | 0.606 | 0.684 | 0.771
Markedness                       | 0.679 | 0.413 | 0.621 | 0.504 | 0.646

Fig. 2 Accuracy and balanced accuracy for all the classifiers

The collected dataset had missing values, which were imputed using predictive mean matching [16], and the numerical attributes were normalized using z-score normalization [17]. The processed dataset was split into training and test sets using the holdout method. Table 4 presents the performance measures for each classifier on the test set; the same results are shown graphically in Figs. 2, 3, and 4. A good classifier model should have high accuracy, recall, precision, sensitivity, specificity, and F1-measure [18], and a low false negative rate, false discovery rate, and false positive rate. The dataset used for analysis is imbalanced: class 'live' has 123 tuples and class 'die' has 32 tuples. Therefore, accuracy, balanced accuracy, precision, recall, and F1-measure alone are not sufficient to judge whether a


Fig. 3 Recall, precision, and F1-measure for all the classifiers

Fig. 4 Other performance measures for all the classifiers

classifier performed well or not. From Table 4, it can be inferred that the Random Forest Classifier outperformed the other models. Although other models scored better on a few individual measures, the differences were minimal. Hence, it can be concluded that the Random Forest Classifier performed best on the chosen dataset.
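The preprocessing pipeline described above (imputation, z-score normalization, holdout split) was done in R. The NumPy sketch below substitutes a simple column-mean fill for predictive mean matching, so it illustrates the pipeline's shape rather than the paper's exact method; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(50.0, 10.0, size=(155, 6))       # stand-in for 6 numeric columns
X[rng.random(X.shape) < 0.05] = np.nan          # simulate missing values

# Impute: column means stand in for predictive mean matching (an assumption).
col_mean = np.nanmean(X, axis=0)
X_imputed = np.where(np.isnan(X), col_mean, X)

# Z-score normalization, column-wise.
X_norm = (X_imputed - X_imputed.mean(axis=0)) / X_imputed.std(axis=0)

# Holdout split, 70/30.
idx = rng.permutation(len(X_norm))
cut = int(0.7 * len(X_norm))
train, test = X_norm[idx[:cut]], X_norm[idx[cut:]]
```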

5 Conclusion In this study, the performance of the different classifiers modeled on the hepatitis data from the UCI Repository was inspected. The classifiers used in this study are Logistic Regression, Naive Bayes Classifier, Support Vector Machine, Decision


Tree Classifier, and Random Forest Classifier. Various performance measures were used to evaluate and compare the classifier models. Based on the obtained results, the Random Forest Classifier outperformed the other classifiers with an accuracy of 90.7%. The model produced good accuracy on a small dataset, so it is likely to perform even better on a larger dataset, which would help diagnose Hepatitis C at an earlier stage.

References

1. M. Ramasamy, S. Selvaraj, and M. Mayilvaganan. An empirical analysis of decision tree algorithms: Modeling hepatitis data. In 2015 IEEE International Conference on Engineering and Technology (ICETECH), pages 1–4, 2015.
2. Huda Yasin, Tahseen A. Jilani, and Madiha Danish. Hepatitis-C classification using data mining techniques. International Journal of Computer Applications, 24(3):1–6, 2011.
3. World Health Organization et al. Global hepatitis report 2017. World Health Organization, 2017.
4. A.H. Roslina and A. Noraziah. Prediction of hepatitis prognosis using support vector machines and wrapper method. In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery, volume 5, pages 2209–2211, 2010.
5. S. Ekız and P. Erdogmus. Comparative study of heart disease classification. In 2017 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), pages 1–4, 2017.
6. Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
7. Strother H. Walker and David B. Duncan. Estimation of the probability of an event as a function of several independent variables. Biometrika, 54(1/2):167–179, 1967.
8. David J. Hand and Keming Yu. Idiot's Bayes: Not so stupid after all? International Statistical Review / Revue Internationale de Statistique, 69(3):385–398, 2001.
9. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
10. J. Ross Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
11. Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
12. Jiawei Han, Jian Pei, and Micheline Kamber. Data mining: concepts and techniques. Elsevier, 2011.
13. Tom Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–874, 2006.
14. B.W. Matthews. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442–451, 1975.
15. David Martin Powers. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. 2011.
16. Donald B. Rubin. Statistical matching using file concatenation with adjusted weights and multiple imputations. Journal of Business & Economic Statistics, 4(1):87–94, 1986.
17. D. Freedman, R. Pisani, and R. Purves. Statistics: Fourth International Student Edition. W.W. Norton & Company, 2007.
18. Vikas K. Vijayan, K.R. Bindu, and Latha Parameswaran. A comprehensive study of text classification algorithms. In 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 1109–1113. IEEE, 2017.

Voltage Stabilization by Using Buck Converters in the Integration of Renewable Energy into the Grid J. Suganya, R. Karthikeyan, and J. Ramprabhakar

1 Introduction To handle the challenges posed by intermittent energy sources, grid operators have put forward stricter technical requirements for connecting and operating such sources on the grid. These considerations, combined with a growing demand for lower cost, higher performance, and higher reliability, are driving a steady technical evolution in intermittent energy conversion systems. Renewable sources such as wind turbines and photovoltaic (PV) systems use natural resources and provide emission-free energy, and their integration into the grid is increasing throughout the world. In recent years renewable energy technologies have improved rapidly in performance, and they offer a practical way to reduce the emissions of non-renewable power plants. In the electricity sector, renewable energy accounted for 18.37% of the total installed capacity in India as of 31st October 2017. Renewable resources, especially photovoltaic systems, are an attractive alternative for electricity production because they are green and emission-free. To achieve this, however, a number of challenges need to be addressed and solutions

J. Suganya () · R. Karthikeyan Department of Electronics and Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, Bengaluru, India e-mail: [email protected]; [email protected] J. Ramprabhakar Department of Electrical and Electronics Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, Bengaluru, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_49


provided for them. Developments in power electronics technology have encouraged the efficient use of renewable sources. Feeding power produced by non-conventional sources into the main electrical system through power electronic inverters was a new idea. Small grids usually contain solar PV, which gives a DC output, or mini turbines, which give a high-frequency AC output; they need rectifiers and inverters to interface with the grid. Several microgrids, industrial operations, and maritime operations are looking at DC power as an alternative to conventional AC structures to improve performance, efficiency, flexibility, and reliability. Industrial and residential small grids, for instance, face problems of flexibility and stability [1]. Many proposals have been put forward to use the power converter that interfaces energy resources to the bus as part of the fault protection system [5]. Several methods have been described to sense faults and coordinate the action of power electronic devices and other protection elements so as to protect the electrical system, reduce the impact of a fault, and lower the risk of catastrophic shutdown [1]. The aim of this paper is to describe the response of the buck converter and its ability to provide fault current limiting, work together with protection elements, reduce the chance of catastrophic damage to the system, and improve the reliability and lifetime of the grid [6–8]. In our simulation the buck converter is used to hold the grid-side voltage constant even when the output voltage from the solar panel rises or falls. Two parameters, temperature (T) and irradiation factor (S), are taken into consideration [9–11]; changing these values changes the output power of the solar panel.
The simulation shows that the voltage level at the grid side remains constant despite changes in these factors; the buck converter is used to stabilize the voltage level on the grid side [12–14].

2 System Configuration In our system, the photovoltaic system is linked to the grid through an inverter and a buck converter (Fig. 1).


Fig. 1 Voltage stabilization system using buck converter

2.1 Solar Panel Solar energy is one of the most readily available forms of sustainable resource. Photovoltaic systems can be built in sizes from a few kW at the residential scale up to multiple MW at the utility scale. With the growing demand for electrical energy, rising petroleum prices, and the gradual decrease in PV system cost over the years, the opportunities for PV smart grid systems appear to be growing. Photovoltaic energy systems consist of arrays of solar cells, as in Fig. 2, which produce electricity from the light incident on them. The yield of a PV system depends primarily on the strength and duration of illumination. PV provides clean, emission-free energy conversion without any moving parts, and it has a long lifetime (>20 years). Still, much research remains to be done to increase the overall performance of the solar cell, the basic element of the photovoltaic system. The benefits of PV modules are low maintenance and easy expansion to meet increasing energy requirements, which encourages users to match a PV system to their needs. The high price, and the requirement that the load match the light-dependent output of the photovoltaic system, are the shortcomings of solar panels. When the sun's rays strike the panel, most of the light is lost: some is reflected off or passes through the panel, and a portion is converted


Fig. 2 Solar panel

Fig. 3 Buck converter

into heat. Only light of the right wavelengths, or colors, is absorbed and converted into electrical energy.

2.2 Buck Converter (Fig. 3) A buck converter is the most elementary form of switched-mode power supply. It is widely used in industry to convert a higher voltage into a lower output voltage [2]. In this work, the buck converter is used to interface the solar output to the grid. It delivers the required local voltage from a higher-voltage bus that is shared by several converters in the system. The step-down converter consists of an actively controlled switch, a diode, and filter components. This structure allows cost-effective, highly efficient power delivery for the application [3].
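In continuous conduction mode an ideal buck converter obeys Vout = D × Vin, so the duty cycle D sets the step-down ratio. The sketch below uses the voltage levels reported in the conclusion (78 V panel output, roughly 60 V at the converter output) purely as an example; the paper's actual regulation is a closed PI loop, not this open-loop relation.

```python
# Ideal buck relation Vout = D * Vin (continuous conduction mode).
def buck_duty_cycle(v_in, v_out):
    if not 0.0 < v_out <= v_in:
        raise ValueError("a buck converter can only step voltage down")
    return v_out / v_in

# Stepping the 78 V panel output down toward the 60 V level of the simulation.
d = buck_duty_cycle(78.0, 60.0)   # about 0.77
```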


2.3 Single Phase Inverter An inverter is an electrical device that converts direct current into alternating current [4]. The input of the inverter is a fixed DC voltage, typically obtained from batteries, and the output is a fixed- or variable-frequency AC voltage whose magnitude may also be varied. Inverters can be operated in two different conduction modes [4].

2.4 PI Controller The proportional term of the controller depends only on the present error, not on its past history. In a proportional-integral controller, the output is proportional to the sum of the error and the integral of the error signal:

C(t) = Kp x(t) + Ki ∫ x(t) dt

where C(t) is the controller output, Kp and Ki are the proportional and integral constants, and x(t) is the error signal (Fig. 4). Because the main sources (e.g. renewables, energy storage systems) are distributed throughout the network, small grids feature a large number of power devices that interface the power

Fig. 4 PI controller
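The PI law above can be illustrated with a small discrete-time loop; the first-order plant and the gain values below are toy assumptions for demonstration, not the paper's Simulink model or tuned gains.

```python
# Discrete-time PI controller driving a toy first-order plant toward a 60 V
# setpoint (illustrative gains; not the paper's tuned controller).
def pi_regulate(kp, ki, setpoint, y0, steps=5000, dt=1e-3):
    y, integral, history = y0, 0.0, []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral    # C(t) = Kp*x(t) + Ki*int(x dt)
        y += dt * (u - y)                 # toy plant: y' = u - y
        history.append(y)
    return history

out = pi_regulate(kp=5.0, ki=50.0, setpoint=60.0, y0=78.0)
```

The integral term drives the steady-state error to zero, so the output settles at the setpoint despite the plant dynamics.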

Fig. 5 Renewable microgrid with distributed generation (block diagram: grid input, circuit breakers, choke, contactors, utility load, PWM/MPPT charger, solar PV modules, bi-directional inverter, isolation transformer, battery bank, and control and monitoring)

sources to the main bus lines. In a renewable-integrated small grid, generation is distributed over a specific zone, with many access points both for power from the grid and for nearby renewable energy sources [1]. In such a system, it is vital that the arrangement react rapidly in the event of a fault in order to avoid damage to all elements connected to the system [1] (Fig. 5).

3 System Design and Results In this section, a DC power system is modeled and tested with a buck converter having voltage control capability. The converter circuit, solar panel output, and grid-side output voltage were measured in the simulation. The results show that for different values of temperature and irradiation constant, the output on the grid side remains constant.
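A common first-order way to model how T and S change panel output is to scale the rated power by irradiance and derate it linearly with temperature; this is a generic sketch, not necessarily the model inside the paper's simulation, and the temperature coefficient and rated power below are assumed values.

```python
# First-order PV output model: power scales with irradiance S (W/m^2) and
# derates with cell temperature T (deg C). gamma is an assumed typical
# temperature coefficient (about -0.4%/K); p_stc is rated power at standard
# test conditions (1000 W/m^2, 25 deg C).
def pv_power(p_stc, s, t, gamma=-0.004):
    return p_stc * (s / 1000.0) * (1.0 + gamma * (t - 25.0))

# The paper's test point T = 45, S = 600, for a hypothetical 100 W panel.
p = pv_power(p_stc=100.0, s=600.0, t=45.0)   # 55.2 W
```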

4 Advantages

• Prevents equipment damage
• Reduces equipment replacement
• Allows use of equipment with lower fault ratings
• Reduces voltage dips on adjacent feeders
• Improves the transient stability of the power grid


5 Applications
• Multi-terminal DC systems
• Protection of the entire power transmission and distribution system
• Protection of all electrical loads (fan, mixer, refrigerator, etc.) connected to the grid power supply

6 Conclusion In the proposed system, the closed loop protects the grid from high or low voltage. Buck converters are used to maintain a constant voltage, which reduces voltage dips on adjacent feeders. The simulation results show that varying the temperature and irradiation constant of the solar panel does not affect the voltage fed into the main grid. Simulated waveforms for T = 45 and S = 600 were obtained as below: output voltage from the solar panel = 78 V, output voltage of the buck converter = 60.03 V, and inverter output voltage = +60 V to −60 V. Simulated waves:

Result 1: Output voltage and current from solar panel

516

J. Suganya et al.

Result 2: Output voltage of buck converter and inverter

Result 3: Output voltage of solar panel, gate pulse, buck converter and inverter respectively (Fig. 6)

Fig. 6 Schematic of an AC-DC isolated buck converter


References

1. Pietro Cairoli, Rostan Rodrigues, Huaxi Zheng. Fault current limiting power converters for protection of DC microgrids. SoutheastCon 2017.
2. Dipali Sarode, Arti Wadhekar, and Rajesh Autee. Voltage source inverter with three phase preventer and selector for industrial application. 2015 International Conference on Pervasive Computing (ICPC), 2015.
3. Daniel W. Hart. Power Electronics. Tata McGraw-Hill, 2011.
4. www.ijsr.net
5. Adam Hirsch, Yael Parag, Josep Guerrero. Microgrids: A review of technologies, key drivers, and outstanding issues. Renewable and Sustainable Energy Reviews, 2018.
6. B.M. Prabhakar, J. Ramprabhakar, and V. Sailaja. Estimation and controlling the state of charge in battery augmented photovoltaic system. 2016 Biennial International Conference on Power and Energy Systems: Towards Sustainable Energy (PESTSE), 2016.
7. S. Shankar, J.K. Chatterjee, and M. Saxena. Reduction of grid current harmonic injection in cosinusoidal modulated matrix converter controlled induction generator for wind applications part-I. Annual IEEE India Conference (INDICON), 2011.
8. R. Rithu and R. Karthikeyan. Fuzzy tuned Proportional Integral controller with decouplers for quadruple tank process. International Journal of Control Theory and Applications, 10(9):831–839, 2017.
9. M. Rashid. Power Electronics Handbook (Academic Press Series in Engineering).
10. Rifdian Indrianto Sudjoko, Purwadi Agus Darwito. Design and simulation of synchronous generator excitation system using buck converter at motor generator trainer model LEMMGS. 2017 International Conference on Advanced Mechatronics, Intelligent Manufacture, and Industrial Automation (ICAMIMIA), 2017.
11. Shuva Paul, Md. Kamrul Alam Khan. Design, fabrication and performance analysis of solar inverter. 2013 IEEE Energytech, 2013.
12. H. Shehadeh, S. Favuzza, E. Riva Sanseverino. Electrostatic synchronous generator model of an inverter-based distributed generator. 2015 International Conference on Renewable Energy Research and Applications (ICRERA), 2015.
13. Cui Xiaodan, Lv Yazhou, Li Wei, Li Bijun, Sun Zhongqing, Wu Chenxi, Hu Yang, Li Xi. Effects of fault current limiter on the safety and stability of power grid and its application: A research review. 2016 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), 2016.
14. L. Qi, J. Pan, X. Huang, X. Feng. Solid-state fault current limiting for DC distribution protection. 2017 IEEE Electric Ship Technologies Symposium (ESTS), 2017.

OCR System For Recognition of Used Printed Components For Recycling Shubhangi Katti and Nitin Kulkarni

1 Introduction Optical character recognition (OCR) is the identification of handwritten or printed text by a computer. OCR systems are also used in electronic-waste recycling to separate e-waste based on material. Most work to date has focused on WEEE recycling either as a whole or fraction-wise, i.e., the PCB and polymer parts; work on electronic component reuse and recycling has not been reported so far, making this a new area of focus for technologists and practitioners [1]. Electronic components fall into two groups: color-coded components and printed components. Printed components carry printed information on their surface that is used to select the appropriate component when building a specific circuit. Printed electronic components are further categorized as passive (printed resistors, capacitors) and active (diodes, bipolar transistors, junction field effect transistors, voltage regulators, analog integrated circuits, and digital integrated circuits). These components come in different sizes, shapes, and colors. Many researchers have classified such components using algorithms such as object detection and recognition, blob analysis, and the Hough transform [2], but components with the same color, shape, and size yet different functionality require another solution. The difference between such components can be understood from the unique identification number and other relevant information printed on them. Industrial machine vision systems employ optical character verification technology to verify the information printed on integrated circuits [3]. Vision systems are employed in PCB

S. Katti () · N. Kulkarni Fergusson College, Pune, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_50



manufacturing industries to guide robotic systems for automated placement and soldering of electronic components [4].

2 Related Work Optical character recognition has become a prime application area in industrial machine vision systems, alongside automated vision for quality improvement. OCR systems are also employed in manufacturing industries for labeling products or printing technical information on them, in pharmaceutical industries, in the medical transcription field, etc. [5, 6]. OCR systems divide into two groups: online character recognition, used mostly in handwritten character recognition applications, and offline character recognition, used in document processing and information verification in pharmaceutical companies and various manufacturing industries. It has been observed that OCR is affected by parameters such as illumination variation, background texture, and background color. The success of a character recognition process depends largely on segmentation. In character recognition applications, segmentation is often achieved by simple thresholding, but selecting a proper threshold value for each image under different constraints is a difficult task that plays a very important role in the feature extraction process. As the characters are small and the distance between consecutive characters is very small, proper techniques must be used to isolate connected characters or incomplete structures. Text detection methods can be categorized as region based or texture based [7]. In texture-based methods, images are scanned at a number of scales and pixels are classified based on text properties such as a high density of edges and low gradients above and below the text; scanning at multiple scales increases the computational complexity [8].
In region-based methods, pixels exhibiting constant color are grouped together, and resulting connected components that are not text are filtered out geometrically [9]. Another approach to detecting text is the stroke width method employed by [10], who measured the stroke width at each pixel and merged neighboring pixels of similar stroke width into connected components that form the characters [9]. Tiny, low-contrast text printed on a package is difficult to read; a solution was suggested by [11], who converted the RGB image to HSV color space and selected the H channel to separate text from background. Isolated characters are matched using various methods, of which template matching has been mentioned by many researchers as the simplest. Template matching is a high-level machine vision technique that identifies the parts of an image (or multiple images) that match a given pattern. It can be used in manufacturing as part of quality control, as a way to navigate a mobile robot, or


as a way to detect edges in images [12]. In this paper we propose a machine vision system for classifying used electronic components into different categories using optical character recognition. We discuss the classification of printed electronic components into categories, viz., polar capacitors, disc capacitors, relays, printed resistors, transistors, and integrated circuits. As the components to be classified are made by different manufacturers, reading printed information with different fonts, different background colors, and different illumination levels is a very challenging task.

3 Methodology The machine vision system consists of a camera, a lighting system, a computer, software for processing the captured images, and a component sorting system. Images of printed electronic components, viz., disc capacitors, electrolytic capacitors, printed resistors, transistors, and integrated circuits, were acquired in digital form with a DSLR camera. Figure 1 shows five electrolytic capacitors from different manufacturers (first row), five disc capacitors (second row), three printed resistors (third row), one relay (third row, fourth column), one power transistor (third row, fifth column), one small-signal transistor (fourth row, first column), and four integrated circuits (fourth row, second to fifth columns). The acquired images were preprocessed with morphological operations to remove background noise and unwanted illumination and to improve image quality. They were then processed to isolate lines, words, and characters from each other as a preliminary step of character recognition. The extracted characters and words were stored as templates in a database for further use. The same operations were performed on the query image to extract the essential features (lines, words, and characters) for identification. The identification steps are given in the proposed algorithm section; the algorithm detects and recognizes machine-printed characters on the surface of components.

3.1 Proposed Algorithm

1. Read the image.
2. Convert the RGB image to HSV color space.
3. Binarize the image.
4. Detect maximally stable extremal regions (MSER).

522

S. Katti and N. Kulkarni

Fig. 1 Image collection

5. Remove non-text regions using blob parameters, viz. aspect ratio, eccentricity, Euler number, extent and solidity.
6. Isolate each character using connected components.
7. Match each isolated character against the templates in the database using the template matching algorithm.
8. Construct a word from the matched characters using the coordinates of the center of each matched character's bounding box.
9. Classify the component based on the printed information.

The above algorithm was implemented using the Computer Vision and Image Processing Toolboxes. Results obtained for a 100 uF 63 V polar capacitor are shown in Figs. 2 and 3. The binarized image of an electrolytic capacitor with the text highlighted in different colors is shown in Fig. 2. The highlighted text in Fig. 2 consists of the manufacturer's name on the topmost line, the value of the capacitance, i.e. 1000 microfarad, on the second line, and the voltage rating of the capacitor on the third and last line. Words and characters isolated from each other are shown in Fig. 3. As Fig. 3 shows, each digit of the capacitance value has been separated (extracted) using the regionprops function and the minimum bounding box region in MATLAB.
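Steps 7 and 8 match each isolated glyph against stored templates and then order the matches by bounding-box center to rebuild the printed word. A minimal pure-Python sketch, with hypothetical 3x3 glyph templates and a simple pixel-overlap score standing in for the toolbox's template matcher:

```python
def match_score(candidate, template):
    """Fraction of pixels that agree between two equally sized binary glyphs."""
    total = agree = 0
    for row_c, row_t in zip(candidate, template):
        for a, b in zip(row_c, row_t):
            total += 1
            agree += (a == b)
    return agree / total

def recognize(candidate, templates):
    """Return the template label with the highest overlap score (step 7)."""
    return max(templates, key=lambda label: match_score(candidate, templates[label]))

def assemble_word(matched):
    """Order matched characters by the x-coordinate of their bounding-box
    centers and concatenate them (step 8)."""
    return "".join(ch for _, ch in sorted(matched))

# Hypothetical 3x3 glyph templates for two characters
templates = {
    "1": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "0": [[1, 1, 1], [1, 0, 1], [1, 1, 1]],
}
glyph = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # isolated character to identify
label = recognize(glyph, templates)        # -> "1"

# (center_x, recognized_char) pairs, e.g. digits of a printed value
matched = [(12, "0"), (4, "1"), (20, "0")]
word = assemble_word(matched)              # -> "100"
```

Sorting by the x-coordinate of each bounding-box center is what restores reading order before the characters are concatenated into a word.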

OCR System For Recognition of Used Printed Components For Recycling


Fig. 2 Maximally stable extremal regions

Fig. 3 Isolated characters and words from Fig. 2

Fig. 4 Printed resistor 220E 5W with 5% tolerance

Similarly, results obtained for a 220E printed resistor are shown in Figs. 4 and 5, respectively. The captured image of the printed resistor is shown in Fig. 4; the resistor value, i.e. 220E, and its power rating have been extracted using the regionprops function followed by minimum bounding rectangles. All text information extracted from Fig. 4 is shown in Fig. 5. From these isolated characters, the technical specifications of the printed resistor have been identified using the position coordinates of each character's minimum bounding rectangle.

Fig. 5 Isolated characters and words from Fig. 4

4 Result and Conclusion

Identifying different printed components is a very challenging task because the technical information printed on the body differs from component to component. Some components carry the information on one line (disc capacitor, transistor), some on two lines (printed resistor), while in the case of capacitors the information appears on three lines, an integrated circuit may carry a logo on its body, and the relay has nine printed lines. It is therefore necessary to separate the lines, then the words, and finally the characters in order to recognize the different types of printed components. The optical character recognition system designed for classification of used printed electronic components performs better when the RGB image of a component is converted to HSV color space and the V component is used for further processing. Adaptive thresholding provides a better solution to the problem of low contrast and illumination variation. Template matching provides accurate results in identifying the printed characters of various components, especially electrolytic capacitors, printed resistors and integrated circuits. The OCR system could be used successfully in material-based e-waste recycling and reuse at the component level.
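The conclusion credits adaptive thresholding with handling low contrast and illumination variation. As a hedged sketch of one common variant, local-mean thresholding, each pixel is compared to the mean of its neighbourhood; the window radius, offset and toy image below are illustrative assumptions, not the paper's parameters:

```python
def adaptive_threshold(gray, radius=1, offset=0):
    """Binarize by comparing each pixel to the mean of its local
    (2*radius+1)^2 neighbourhood, minus a small offset. A pixel becomes
    foreground (1) when it is darker than its local mean - offset, which
    tolerates slow illumination gradients across the image."""
    rows, cols = len(gray), len(gray[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [gray[y][x]
                    for y in range(max(0, r - radius), min(rows, r + radius + 1))
                    for x in range(max(0, c - radius), min(cols, c + radius + 1))]
            local_mean = sum(vals) / len(vals)
            out[r][c] = 1 if gray[r][c] < local_mean - offset else 0
    return out

# Dark "text" pixels (20) on a background whose brightness drifts 100 -> 130:
gray = [[100, 110, 120, 130],
        [100,  20, 120,  20],
        [100, 110, 120, 130]]
binary = adaptive_threshold(gray, radius=1, offset=5)
```

A single global threshold would struggle with such a drifting background; the local mean adapts to it, so only the genuinely dark text pixels survive.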

5 Limitations of the System

In the case of transistors, the printed text must be accurately positioned facing the camera in order to capture an image of the text, and with the DSLR camera it was not possible to recognize the value of some of the disc capacitors.


References

1. Debnath, B., Roychowdhury, P., Kundu, R.: Electronic Components (EC) Reuse and Recycling – A New Approach towards WEEE Management. In: International Conference on Solid Waste Management (5IconSWM 2015). Procedia Environmental Sciences 35 (2016) 656–668. https://doi.org/10.1016/j.proenv.2016.07.060
2. Rau, H., Wu, C.H.: Automatic optical inspection for detecting defects on printed circuit board inner layers. Int. J. Adv. Manuf. Technol. 25 (2005) 940–946. https://doi.org/10.1007/s00170-004-2299-9
3. Lee, S.-C., Demidenko, S., Lee, K.-H.: IC handler throughput evaluation for test process optimization. IEEE Instrumentation & Measurement Technology Conference, May 2007
4. Teoh, E.K., Mital, D.P., Lee, B.W., Wee, L.K.: Automated visual inspection of surface mount PCBs. In: IECON '90, 16th Annual Conference of IEEE, Pacific Grove, CA, 27–30 Nov 1990, pp. 576–580. https://doi.org/10.1109/IECON.1990.149205
5. Nagarajan, R., Yaacob, S., Pandian, P., Karthigayan, M., Amin, S.H.J., Khalid, M.: A real time marking inspection scheme for semiconductor industries. Int. J. Adv. Manuf. Technol. 34(9–10) 926–932, August 2006
6. Bukovec, M., et al.: Automated visual inspection of imprinted pharmaceutical tablets. Measurement Science and Technology 18(9) (2007) 2921–2930
7. Chen, X., Yuille, A.: Detecting and reading text in natural scenes. Computer Vision and Pattern Recognition (2004), pp. 366–373
8. Lienhart, R., Wernicke, A.: Localizing and segmenting text in images and videos. IEEE Transactions on Circuits and Systems for Video Technology 12(4) (April 2002) 256–268
9. Liu, Y., Goto, S., Ikenaga, T.: A contour based robust algorithm for text detection in color images. IEICE Trans. Inf. & Syst. E89-D(3) (March 2006)
10. Epshtein, B., Ofek, E., Wexler, Y.: Detecting text in natural scenes with stroke width transform. Microsoft Corporation
11. Islam, N., Islam, Z., Noor, N.: A survey on optical character recognition system. Journal of Information and Communication Technology (JICT) 10(2) (December 2016)
12. Perveen, N., Kumar, D., Bhardwaj, I.: An overview on template matching methodologies and its applications. International Journal of Research in Computer and Communication Technology 2(10) (October 2013) 998

Modern WordNet: An Affective Extension of WordNet Dikshit Kumar, Agam Kumar, Man Singh, Archana Patel, and Sarika Jain

1 Introduction

A dictionary, sometimes called a word database, is an accumulation of words in one or more specific languages, often arranged in alphabetical order, which may include information on pronunciation, definitions, usage, translation, etc., whereas a thesaurus is a reference work that lists words grouped together according to similarity of meaning, such as synonyms [1]. Many thesauri and dictionaries exist on the web, namely Oxforddictionaries.com, Dictionary.com, the Cambridge Advanced Learner's Dictionary, Merriam-Webster, Macmillandictionary.com, Wiktionary, Collinsdictionary.com, WordWeb Online, Power Thesaurus, Hinkhoj, Your Dictionary and WordNet. Among these online dictionaries and thesauri, WordNet is the most widely used lexical system. Several versions of WordNet have been released (1.5, 1.6, 1.7, 1.7.1, 2.0, 2.1, 3.0 and 3.1); the newest version, 3.1, was released in November 2013. WordNet 3.0 is available for Unix-like systems, and the latest Windows version, 2.1, was published in March 2005 [2]. The WordNet database includes 155,287 words organized in 117,659 synsets (sets of synonyms) for a total of 206,941 word-sense (short definition) pairs [3]. It is a lexical database based on an ontology; an ontology typically contains representations and descriptions of objects, attributes, relationships, values and axioms [4]. Although WordNet is the most widely used online lexical system, working as both a dictionary and a thesaurus, it does not provide sentence categorization or visualization of the relations between words, it ignores prepositions, and it has a less effective user interface. There exists, however, a dictionary and thesaurus named "Your Dictionary"

D. Kumar () · A. Kumar · M. Singh · A. Patel · S. Jain Department of Computer Application, N.I.T. Kurukshetra, Kurukshetra, Haryana, India e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_51



D. Kumar et al.

which provides classification of sentences in terms of parts of speech [5]. We have therefore developed a modern WordNet, an extension of the existing WordNet. For this purpose we used the Java programming language, Protégé (an IDE for ontology development), servlets and HTML. The remainder of the paper is organized as follows: Sect. 2 reviews the existing dictionaries and thesauri, and Sect. 3 discusses the gaps. In Sect. 4 we describe the ontology development process, Sect. 5 presents our proposed modern WordNet, and the last section concludes the discussion and outlines future work.

2 Related Work

In this section we review dictionaries and thesauri that each endow a concrete illustration of a word. Each has some significant and marvellous aspects, but none provides every feature; we therefore discuss all the systems along with their strengths. The content of OXFORDDICTIONARIES.COM focuses on current English and shows how to use words and how to pronounce them correctly; its database includes world English and American English as different lexicons [6]. DICTIONARY.COM, released in May 1995, is an online dictionary [7] that provides synonyms and Hindi interpretations of a requested word, as well as the sense (definition) of the word being searched [8]. CAMBRIDGE UNIVERSITY issued its dictionary in 1995 for English learners and users; its database includes over 140,000 phrases, words and meanings, and Cambridge University made it freely available worldwide in 1999 [9]. MERRIAM-WEBSTER began offering its dictionary completely free of cost in 1996; it includes dictionaries of synonyms, English usage, slang, medical terms, geography, biography, proper names, Spanish/English, sports terms, and numerous others [10]. MACMILLANDICTIONARY.COM was released in 2002 for the progressive English learner by Macmillan Education [11]; this electronic dictionary is available on the web free of cost worldwide. WIKTIONARY does not only aim to incorporate the sense of a word, but emphasizes providing more knowledge to perceive it; synonyms, opposite words, sample quotations and translations are therefore incorporated [12]. SHABDKOSH was first brought online in 2003 by Maneesh Soni; it is an online dictionary that provides synonyms and Hindi interpretations of a requested word as well as its sense (definition) [13]. THEFREEDICTIONARY.COM is an American online dictionary that accumulates knowledge from various sources and provides a graphical view of the synonyms of a word [14]. COLLINSDICTIONARY.COM contains the words of British English; it uses the full strength of computer databases and compositors in its preparation, and its latest, 12th edition was released in October 2014 [15]. WORD WEB ONLINE was issued in December 2016; it is a worldwide English dictionary and thesaurus that partly uses the WordNet database [16]. POWER THESAURUS is a fast, free, extensive and easy-to-follow


online thesaurus for writers and learners; it presents results in alphabetical order and also lets the user sort results by length [17]. HINKHOJ is an Indian dictionary widely used in India; it is a Hindi-English dictionary that also provides synonyms, antonyms and translation, and helps users learn English by improving their vocabulary [18]. YOUR DICTIONARY is the easiest-to-use online dictionary, with free browser tools, widgets and more so that its full power is easily accessible [19]. WORDNET is a lexical database of words with semantic relationships among them; it organizes words into groups called synsets (i.e., synonym sets) [20] and is widely used for scientific applications such as information retrieval, text summarization and text classification [21]. Table 1 shows a comparative analysis of the existing online dictionaries and thesauri.

3 Gaps and Scope

WordNet is the most widely used and most famous lexical system among the existing online dictionaries and thesauri. After examining WordNet, we found some important aspects on which it needs work [22]:

(a) Sentence categorization: the most important facet not supported by the existing WordNet. Sentence categorization means categorizing the words of a sentence into parts of speech. For example, for "He is King" the result should be: "He" is a pronoun, "is" is a helping verb and "King" is a noun.
(b) WordNet ignores prepositions, although it includes the lexical categories verb, adjective, noun and adverb.
(c) It does not let the user sort the returned results by length or by complexity.
(d) It has a less impressive user interface.
(e) It does not provide visualization of the relationships among words.
(f) Semantic sentence facility: also an important aspect not provided by WordNet. Two sentences with different structure and phrasing that nevertheless have the same meaning are called semantic sentences. For example, "I sold a pan to Mohit" and "Mohit bought a pan from me" have different structure and phrasing, but the same meaning.

These features are the soul of a dictionary and thesaurus. In this paper we address the problem of sentence categorization by developing a modern WordNet.

Table 1 Comparison of different dictionaries and thesauri

Sr. No  Title                    Synonym  Definition (Sense)  Multilingual  Word's Complexity  Visualization  Ontology Based  Alphabet Ordering  Sound  Classification
A.      Oxforddictionaries.Com   Y        Y                   Y             N                  N              N               N                  Y      N
B.      Dictionary.Com           Y        Y                   N             Y                  N              N               Y                  Y      N
C.      Cambridge Dictionary     Y        Y                   Y             N                  N              N               Y                  Y      N
D.      Merriam-Webster          Y        Y                   N             N                  N              N               Y                  Y      N
E.      Macmillandictionary.Com  Y        Y                   N             N                  N              N               Y                  Y      N
F.      Wiktionary               Y        Y                   Y             N                  N              N               Y                  Y      N
G.      Shabdkosh                Y        Y                   N             N                  N              Y               N                  Y      N
H.      Thefreedictionary.Com    Y        Y                   Y             N                  Y              N               N                  Y      N
I.      Collinsdictionary.Com    Y        Y                   Y             N                  N              N               N                  Y      N
J.      Wordweb Online           Y        Y                   Y             N                  N              Y               N                  Y      N
K.      Power Thesaurus          Y        Y                   N             Y                  N              N               Y                  Y      N
L.      Hinkhoj                  Y        Y                   Y             N                  N              N               N                  Y      N
M.      Your Dictionary          Y        Y                   N             N                  N              N               N                  N      Y
N.      Wordnet 3.1              Y        Y                   Y             N                  N              Y               Y                  N      N



4 Ontology Development

WordNet stores all concepts in a knowledge base called an ontology. An ontology defines a set of classes, attributes and relationships, which are its representational primitives [23]. Classes can be defined by intension or by extension: under an intensional definition they are abstract objects defined by the values of aspects that constrain membership in the class, whereas under an extensional definition they are sets, collections of objects, or abstract groups. Object properties link individuals to individuals of classes, whereas data properties relate individuals to literal data [24]. For example, when creating a table in a relational database we decide the attributes of the table and also their types (integer, numeric, char, etc.); with respect to the ontology, those types are the data properties. Individuals, or instances, are the basic "ground level" components of an ontology; they can include abstract individuals such as words and numbers as well as concrete objects such as animals, tables, people, planets and automobiles. Figure 1 shows all classes of the existing and modern WordNet ontologies. The topmost class of WordNet is Thing, which is created by the Protégé editor. In the modern WordNet the next level of classes comprises "words" and "classification". The class "words" is essentially a container holding all English words as its instances; these instances may be any English word, e.g. 'good', 'bad', 'weep', 'laugh', 'talk'. The class "words" is divided into five subclasses, namely 'noun', 'adjective', 'verb', 'preposition' and 'adverb'. An English word can have many synonyms and definitions, so the synonyms and definitions of a word are classified among these classes. Suppose 'good' is an instance of the class 'words': its noun synonyms, such as 'advantage', belong to the class 'noun', while its adjective synonyms and definitions, such as 'fine' and 'satisfactory in quality', belong to the class 'adjective', and similarly for the other classes at this level. Our modern WordNet has two object properties, namely 'has synonyms' and 'has definition'. The 'has synonyms' object property maps an instance of the class 'words' to its subclasses' instances, e.g. 'good' has the synonym 'advantage'. Similarly, 'has definition' maps an instance of the class 'words' to an instance of its subclasses, e.g. 'good' has the definition 'satisfactory in quality'. Our modern WordNet differs from the existing WordNet in three ways: preposition, verb, and classification. WordNet does not classify words as prepositions; to overcome this shortcoming we created the class 'preposition'. For example, consider the English preposition 'at', which belongs to the class 'words': its synonyms, such as 'by' and 'near to', are parts of the class 'preposition'. Figure 1a shows that 'preposition' is not part of the WordNet ontology but does belong to the modern WordNet ontology. The second difference between the existing and the modern WordNet is the "verb" class. The WordNet ontology has two classes for verbs, 'verbsynset' and 'verbsense'. We have divided the "verb" class into two subclasses, named 'helping verb' and 'main verb', because this provides classification of


Fig. 1 (a) Existing WordNet and (b) modern WordNet

sentences in a more detailed form and makes implementation easier. The instances of the class 'helping verb' can be 'is', 'am', 'are', 'was', 'were', etc., and the instances of the class 'main verb' can be any appropriate English word, such as 'happen' and 'do'. The class "classification" has two subclasses, 'pronoun' and 'articles', which are required for classifying sentences. Pronouns such as 'he', 'she' and 'they' are instances of the subclass 'pronoun', and the subclass 'articles' contains 'a', 'an' and 'the' as its instances.
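The 'has synonyms' and 'has definition' object properties described above can be mimicked with a small in-memory structure. This is only an illustrative Python stand-in for the Protégé ontology, not the actual OWL model; the entries for 'good' and 'at' follow the examples in the text:

```python
# A toy in-memory stand-in for the ontology: each entry records the word's
# classes and, per class, its synonyms and definitions (illustrative values).
ontology = {
    "good": {
        "classes": ["noun", "adjective"],
        "has_synonyms": {"noun": ["advantage"], "adjective": ["fine"]},
        "has_definition": {"adjective": ["satisfactory in quality"]},
    },
    "at": {
        "classes": ["preposition"],
        "has_synonyms": {"preposition": ["by", "near to"]},
        "has_definition": {},
    },
}

def synonyms(word):
    """Collect synonyms across all classes ('has synonyms' object property)."""
    entry = ontology.get(word, {})
    return [s for syns in entry.get("has_synonyms", {}).values() for s in syns]

def definitions(word):
    """Collect definitions across all classes ('has definition' property)."""
    entry = ontology.get(word, {})
    return [d for defs in entry.get("has_definition", {}).values() for d in defs]
```

Here `synonyms("good")` gathers 'advantage' (noun sense) and 'fine' (adjective sense), mirroring how the object properties link a word instance to instances of the part-of-speech subclasses.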

5 Proposed Modern WordNet

In this section we describe the working of our modern WordNet in three phases: the first phase is a dictionary, the second is a thesaurus, and the last phase, which makes our modern WordNet unique, is classification of sentences. Figure 2 shows the flow of the modern WordNet through a three-level architecture in which a request flows from the user to the ontology and, after processing, back again. At the first level the user makes a request, which is passed to the next level, where a servlet receives the query and returns the query result. After receiving the query, the servlet sends it to the level below, where the query is processed and the result is sent back to the user. In short, the user's request travels from the top level to the bottom, and after processing the result travels from the last level back to the first. The figure also shows the techniques used at each level, such as HTML and CSS, servlets and the JENA API. The application runs on an Apache Tomcat server in the back end.


Fig. 2 Flow chart of modern WordNet

Fig. 3 Synonym of “alpine”

5.1 Working as Thesaurus

After clicking the Thesaurus tab, a new page loads in the same window. We enter the word to search in the search box and press the Submit button. The request then goes to the second layer, and the servlet sends it to the third layer, where the JENA API receives and processes it. To process the request, the API takes an ontology file as input and runs the following SPARQL query on it:

SELECT ?Synonyms WHERE { syn:alpine syn:has_synonyms ?Synonyms }

After running the SPARQL query, the result is sent back to the servlet, which returns it to the user at the user interface. The synonyms of the word 'alpine' are shown in Fig. 3.


5.2 Working as Dictionary

The dictionary works in the same way as the thesaurus, but with a different SPARQL query. To use the dictionary we go to the 'Dictionary' tab and enter the word to search in the search text field; after submitting, the request is processed and the result is returned. The SPARQL query that runs in the back end is given below:

SELECT ?Definition WHERE { syn:alpine syn:has_definition ?Definition }

In Fig. 4 we get the definition of the word 'alpine' as the result; the result contains three definitions of 'alpine'.

5.3 Working as Sentence Classifier

In the third phase we classify sentences into parts of speech. There are eight parts of speech: nouns, pronouns, adjectives, verbs, adverbs, prepositions, conjunctions and interjections. For example, for the sentence "She is a girl", the result according to the parts of speech should be: she (pronoun) is (helping verb) a (article) girl (noun). When a sentence is written in the search text field and submitted, the application receives and processes it. The application first breaks the sentence into words (tokenization); the Java String method public String split(String regex) splits the string on whitespace, and the resulting words are stored in an array of String. The SPARQL query for classification is given below:

SELECT DISTINCT ?Classification WHERE { syn:q rdf:type ?Classification }

This query returns the class name of each word provided as input. According to the example of Fig. 5, 'she' is an instance of the class Pronoun, 'is' is an

Fig. 4 Definition of “alpine”


Fig. 5 Sentence classification

Table 2 Tools

S. No.  Purpose                              Languages
1       Programming language                 JRE 1.8.0_101, JDK 1.8.0_101
2       IDE                                  Eclipse Luna
3       Knowledge storage structure          Ontology
4       Ontology editor                      Protégé 4.3
5       Query language                       SPARQL
6       Application programming interface    JENA API, OWL API
7       Server                               Apache Tomcat
8       Client-side technology               HTML 5.0, CSS 3.0
9       Server-side technology               Servlet

instance of the class Helping Verb and 'good' is an instance of the class Adjective. After running the query, the application combines the results and sends them back to the user, who sees a result like that of Fig. 5. In the result, 'she' is a 'pronoun' and 'is' is a 'helping verb', both of which are correct; 'good' can be used as a noun or an adjective, and here it is an adjective. For the implementation of the methods and procedures we used the Java programming language, which handles the user's request in the back end and disseminates the result to the user. Table 2 lists all the tools used in the development phase. The dictionary and thesaurus resources are stored in an ontology, created with the Protégé 4.3 editor. For reasoning we used SPARQL queries, which draw implicit information from the ontology. We used the JENA API, which contains all the classes and interfaces needed to work with the ontology, similar in spirit to JDBC; it is used for querying, manipulating and storing ontology data. We used the Apache Tomcat server to deploy our Java servlets. HTML is used to create the web pages and CSS to style them; we use both on the client side to make our interface more effective, and on the server side we use a servlet for request processing.
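The classification flow above, whitespace tokenization followed by a per-word class lookup, can be sketched in a few lines. The paper's implementation uses Java servlets and a SPARQL query over the ontology; this Python sketch and its lookup table are hypothetical stand-ins for that pipeline:

```python
# Hypothetical word-to-class lookup standing in for the ontology query
# SELECT DISTINCT ?Classification WHERE { syn:q rdf:type ?Classification }
WORD_CLASS = {
    "she": "pronoun", "he": "pronoun", "they": "pronoun",
    "is": "helping verb", "am": "helping verb", "are": "helping verb",
    "a": "article", "an": "article", "the": "article",
    "girl": "noun", "king": "noun",
}

def classify_sentence(sentence):
    """Tokenize on whitespace (the Java code uses String.split) and
    attach each token's part-of-speech class from the lookup table."""
    tokens = sentence.lower().split()
    return [(tok, WORD_CLASS.get(tok, "unknown")) for tok in tokens]

result = classify_sentence("She is a girl")
# [('she', 'pronoun'), ('is', 'helping verb'), ('a', 'article'), ('girl', 'noun')]
```

Words absent from the ontology fall through to an "unknown" class, which is where ambiguity handling (e.g. 'good' as noun versus adjective) would need more context than a plain lookup provides.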


6 Conclusions and Future Scope

Our main aim in this paper is to fulfill one missing aspect of WordNet, the classification of sentences, and to illustrate why WordNet has higher significance than the other available lexical systems. WordNet supports substantial future work and broad scope. We have discussed several features that WordNet does not support and have added one of them, the classification of sentences. Other features, such as visualization of the relationships among words and a semantic sentence facility, could be added to make it more effective and attractive.

References

1. Miller, G.A., Beckwith, R., Fellbaum, C.D., Gross, D., Miller, K.: WordNet: an online lexical database (1990)
2. WordNet, https://en.wikipedia.org/wiki/WordNet, last accessed 20/8/2018
3. Miller, G.A., Charles, G.: Contextual correlates of semantic similarity. Language and Cognitive Processes 6(1) (Feb. 1991)
4. Patel, A., Jain, S., Shandilya, S.K.: Data of Semantic Web as Unit of Knowledge. Journal of Web Engineering 17(8) (2018) 647–674
5. Yourdictionary, http://www.yourdictionary.com/about.html, last accessed 14/9/18
6. OxfordDictionaries, https://en.wikipedia.org/wiki/OxfordDictionaries.com, last accessed 8/9/2018
7. Dictionary.com WHOIS, DNS, & Domain Info, DomainTools.com, retrieved June 16, 2016, last accessed 7/9/2018
8. Dictionary.com, https://en.wikipedia.org/wiki/Dictionary.com, last accessed 26/8/2018
9. Cambridge Dictionary, https://dictionary.cambridge.org/about.html, last accessed 1/9/2018
10. Merriam-Webster, https://en.wikipedia.org/wiki/Merriam-Webster, last accessed 7/9/2018
11. Macmillan Dictionary, https://en.wikipedia.org/wiki/Macmillan_English_Dictionary_for_Advanced_Learners, last accessed 8/9/2018
12. Wiktionary, https://en.wikipedia.org/wiki/Wiktionary, last accessed 10/9/2018
13. Shabdkosh, https://hi.wikipedia.org/wiki/shabdkosh, last accessed 11/9/2018
14. TheFreeDictionary, https://en.wikipedia.org/wiki/TheFreeDictionary.com, last accessed 4/8/2018
15. Collins English Dictionary [12th edition], Collins.co.uk, HarperCollins Publishers Ltd., retrieved 6 December 2014, last accessed 12/9/2018
16. WordWeb, https://en.wikipedia.org/wiki/WordWeb, last accessed 15/9/2018
17. PowerThesaurus, https://www.powerthesaurus.org/_about, last accessed 13/9/2018
18. Hinkhoj, http://dict.hinkhoj.com/, last accessed 11/9/2018
19. Yourdictionary, http://www.yourdictionary.com/about.html, last accessed 16/9/2018
20. Miller, G.A., Beckwith, R., Fellbaum, C.D., Gross, D., Miller, K.: WordNet: an online lexical database. Int. J. Lexicograph.
21. Morato, J., Marzal, M.A., Llorens, J., Moreiro, J.: WordNet applications
22. Kumar, D., Kumar, A., Singh, M., Patel, A., Jain, S.: An online dictionary and thesaurus. Journal of Artificial Intelligent Research and Advances
23. Jain, S.: Intelligent Decision Support for Unconventional Emergencies. In: Exploring Intelligent Decision Support Systems, pp. 199–219, Springer, Cham (2018)
24. Patel, A., Sharma, A., Jain, S.: An intelligent resource manager over terrorism knowledge base. Recent Patents on Computer Science 12(1) (2019)

Analysis of Computational Intelligence Techniques for Path Planning Monica Sood, Sahil Verma, Vinod Kumar Panchal, and Kavita

1 Introduction

Path planning can be considered an optimization problem under various constraints, with the objective of determining the shortest optimal path between defined end points. In different applications, path planning is considered from different perspectives depending on the problem to be solved [1]. In this research work we present an analysis of existing path planning concepts based on computational intelligence techniques. The work from 2011 to 2018 is surveyed with respect to the following research questions:

RQ1: Based on the considered work, what categories of computational intelligence techniques have been defined for path planning, and why?
RQ2: Which category of computational intelligence techniques dominates the others for path planning?
RQ3: What is the year-wise distribution of the considered computational intelligence techniques?
RQ4: Which computational intelligence technique is most often used for path planning?
RQ5: Which types of obstacles are most often handled by researchers in their work?
RQ6: Which type of system is most often considered by researchers for path planning?
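As a minimal illustration of the setting these questions address, finding a shortest obstacle-free path between two end points on a grid, here is a breadth-first search with static obstacles. This is deliberately not one of the surveyed computational intelligence techniques, just a baseline sketch of the problem being optimized:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid; 1 marks a static obstacle.
    Returns the shortest obstacle-free path as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk parent pointers back to the start to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],  # a wall of static obstacles forces a detour
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
```

The surveyed metaheuristics (ACO, PSO, CS, etc.) tackle the same objective in larger or partially unknown spaces, where exhaustive search like this becomes impractical.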

M. Sood () · S. Verma · Kavita Department of Computer Science and Engineering, Lovely Professional University, Phagwara, India e-mail: [email protected]; [email protected] V. K. Panchal Computational Intelligence Research Group (CiRG), Delhi, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_52


M. Sood et al.

The rest of the paper is organized as follows: Sect. 2 presents the considered work related to path planning using computational intelligence techniques. Section 3 presents the analysis of the considered research contributions along with the answers to the proposed research questions. Section 4 concludes the paper with future research directions.

2 Related Work This section presents the study of various computational intelligence techniques for path planning. The literature is loaded with numerous papers in the domain of path planning. However, their authenticity and profundity need to be analyzed. Various papers were studied and only those which were actually containing significant information from different perspectives are considered. This study includes the work of numerous authors from the year 2011 to 2018. Table 1 illustrate the work of several authors for path planning using computational intelligence techniques in the mentioned timeframe. The considered research contributions are discussed here. Zhu et al. [2] proposed an improved ant colony-based algorithm to identify a collision free optimal path in an unknown environment with static/dynamic obstacles. Englot and Hover [3] extended the popular travelling salesman problem of ACO to multi-goal path planning problem. Huang and Tsai [4] combined the advantages of GA and PSO for collision free global path planning. Wang et al. [5] used the concepts of CS with differential evolution to find an optimal path for the air vehicles. Sood and Kaples [6] combined the algorithm of PSO with bee colony optimisation (BCO) for cross country path planning. Further, Sood and Kaur [7] implemented BBO and BCO for the cross country path planning. Liu et al. [8] designed an adaptive firefly algorithm (FA) for path planning in a known environment. Further, Wang et al. [9] modified the standard FA for path planning in unmanned combat aerial vehicle (UCAV). Wang et al. [10] introduced bat algorithm for path planning in unmanned combat air vehicles. Alajlan et al. [11] investigated the approach of genetic algorithm for path planning in large scale search space. Further, Chen et al. [12] developed a two stage ACO algorithm for robotic path planning. Zhang et al. 
[13] developed a multi-objective PSO-based path planning algorithm for robots in the presence of inadequate environmental conditions. Further, Mohanty and Parhi [14] proposed a CS-based optimisation concept for robotic path planning in an unknown environment with multiple obstacles. The authors [15] continued this work and proposed a new variant of the CS algorithm for the path planning problem in mobile robot navigation. Châari et al. [16] investigated the Tabu search algorithm for path planning in grid environments. Ju et al. [17] hybridised GA with PSO to eliminate the issues with GA. Duan and Huang [18] proposed a hybrid method based on an artificial neural network (ANN) and the imperialist competitive algorithm (ICA) for path planning in UCAVs. Montiel et al. [19] implemented the bacterial potential field (BPF) optimisation algorithm for mobile robot path planning. Behnck et al.

Table 1 Research contributions from 2011 to 2018

Authors and year | Method | Obstacle type | System type
Zhu et al. (2011) [2] | ACO | Static and dynamic | Mobile robot system
Englot and Hover (2011) [3] | ACO | Static | Hovering underwater vehicle
Huang and Tsai (2011) [4] | PSO and GA | Static | Mobile robot system
Wang et al. (2012) [5] | Cuckoo search | N/A | Uninhabited combat air vehicle
Sood and Kaplesh (2012) [6] | PSO and BCO | Static | Autonomous outdoor path planning
Sood and Kaur (2012) [7] | BBO and BCO | Static | Autonomous outdoor path planning
Liu et al. (2012) [8] | Firefly algorithm | Static | Mobile robot system
Wang et al. (2012) [9] | Firefly algorithm | N/A | Uninhabited combat air vehicle
Wang et al. (2012) [10] | Bat algorithm | N/A | Uninhabited combat air vehicle
Alajlan et al. (2013) [11] | Genetic algorithm | N/A | Mobile robot system
Chen et al. (2013) [12] | ACO | N/A | Mobile robot system
Zhang et al. (2013) [13] | PSO | Dynamic | Robot system
Mohanty and Parhi (2013) [14] | Cuckoo search | Static | Mobile robot system
Mohanty and Parhi (2014) [15] | Cuckoo search | Static | Mobile robot system
Châari et al. (2014) [16] | Tabu search | Static | Mobile robot system
Ju et al. (2014) [17] | PSO and GA | Static | Mobile robot system
Duan and Huang (2014) [18] | Neural network | N/A | Unmanned combat aerial vehicle
Montiel et al. (2015) [19] | Bacterial potential field | Static and dynamic | Mobile robot system
Behnck et al. (2015) [20] | Simulated annealing | N/A | Small unmanned aerial vehicle
Contreras-Cruz et al. (2015) [21] | ABC and evolutionary programming | Static | Mobile robot system
Panda et al. (2016) [22] | PSO and tabu search | Static | Multi-robot system
Lee and Kim (2016) [23] | Genetic algorithm | Static | Mobile robot system

(continued)

Analysis of Computational Intelligence Techniques for Path Planning 539

Table 1 (continued)

Authors and year | Method | Obstacle type | System type
Lu et al. (2016) [24] | Neural network | Static and dynamic | NAO bio-mimetic robot
Das et al. (2016) [25] | PSO and GSO | Static | Multi-robot system
Zeng et al. (2016) [26] | PSO | Static | Intelligent robot system
Liu et al. (2016) [27] | PSO | Static | Unmanned aerial vehicular system
Das et al. (2016) [28] | PSO | Static | Multi-robot system
Ayari and Bouamama (2017) [29] | PSO | Static | Multi-robot system
Lin et al. (2017) [30] | Genetic algorithm | Static and dynamic | Compliant robot system
Ni et al. (2017) [31] | Neural network | Static and dynamic | Underwater robotic vehicle
Chen et al. (2017) [32] | Fuzzy logic and neural network | Static | Autonomous off-road vehicle
Mac et al. (2017) [33] | PSO | Static and dynamic | Mobile robot system
Zhang et al. (2017) [34] | ACO | Dynamic | Tourism and transport system
Zhang et al. (2017) [35] | PPPIO | Dynamic | Uninhabited combat aerial vehicle
Ghosh et al. (2017) [36] | FPA and BA | Static | Mobile robot system
Zhang et al. (2018) [37] | PSO and DE | Static | Mobile robot system
Bayat et al. (2018) [38] | EPF approach | Static | Mobile robot system
Bibiks et al. (2018) [39] | Improved discrete CS | Static | Resource-constrained project scheduling
Patle et al. (2018) [40] | Firefly algorithm | Static and dynamic | Mobile robot navigation system
Goel et al. (2018) [41] | GWSO | Dynamic | Unmanned aerial vehicles

540 M. Sood et al.


[20] modified the simulated annealing algorithm for the path planning of small UAVs. An integrated technique of ABC with evolutionary programming (EP) was introduced by Contreras-Cruz et al. [21] for mobile robot path planning. Panda et al. [22] implemented a hybrid of PSO and Tabu search for path planning. Lee and Kim [23] focused on the population initialisation method in a conventional GA and proposed a modified GA with an effective initialisation method. Lu et al. [24] used a deep convolutional NN for the path planning of the NAO bio-mimetic robot in static and dynamic environments. Das et al. [25] proposed a combined PSO and gravitational search optimisation (GSO) algorithm for path planning of a multi-robot system in a cluttered environment. Zeng et al. [26] used an improved PSO with differential evolution and a non-homogeneous Markov chain. Further, Liu et al. [27] used PSO for path planning with an unmanned aerial vehicle (UAV). Das et al. [28] hybridised an improved particle swarm optimisation (IPSO) with a differentially perturbed velocity (DV) algorithm to identify multi-robot optimal paths in a cluttered environment. Ayari and Bouamama [29] developed a dynamic distributed PSO (D2PSO) algorithm to identify collision-free optimal paths in a static, known environment. Lin et al. [30] improved GA-based compliant robot path planning (GACRPP) by improving the population initialisation using an improved bidirectional rapidly-exploring random tree (Bi-RRT) approach. Ni et al. [31] used a bio-inspired neural network (BINN) for path planning with an underwater mobile robot system. Chen et al. [32] proposed a novel concept of a fuzzy support vector machine (FSVM) based on a general regression neural network (GRNN) to improve the anti-jamming ability of autonomous vehicles. Mac et al. [33] developed a three-level scenario for the path planning of mobile robots: the first level is triangular-decomposition-based space setting, the second level is path finding based on Dijkstra's algorithm, and the final level uses particle swarm optimisation for path smoothing and parameter optimisation. Zhang et al. [34] modified ant colony optimisation for path planning in a single scenic area with multiple spots. Zhang et al. [35] replaced the basic pigeon-inspired optimisation algorithm with an improved predator-prey pigeon-inspired optimisation (PPPIO) for path planning in uninhabited combat aerial vehicles; the proposed algorithm reported improved results in comparison with pigeon-inspired optimisation, differential evolution, and particle swarm optimisation. Ghosh et al. [36] used the swarm intelligence-based bat algorithm (BA) and flower pollination algorithm (FPA) for path planning in unknown cluttered environments. In 2018, Zhang et al. [37] modified the bare-bones particle swarm optimisation algorithm by integrating a modified differential evolution (DE) algorithm. Bayat et al. [38] used a charged-particle-based electrostatic potential field (EPF) approach for the path planning of mobile robots. Bibiks et al. [39] improved the cuckoo search algorithm for the resource-constrained project scheduling problem; this optimisation of scheduling can be extended to path planning. Patle et al. [40] used the firefly algorithm for mobile robot navigation path planning under uncertain environmental conditions. Goel et al. [41] worked on the path planning of unmanned aerial vehicles using the swarm intelligence-based glow-worm swarm optimisation (GWSO) concept.
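Most of the surveyed swarm methods share a common pattern: encode a candidate path as a sequence of waypoints and let a population-based optimiser minimise path length plus an obstacle-penalty term. The following is a minimal, illustrative PSO sketch of that pattern (not the implementation of any surveyed paper; the obstacle layout, penalty weight, and PSO parameters are invented for demonstration):

```python
import numpy as np

def path_cost(waypoints, start, goal, obstacles, radius, penalty=100.0):
    """Path length plus a penalty for path points inside circular obstacles."""
    pts = np.vstack([start, waypoints.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # Penalise any path point that falls inside an obstacle disc.
    d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
    violations = np.sum(np.maximum(0.0, radius - d))
    return length + penalty * violations

def pso_plan(start, goal, obstacles, radius, n_way=5, swarm=30, iters=200,
             w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = n_way * 2
    pos = rng.uniform(0, 10, (swarm, dim))   # each particle = flattened waypoints
    vel = np.zeros((swarm, dim))
    cost = np.array([path_cost(p, start, goal, obstacles, radius) for p in pos])
    pbest, pbest_cost = pos.copy(), cost.copy()
    g = pbest[np.argmin(pbest_cost)].copy()  # global best
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        cost = np.array([path_cost(p, start, goal, obstacles, radius) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g.reshape(-1, 2), pbest_cost.min()

start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = np.array([[5.0, 5.0], [3.0, 7.0]])   # hypothetical obstacle centres
path, cost = pso_plan(start, goal, obstacles, radius=1.5)
```

The straight-line distance between the chosen start and goal is about 14.14, so a good collision-free path should report a cost only slightly above that.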


3 Research Analysis and Discussion

This section presents the research analysis and discussion based on the considered research contributions, along with the answers to the proposed research questions. The analysis is presented on the basis of publication distribution, categories of computational intelligence, distribution of obstacle types, and distribution of computational intelligence methods used for path planning. In this work, a total of 40 research publications have been discussed. Figure 1 presents the year-wise publication distribution from 2011 to 2018; it can be observed that efficient and recent research publications have been considered to analyse contributions in the path planning field. Further, Fig. 2a presents the distribution of publications based on categories of computational intelligence. The selected contributions are discussed by dividing computational techniques into two categories: swarm intelligence-based concepts and other techniques. In Fig. 2a, one more category, termed 'Both', has been added for works that use both swarm intelligence and other techniques. From Fig. 2a, it can be seen that 62.50% of the research contributions are based solely on swarm intelligence techniques, 25% on other categories of computational intelligence techniques, and 12.50% on both swarm intelligence and other computational intelligence concepts. This indicates the growing adaptability of swarm intelligence techniques for path planning. Further, Fig. 2b indicates the distribution of contributions based on the types of obstacles handled during path planning. For this, three categories are considered: Static, Dynamic, and Both. Research contributions whose obstacle-handling information was not available are excluded from this distribution. From Fig. 2b, it can be observed that more research contributions handle static obstacles. Further, the proposed research questions are answered

Fig. 1 Year-wise research publications (bar chart covering 2011–2018)


Fig. 2 (data recovered from the pie charts): (a) categories of computational intelligence: Swarm 62.50%, Other 25.00%, Both 12.50%; (b) obstacles handled: Static 66.67%, Dynamic 12.12%, Both 21.21%

Fig. 2 (a) Distribution based on categories of computational intelligence techniques. (b) Distribution of publications based on types of obstacles handled

Table 2 Answers to proposed research questions

RQ1: Based on the considered work, what are the categories of computational intelligence techniques defined for path planning, and why?
Answer: The categorisation of computational intelligence techniques into swarm intelligence and an 'other' category reflects the changing adaptability of researchers towards swarm intelligence techniques.

RQ2: Which category of computational intelligence techniques dominates the others for path planning?
Answer: The major contribution is from swarm intelligence techniques, with 62.50%. Moreover, swarm intelligence techniques have global optimisation properties, which makes them more efficient at providing optimal solutions.

RQ3: What is the overall year-wise distribution of the considered computational intelligence techniques?
Answer: The overall year-wise distribution of the considered computational intelligence techniques is presented in Fig. 1.

RQ4: Which computational intelligence technique is most used for path planning?
Answer: As per the considered research contributions, particle swarm optimisation is the most used by researchers.

RQ5: Which types of obstacles are mostly handled by researchers in their work?
Answer: Static obstacles, with 66.67% (illustrated in Fig. 2b).

RQ6: Which type of system is mostly considered by researchers for path planning?
Answer: From Table 1, it can be observed that the mobile robot system is the most considered by researchers.

based on Table 1 and Figs. 1 and 2a, b. These answers to the research questions are illustrated in Table 2.


4 Conclusion and Future Scope

In this paper, an analysis of computational intelligence techniques for path planning from 2011 to 2018 has been presented. The research contributions of the current decade were selected based on their quality and relevance to path planning using computational intelligence techniques. The analysis is illustrated from different aspects, such as year-wise publication distribution, categories of computational intelligence, distribution of obstacle types, and distribution of computational intelligence methods used for path planning. From these contributions, it can be seen that researchers are increasingly focusing on swarm intelligence techniques for path planning problems. Moreover, it is also observed that researchers mostly consider static obstacle types during their experimentation. As a future direction, there is a need to develop systems that can handle both static and dynamic obstacles. Moreover, existing systems can be further improved by combining swarm intelligence methods such as cuckoo search and ant colony optimisation with other computational techniques such as fuzzy logic and neural networks.

References

1. Bhattacharya, P., Gavrilova, M.: Roadmap-Based Path Planning - Using the Voronoi Diagram for a Clearance-Based Shortest Path. IEEE Robotics & Automation Magazine. 15, 58–66 (2008).
2. Zhu, Q., Hu, J., Cai, W., Henschen, L.: A new robot navigation algorithm for dynamic unknown environments based on dynamic path re-computation and an improved scout ant algorithm. Applied Soft Computing. 11, 4667–4676 (2011).
3. Englot, B., Hover, F.: Multi-goal feasible path planning using ant colony optimization. 2011 IEEE International Conference on Robotics and Automation (ICRA). pp. 2255–2260. IEEE, Shanghai (2011).
4. Huang, H., Tsai, C.: Global path planning for autonomous robot navigation using hybrid metaheuristic GA-PSO algorithm. 2011 Proceedings of SICE Annual Conference (SICE). pp. 1338–1343. IEEE, Tokyo (2011).
5. Wang, G., Guo, L., Duan, H., Wang, H., Liu, L., Shao, M.: A Hybrid Metaheuristic DE/CS Algorithm for UCAV Three-Dimension Path Planning. The Scientific World Journal. 2012, 1–11 (2012).
6. Sood, M., Kaplesh, D.: Cross-Country path finding using hybrid approach of PSO and BCO. International Journal of Applied Information Systems. 2, 22–24 (2012).
7. Sood, M., Kaur, M.: Shortest Path Finding in country using Hybrid approach of BBO and BCO. International Journal of Computer Applications. 40, 9–13 (2012).
8. Liu, C., Gao, Z., Zhao, W.: A new path planning method based on firefly algorithm. 2012 Fifth International Joint Conference on Computational Sciences and Optimization (CSO). pp. 775–778. IEEE (2012).
9. Wang, G., Guo, L., Duan, H., Liu, L., Wang, H.: A modified firefly algorithm for UCAV path planning. International Journal of Hybrid Information Technology. 5, 123–144 (2012).
10. Wang, G., Guo, L., Duan, H., Liu, L., Wang, H.: A Bat Algorithm with Mutation for UCAV Path Planning. The Scientific World Journal. 2012, 1–15 (2012).
11. Alajlan, M., Koubaa, A., Chaari, I., Bennaceur, H., Ammar, A.: Global path planning for mobile robots in large-scale grid environments using genetic algorithms. 2013 International Conference on Individual and Collective Behaviors in Robotics (ICBR). pp. 1–8. IEEE (2013).
12. Chen, X., Kong, Y., Fang, X., Wu, Q.: A fast two-stage ACO algorithm for robotic path planning. Neural Computing and Applications. 22, 313–319 (2013).
13. Zhang, Y., Gong, D., Zhang, J.: Robot path planning in uncertain environment using multi-objective particle swarm optimization. Neurocomputing. 103, 172–185 (2013).
14. Mohanty, P., Parhi, D.: Cuckoo search algorithm for the mobile robot navigation. International Conference on Swarm, Evolutionary, and Memetic Computing. pp. 527–536. Springer (2013).
15. Mohanty, P., Parhi, D.: Optimal path planning for a mobile robot using cuckoo search algorithm. Journal of Experimental & Theoretical Artificial Intelligence. 28, 35–52 (2014).
16. Châari, I., Koubâa, A., Bennaceur, H., Ammar, A., Trigui, S., Tounsi, M., Shakshuki, E., Youssef, H.: On the Adequacy of Tabu Search for Global Robot Path Planning Problem in Grid Environments. Procedia Computer Science. 32, 604–613 (2014).
17. Ju, M., Wang, S., Guo, J.: Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding. The Scientific World Journal. 2014, 1–8 (2014).
18. Duan, H., Huang, L.: Imperialist competitive algorithm optimized artificial neural networks for UCAV global path planning. Neurocomputing. 125, 166–171 (2014).
19. Montiel, O., Orozco-Rosas, U., Sepúlveda, R.: Path planning for mobile robots using Bacterial Potential Field for avoiding static and dynamic obstacles. Expert Systems with Applications. 42, 5177–5191 (2015).
20. Behnck, L., Doering, D., Pereira, C., Rettberg, A.: A Modified Simulated Annealing Algorithm for SUAVs Path Planning. IFAC-PapersOnLine. 48, 63–68 (2015).
21. Contreras-Cruz, M., Ayala-Ramirez, V., Hernandez-Belmonte, U.: Mobile robot path planning using artificial bee colony and evolutionary programming. Applied Soft Computing. 30, 319–328 (2015).
22. Panda, M., Priyadarshini, R., Pradhan, S.: Autonomous mobile robot path planning using hybridization of particle swarm optimization and Tabu search. 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). pp. 1–7. IEEE (2016).
23. Lee, J., Kim, D.: An effective initialization method for genetic algorithm-based robot path planning using a directed acyclic graph. Information Sciences. 332, 1–18 (2016).
24. Lu, Y., Yi, S., Liu, Y., Ji, Y.: A novel path planning method for biomimetic robot based on deep learning. Assembly Automation. 36, 186–191 (2016).
25. Das, P., Behera, H., Panigrahi, B.: A hybridization of an improved particle swarm optimization and gravitational search algorithm for multi-robot path planning. Swarm and Evolutionary Computation. 28, 14–28 (2016).
26. Zeng, N., Zhang, H., Chen, Y., Chen, B., Liu, Y.: Path planning for intelligent robot based on switching local evolutionary PSO algorithm. Assembly Automation. 36, 120–126 (2016).
27. Liu, Y., Zhang, X., Guan, X., Delahaye, D.: Potential Odor Intensity Grid Based UAV Path Planning Algorithm with Particle Swarm Optimization Approach. Mathematical Problems in Engineering. 2016, 1–16 (2016).
28. Das, P., Behera, H., Das, S., Tripathy, H., Panigrahi, B., Pradhan, S.: A hybrid improved PSO-DV algorithm for multi-robot path planning in a clutter environment. Neurocomputing. 207, 735–753 (2016).
29. Ayari, A., Bouamama, S.: A new multiple robot path planning algorithm: dynamic distributed particle swarm optimization. Robotics and Biomimetics. 4, 1–15 (2017).
30. Lin, D., Shen, B., Liu, Y., Alsaadi, F., Alsaedi, A.: Genetic algorithm-based compliant robot path planning: an improved Bi-RRT-based initialization method. Assembly Automation. 37, 261–270 (2017).
31. Ni, J., Wu, L., Shi, P., Yang, S.: A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles. Computational Intelligence and Neuroscience. 2017, 1–16 (2017).


32. Chen, J., Jiang, W., Zhao, P., Hu, J.: A path planning method of anti-jamming ability improvement for autonomous vehicle navigating in off-road environments. Industrial Robot: An International Journal. 44, 406–415 (2017).
33. Mac, T., Copot, C., Tran, D., Keyser, R.: A hierarchical global path planning approach for mobile robots based on multi-objective particle swarm optimization. Applied Soft Computing. 59, 68–76 (2017).
34. Zhang, W., Gong, X., Han, G., Zhao, Y.: An Improved Ant Colony Algorithm for Path Planning in One Scenic Area With Many Spots. IEEE Access. 5, 13260–13269 (2017).
35. Zhang, B., Duan, H.: Three-Dimensional Path Planning for Uninhabited Combat Aerial Vehicle Based on Predator-Prey Pigeon-Inspired Optimization in Dynamic Environment. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 14, 97–107 (2017).
36. Ghosh, S., Panigrahi, P., Parhi, D.: Analysis of FPA and BA meta-heuristic controllers for optimal path planning of mobile robot in cluttered environment. IET Science, Measurement & Technology. 11, 817–828 (2017).
37. Zhang, J., Zhang, Y., Zhou, Y.: Path Planning of Mobile Robot Based on Hybrid Multi-Objective Bare Bones Particle Swarm Optimization With Differential Evolution. IEEE Access. 6, 44542–44555 (2018).
38. Bayat, F., Najafinia, S., Aliyari, M.: Mobile robots path planning: Electrostatic potential field approach. Expert Systems with Applications. 100, 68–78 (2018).
39. Bibiks, K., Hu, Y., Li, J., Pillai, P., Smith, A.: Improved discrete cuckoo search for the resource-constrained project scheduling problem. Applied Soft Computing. 69, 493–503 (2018).
40. Patle, B., Pandey, A., Jagadeesh, A., Parhi, D.: Path planning in uncertain environment by using firefly algorithm. Defence Technology. 1–11 (2018).
41. Goel, U., Varshney, S., Jain, A., Maheshwari, S., Shukla, A.: Three Dimensional Path Planning for UAVs in Dynamic Environment using Glow-worm Swarm Optimization. Procedia Computer Science. 133, 230–239 (2018).

Techniques for Analysis of the Effectiveness of Yoga Through EEG Signals: A Review

Rahul Goyat, Anil Khatak, and Seema Sindhu

1 Introduction

People have been practising yoga since time immemorial. Yoga asanas are widely considered a panacea for a slew of ailments related to mind and body. A yoga instructor may commend yoga for better functioning of the organs and better control over the psychological and physiological functioning of the brain and body, respectively. When yoga practices are performed properly, they can bring results no less than a miracle. Yoga improves the mental as well as physical fitness of a person and helps recovery from psychological disorders [1]. Pranayama and yoga: Pranayama (Anulom Vilom, Kriya Yoga) is a family of breathing techniques that helps in stabilising strength and wakefulness. It is a phraseology with many definitions; in these yoga techniques, air is inhaled, retained, and then exhaled. The relationship developed between breathing and wakefulness is the function of pranayama [2]. The human brain requires energy to work, and Superbrain yoga is one of the methods that provides this energy; it also activates the mind and, through internal alchemy, awakens the energy in the subject's brain. Superbrain yoga involves an acupressure technique and a breathing process, both used to balance the left and right parts of the brain and to energise the brain. It can be performed anywhere, takes very little time, and is an easy process. Electroencephalography allows us to measure the brain activity involved in various types of cognitive function. The experimental goal of this work is to interpret and characterise EEG activity during Pranayama (Anulom Vilom, Kriya Yoga)

R. Goyat · A. Khatak () · S. Sindhu Department of BME, GJUS&T, Hisar, Haryana, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_53

547

548

R. Goyat et al.

and Superbrain yoga. This paper intends to highlight the importance and benefits of pranayama, yoga, and Superbrain yoga on the subject's mind during normal and mentally disordered conditions [3]. These are yoga techniques through which the brain is recharged, peace is induced, and improvement in mental disorders is observed, due to which brain activity changes. The electroencephalograph has been used for understanding the function of the brain since 1924. The electroencephalography technique is used for mapping the mind waves that change during and after Pranayama and Superbrain yoga, respectively. The acquired electroencephalography signals are then analysed and the various results discussed [4].

Electroencephalography (EEG): An electroencephalogram is a biomedical device used for the detection and amplification of the electrical signals from the mind of a living being by placing electrodes over the scalp. Electroencephalographs can identify changes over even minute time frames [4]. The mind produces different waves; the classification of the mental states and their frequencies is shown in Table 1 below. The alpha signal physiologically represents healthy and relaxed conditions. The alpha band has two sub-bands: low alpha (8–10 Hz), related to the integration between body and mind and to self-awareness, and high alpha (10–12 Hz), related to healing and the mind–body connection. Physiologically, the beta signal is related to an active, agitated body and brain. Task processing is associated with the gamma signal [5, 6]. Physiologically, the delta signal is related to the resting position and low-strength arousal. The theta signal physiologically correlates with healing and the integration of mind and body [7, 8]. The objective of this review is to study and understand the various techniques employed for analysing the effectiveness of yoga through electroencephalography (EEG) signals. The remainder of the article presents (1) the related works with brief explanations, (2) the various techniques and their comparison, and (3) the conclusion, followed by the references.

Table 1 Classification of mind waves at different psychological conditions

Wave | Frequency (Hz) | Mind state
Delta (δ) | 0.1–4 | Deep and dreamless sleep, trance and insensate
Theta (θ) | 4–8 | Visionary, dreamlike, drowsy and knowing, intuitive
Alpha (α) | 8–12 | Non-agitated, relaxed, conscious state of mind
Beta (β) | 12–30 | Focused, integrated, agitation, awareness, alertness
Gamma (γ) | ≥30 | Integrated thought, thinking
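Analyses of the kind reviewed below usually reduce an EEG trace to per-band power using the bands in Table 1: estimate the power spectral density and integrate it over each band. A small illustrative sketch (the synthetic 10 Hz "alpha-dominant" signal, the window choices, and the upper gamma edge are our assumptions, not from any cited study):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}  # upper gamma edge is arbitrary here

def band_powers(x, fs):
    """Absolute power in each EEG band, via Welch's PSD estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)  # 2-s windows -> 0.5 Hz resolution
    df = freqs[1] - freqs[0]
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = psd[mask].sum() * df          # rectangle-rule band power
    return out

# Synthetic "relaxed" signal: dominant 10 Hz alpha rhythm plus noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
powers = band_powers(x, fs)
dominant = max(powers, key=powers.get)
```

On this synthetic signal the alpha band should dominate, mirroring the "relaxed, conscious" state the table associates with 8–12 Hz activity.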


2 Related Work

An extensive study reviewing the most prevalent yoga, pranayama, and meditation techniques that use electroencephalography for obtaining signals was done by considering recent articles from reputed journals. A few of them are briefly elaborated as follows:

Pradhan et al.: The dimensional complexities involved in electroencephalography are studied in this work. Yogic meditation is employed for evaluating the EEG signals obtained from two subjects. The running fractal dimension and running attractor dimension are utilised for analysing four channels as a compressed spectral array (CSA) during meditation. Spectacular features are revealed by analysing the signals through the CSA, and low fractal dimension values are obtained from the running fractal plots during the pre- and post-meditation periods [9].

Fu et al.: Loop control theory is based on the basic relation that the output follows the input. A discussion of loop control theory between the mind and the environment is proposed in this article. The quality of mind waves is affected by the interrelationship between the mind's input and output signals. The objective of remotely controlling the mind waves is to enhance their quality and also enhance mind power [10].

Patil et al.: Anulom Vilom is a pranayama technique that helps in stabilising strength and wakefulness. A person performing Anulom Vilom is subjected to an electroencephalogram for the recording of brain signals, and the recorded signal is subjected to a wavelet transformation. Different signal characteristics are then obtained and used to develop a model; the developed model and software are used for checking different parameters of the human body [11].

Hosseini et al.: Electroencephalograph signals of epileptic seizures are recognised using a chaos-ANFIS process. EEG quantification is done with two nonlinear measures, the Hurst exponent (H) and the Lyapunov exponent (λ). The electroencephalography examination is done in two stages, a qualitative and a quantitative examination. An ANFIS classifier is used to evaluate the discriminative potential of the Hurst and Lyapunov measures. The developed technique achieves an accuracy above 97% in the inter-ictal case and approximately 97% in the ictal case when tested with the ANFIS classifier using four-fold cross-validation [12].

Meshram et al.: Pranayama techniques are commended for the better functioning of organs, mainly for better control over the psychological and physiological functioning of the brain and body, respectively. The advantage of Sudarshan Kriya over other yoga techniques is that it is a less time-consuming process and it also enhances brain ability. The advantages of Sudarshan Kriya yoga and its effects on the mind are analysed using electroencephalographic signals [13].
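As a rough illustration of the nonlinear quantification mentioned for [12], the Hurst exponent can be estimated with textbook rescaled-range (R/S) analysis; this is a generic sketch, not the cited study's implementation, and the window sizes are our choice:

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis.

    For each window size n, the series is split into blocks; in each block the
    range of the mean-adjusted cumulative sum is divided by the block's standard
    deviation. H is the slope of log(R/S) against log(n).
    """
    x = np.asarray(x, dtype=float)
    rs_means = []
    for n in window_sizes:
        blocks = x[: (len(x) // n) * n].reshape(-1, n)
        dev = blocks - blocks.mean(axis=1, keepdims=True)
        z = np.cumsum(dev, axis=1)             # cumulative deviation per block
        r = z.max(axis=1) - z.min(axis=1)      # range of the cumulative deviation
        s = blocks.std(axis=1)
        valid = s > 0
        rs_means.append(np.mean(r[valid] / s[valid]))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.standard_normal(4096))  # white noise: H near 0.5
```

For uncorrelated noise the estimate should sit near 0.5, while persistent (trending) signals push H towards 1; EEG studies use such deviations as a nonlinearity feature.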


Ramachandran et al.: This technique is used to interpret and characterise EEG activity during pranayama breathing in its temporal and spatial context, acquiring two-channel data using MyDAQ and LabVIEW. The variation in the EEG wave pattern was explored during different stages of pranayama, as well as the variation of the alpha wave level in the left and right frontal, temporal, parietal, and occipital regions of the brain. A statistical significance test is performed for different cycles of pranayama to measure the significant change in alpha power with respect to baseline measures. An effort was made to analyse the difference in cerebral electrical activity between long-term and short-term meditation practitioners. Data were recorded for ten subjects during three cycles of pranayama, each cycle lasting two minutes. To measure the effects towards the end of pranayama, the last 20 s of EEG data were analysed in each cycle [14].

Vijayalakshmi et al.: Much research has been done recently on meditative yoga techniques, and the results prove that meditation is beneficial for psychological disorders and also benefits humans physiologically. Brain signals are obtained with the help of electroencephalography and analysed with different methodologies. Spectral features of the electroencephalogram throughout meditation have been traced, and a quantitative analysis method is used to find the changes in the EEG signals throughout meditation. The analysis found that theta wave energy increased, indicating that the subject's brain was in a deeply relaxed state. After the period of meditation, almost all subjects showed a rise in alpha (α) and beta (β) waveforms. Finally, the conclusion is that meditation helps human beings live a stress-free life, free from psychological and physiological disorders [15].

Shaw et al.: Electroencephalographic signals are obtained during Kriya Yoga, a kind of meditation technique, and in the normal condition; the statistical characteristics of the electroencephalographic signals are then studied. Practising meditation changes the concentration level in the human brain, and the brain signals of meditating and non-meditating subjects are carefully measured with the help of electroencephalography. The overlapped, segmented data received from the electroencephalograph is divided, and different variables are calculated for every segment. Not all the acquired data is used; instead, only higher-order statistical information is used for the analysis [16].

Sreenivasa et al.: The human brain requires energy to work, and Super Brain Yoga (SBY) is one of the methods that provides energy; it also activates the mind and, through internal alchemy, awakens the energy in the subject's brain. SBY involves an acupressure technique and a breathing process, both used to balance the left and right parts of the brain and to energise the brain. It can be performed anywhere, takes very little time, and is an easy process [17].
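The segment-wise statistical analysis described for [16] can be sketched as follows: split the trace into overlapping windows and compute higher-order statistics (skewness, kurtosis) per window. Window length and overlap here are illustrative choices, not those of the cited study:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def segment_stats(x, fs, win_s=2.0, overlap=0.5):
    """Skewness and kurtosis for each overlapping segment of an EEG trace."""
    win = int(win_s * fs)
    step = max(1, int(win * (1 - overlap)))
    segments = [x[i:i + win] for i in range(0, len(x) - win + 1, step)]
    return np.array([[skew(s), kurtosis(s)] for s in segments])

fs = 128
rng = np.random.default_rng(0)
x = rng.standard_normal(10 * fs)   # stand-in for a filtered EEG trace
feats = segment_stats(x, fs)       # shape: (n_segments, 2)
```

Each row is a per-segment feature vector; comparing these distributions between meditating and non-meditating recordings is the kind of analysis the study describes.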

Techniques for Analysis of the Effectiveness of Yoga Through EEG Signals: A Review


Tiwari et al.: The electroencephalograph is used for mapping mind signals developed during psychological disorders. The subjects' mind signals are analyzed in their different moods, and the analyzed data varies from subject to subject. In this paper, electroencephalography is used to analyze the change in mind waves during mood changes and to determine whether yoga prevents mood switches. The main target of this technique is the analysis of the frequencies developed during a relaxed mental state [18].
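Several of the studies summarized above quantify band power (for instance, alpha power relative to a baseline) from short EEG segments. A minimal numerical sketch of that step, using Welch's PSD estimate on synthetic data; the sampling rate, duration, band edges and signal model below are illustrative assumptions, not parameters from any of the cited studies:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Estimate the power of signal x in the [f_lo, f_hi] Hz band via Welch's PSD."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])  # approximate band integral

# Synthetic 20 s "recording" at 256 Hz: a 10 Hz (alpha-range) oscillation plus noise
np.random.seed(0)
fs = 256
t = np.arange(0, 20, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(len(t))

alpha = band_power(eeg, fs, 8, 13)    # alpha band power
beta = band_power(eeg, fs, 13, 30)    # beta band power
print(alpha > beta)                   # the 10 Hz component dominates
```

On real recordings the same comparison would be made per cycle against a baseline segment, followed by a significance test as in [14].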

3 Comparison

Table 2 compares the existing yoga analysis methods in detail. Initially, research in this area was at an elementary level, evaluating the effectiveness of yoga and meditation by analysing EEG signals from only a few subjects. Later, various advanced electroencephalographic techniques

Table 2 Comparison

Author and year              Comparison and techniques used
Pradhan et al. (1995)        Yogic meditation is employed for evaluating the EEG signal obtained from two subjects, and the dimensional complexity of the electroencephalogram is studied [9].
Fu et al. (2009)             Loop control theory between the mind and the environment is discussed, along with the quality of mind waves affected by the interrelationship between the mind's input and output signals [10].
Patil et al. (2011)          A person performing anulom-vilom is subjected to electroencephalography for the recording of brain signals; the recorded signal is then subjected to wavelet transformation [11].
Hosseini et al. (2013)       Electroencephalograph signals for epileptic seizures are recognized using the chaos-ANFIS process [12].
Meshram et al. (2014)        An electroencephalogram is used to obtain mind signals during Sudarshan-Kriya yoga [13].
Ramachandran et al. (2014)   The experimental goal of this work is to interpret and characterize the EEG activity during pranayama [14].
Vijayalakshmi et al. (2015)  Spectral features of electroencephalography are traced during meditation, and quantitative analysis is then performed on the signal [15].
Shaw et al. (2016)           The statistical characteristics of the electroencephalographic signal are studied during Kriya Yoga and in normal brain conditions [16].
Sreenivasa et al. (2017)     The significance and advantages of Super Brain Yoga are highlighted in this study [17].
Tiwari et al. (2017)         Electroencephalographic signals are analyzed during stress and depression, and methods to control these mental disorders are discussed [18].


R. Goyat et al.

are utilised for different mind conditions such as mental disorders, stress, anxiety and depression, by collecting huge databases from a large number of subjects.

4 Conclusion

Different yoga techniques have positive effects on mind wave activity: yoga stimulates brain functioning as the alpha, beta and theta mind waves are activated, which has been associated with improvements in brain functioning and memory, control of mood swings, and relief from anxiety, stress and depression-related brain disorders. In this review, brain waves from different subjects, analyzed using the electroencephalograph, are surveyed to encompass the impact of meditation, pranayama and yoga on the brain, using a variety of signal-processing methodologies such as classification methods and frequency bands. Wavelet analysis is used to decompose the signals into sub-bands with statistical approaches and to extract features. This review can be utilized by emerging researchers to develop advanced techniques for examining mind activities, so that a subject's mental-health progress can be monitored at different stages of mental illness when subjected to different yoga techniques, meditation and pranayama, during pre- and post-yoga sessions, through the electroencephalography signal.

References

1. Desai, R., Tailor, A., Bhatt, T.: Effects of yoga on brain waves and structural activation: A review. Complement Ther Clin Pract. 2015, 1–7. doi:https://doi.org/10.1016/j.ctcp.2015.02.002.
2. Ahani, A., Wahbeh, H., Nezamfar, H., Miller, M., Erdogmus, D., Oken, B.: Quantitative change of EEG and respiration signals during mindfulness meditation. J Neuroeng Rehabil. 2014, 11(1), 1–11. doi:https://doi.org/10.1186/1743-0003-11-87.
3. Jois, S.N., D'Souza, L., Moulya, R.: Beneficial effects of superbrain yoga on short-term memory and selective attention of students. Indian J Tradit Knowl. 2017, 16(June), S35–S39.
4. Korde, S., Paikrao, L.: Analysis of EEG signals and biomedical changes due to meditation on brain: A review. Int Res J of Eng and Tech. 2018.
5. Ajay, Khatak, A., Gupta, A.: Gesture recognition techniques: A comparison on the accuracy & complexities. IEEE International Conference on Intelligent Computing and Sustainable System, ICICSS 2018.
6. Adhalli, M., Umadevi, H., Guruprasad, S., Hegde, R.: Design and simulation of EEG signals analysis: A case study. IJESC. 2016.
7. Anju, Khatak, A.: Analysis of the various eyes images using colour segmentation techniques and their noise effects. Journal of Image Processing & Pattern Recognition Progress. 2017; 4:1.
8. Yadav, A., Khatak, A., Sindhu, S.: A comparative analysis of different image restoration techniques. IEEE International Conference on Intelligent Computing and Sustainable System, ICICSS. 2018.


9. Pradhan, N., Dutt, D.N.: An analysis of dimensional complexity of brain electrical activity during meditation. Proc First Reg Conf IEEE Eng Med Biol Soc, 14th Conf Biomed Eng Soc India. 1995: 92–93. doi:https://doi.org/10.1109/RCEMBS.1995.511692.
10. Fu, H-L.: Under different conditions of learning memory in the electroencephalograph (EEG) analysis and discussion. 2009: 352–355.
11. Engineering, R.S., Anulom, A.: Electroencephalograph signal analysis during Anulom Vilom.
12. Hosseini, S.A., Akbarzadeh, T. M-R., Naghibi-Sistani, M.B.: Qualitative and quantitative evaluation of EEG signals in epileptic seizure recognition. Int J Intell Syst Appl. 2013; 5(6): 41–46. doi:https://doi.org/10.5815/ijisa.2013.06.05.
13. Meshram, Y., Fulpatil, P.: Review paper on electroencephalographic evaluation of Sudarshan Kriya. 2014; 3(7): 2012–2014.
14. Vijayalakshmi, K.: Independent component analysis of EEG signals and real-time data acquisition using MyDAQ and LabVIEW. 2014; 1(9): 65–74.
15. Kochupillai, V.: Quantitative analysis of EEG signal before and after Sudharshana Kriya Yoga. Int J Public Ment Heal Neurosci. 2015; 2(2): 2394–4668.
16. Shaw, L., Routray, A.: Statistical features extraction for multivariate pattern analysis in meditation EEG using PCA. 2016 IEEE EMBS Int Student Conf. 2016: 1–4. doi:https://doi.org/10.1109/EMBSISC.2016.7508624.
17. Sreenivasa, T., et al.: Super Brain Yoga. 2017; 11(18): 324–325.
18. Tiwari, A., Tiwari, R.: Monitoring and detection of EEG signals before and after yoga during depression in human brain using MATLAB. 2017 (ICCMC): 329–334. doi:https://doi.org/10.1109/ICCMC.2017.8282702.

Multiobjective Integrated Stochastic and Deterministic Search Method for Economic Emission Dispatch Problem Namarta Chopra, Yadwinder Singh Brar, and Jaspreet Singh Dhillon

1 Introduction

The nature-inspired evolutionary methods studied to date prove that nature is a robust optimizer. By following various nature-inspired optimization methods, globally optimal solutions have been achieved for different engineering applications. Evolutionary algorithms are population-based algorithms inspired by biological evolution and have metaheuristic or stochastic characteristics [1]. They start with a random set of solutions that is updated in each iteration, in which the weak solutions are eliminated and some random changes are added to the remaining solutions. The whole optimization process includes stages such as initialization, selection, crossover, mutation and finally termination [2]. Stochastic methods thus model systems that change randomly over time and are used in engineering, science, physics, image processing, signal processing, technology and computer science. The various methods that come under the nature-inspired stochastic approach are ant colony optimization, genetic algorithms, particle swarm optimization, etc. On the other hand, deterministic methods are mostly based on a heuristic approach which uses starting and stopping rules derived from experimentation. In case the stopping criterion is not met, the parameter settings are changed within the iterations. Deterministic methods include steepest ascent, pattern search, tabu search, Hooke and Jeeves type, and simplex-based methods, which are most commonly used for local search [3]. This paper integrates a stochastic approach, used for the base-level search, with a deterministic approach for the local-level search, which helps improve the results obtained from the stochastic method alone. Particle

N. Chopra () · Y. S. Brar I.K.G. Punjab Technical University, Kapurthala, India J. S. Dhillon Sant Longowal Institute of Engineering and Technology, Longowal, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_54


swarm optimization (PSO) is integrated with the simplex approach, and its validity is checked on benchmark functions. The proposed algorithm is then also tested on the multiobjective constrained economic emission dispatch (MCEED) problem in thermal power plants, to establish its practical applicability to complex engineering problems. Due to strict government regulations on increasing environmental pollution, the conventional economic dispatch problem has nowadays been converted into the multiobjective, conflicting and non-commensurable economic emission dispatch problem. It involves minimizing the fuel cost and the pollutant emissions simultaneously, while satisfying certain equality and inequality constraints. Since this multiobjective problem is of a conflicting and non-commensurable type, it requires an efficient decision-making method alongside the optimization approach [4]. The price penalty factor method is used to convert this conflicting multiobjective problem into a single-objective problem.

2 Multiobjective Constrained Economic Emission Dispatch Problem

This section consists of the mathematical formulation of the practical multiobjective problem considered for the validation of the proposed algorithm.

2.1 Economic Dispatch Objective

It solves the minimization of the overall fuel cost in thermal power plants, which is represented by the piecewise quadratic equation given in Eq. (1), where P_i is the power generated, a_i, b_i and c_i are the fuel cost coefficients of the i-th unit, and u is the number of power units in the plant.

    Minimize  FC(P_i) = \sum_{i=1}^{u} (a_i P_i^2 + b_i P_i + c_i)   $/h        (1)
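As a quick numerical illustration of Eq. (1), the cost function for a hypothetical three-unit plant; the coefficients below are made up for the sketch, not taken from the paper's test systems:

```python
def fuel_cost(P, a, b, c):
    """Total fuel cost sum_i (a_i P_i^2 + b_i P_i + c_i) in $/h, per Eq. (1)."""
    return sum(ai * Pi**2 + bi * Pi + ci for Pi, ai, bi, ci in zip(P, a, b, c))

a = [0.008, 0.009, 0.007]   # $/MW^2 h
b = [7.0, 6.3, 6.8]         # $/MWh
c = [200.0, 180.0, 140.0]   # $/h
print(round(fuel_cost([150.0, 200.0, 120.0], a, b, c), 2))  # -> 4286.8
```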

2.2 Emission Objective

It solves the minimization of emissions of harmful pollutants such as oxides of carbon, sulphur and nitrogen from thermal power plants. Mathematically, it is also represented by a quadratic equation, given in Eq. (2), and the emission due to each unit depends on the output of that unit only. Here, E_T is the total emission, and α_i, β_i and γ_i are the emission coefficients of the i-th unit.

    Minimize  E_T(P_i) = \sum_{i=1}^{u} (α_i P_i^2 + β_i P_i + γ_i)   Kg/h        (2)

2.3 Constraints Handling

The economic and emission objectives above are subject to certain equality and inequality constraints. The equality constraint requires that the total power generated equal the load demand (L_D) plus the transmission power losses (T_PL); B_{ij}, B_{0i} and B_{00} in Eq. (4) are the transmission loss coefficients. Mathematically,

    \sum_{i=1}^{u} P_i = L_D + T_{PL}        (3)

where

    T_{PL} = \sum_{i=1}^{u} \sum_{j=1}^{u} P_i B_{ij} P_j + \sum_{i=1}^{u} B_{0i} P_i + B_{00}        (4)

In the inequality constraint, the power generated by each individual unit must lie between the minimum P_i^{min} and maximum P_i^{max} limits applied to it, i.e.,

    P_i^{min} ≤ P_i ≤ P_i^{max}        (5)
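The loss expression of Eq. (4) and the limit handling of Eq. (5) can be sketched as follows; the B-coefficients, limits and demand are illustrative two-unit values, not data from the paper's case studies:

```python
import numpy as np

def transmission_loss(P, B, B0, B00):
    """T_PL of Eq. (4): P^T B P + B0 . P + B00."""
    P = np.asarray(P, dtype=float)
    return float(P @ B @ P + np.dot(B0, P) + B00)

def clamp_limits(P, Pmin, Pmax):
    """Enforce the inequality constraint of Eq. (5)."""
    return np.clip(P, Pmin, Pmax)

B = np.array([[2e-4, 1e-5], [1e-5, 3e-4]])   # made-up loss coefficients
B0 = np.array([1e-3, 2e-3])
B00 = 0.05
LD = 350.0                                   # made-up load demand (MW)

P = clamp_limits([120.0, 260.0], [50.0, 50.0], [200.0, 250.0])  # 260 -> 250
TPL = transmission_loss(P, B, B0, B00)
residual = P.sum() - (LD + TPL)   # Eq. (3) residual; ~0 for a feasible dispatch
print(P, round(TPL, 3), round(residual, 3))
```

Here the residual is nonzero, signalling that this particular dispatch does not yet balance Eq. (3).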

2.4 Multiobjective Constrained Optimization Problem

The multiobjective optimization problem minimizes the conflicting objectives, i.e., the overall fuel cost and the emissions, simultaneously [5]. Price penalty factors (f) are used to convert the multiobjective problem into a single-objective one and also serve as the final decision maker, as given in Eq. (7); the minimum value from Eq. (6), with the factors of Eq. (8), decides the optimum solution. Thus, the multiobjective constrained economic emission dispatch (MCEED) problem is formulated as

    MCEED = min {O_{T1}, O_{T2}, O_{T3}, O_{T4}}        (6)

    O_{Tj} = FC + f_j E_T ;   j = 1, 2, 3, 4        (7)

where

    f_1 = FC(P_i^{min}) / E_T(P_i^{max});  f_2 = FC(P_i^{min}) / E_T(P_i^{min});  f_3 = FC(P_i^{max}) / E_T(P_i^{max});  f_4 = FC(P_i^{max}) / E_T(P_i^{min})        (8)
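A small numerical sketch of Eqs. (6)–(8), assuming the min-max / min-min / max-max / max-min pattern for the four factors and illustrative two-unit coefficients (not the paper's test-system data):

```python
def quad(P, coef):
    """Evaluate sum_i (k2_i P_i^2 + k1_i P_i + k0_i) for per-unit coefficients."""
    return sum(k2 * p**2 + k1 * p + k0 for p, (k2, k1, k0) in zip(P, coef))

fc_coef = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0)]  # a_i, b_i, c_i
em_coef = [(0.004, 0.3, 13.0), (0.005, 0.5, 12.0)]    # alpha_i, beta_i, gamma_i
Pmin, Pmax = [50.0, 50.0], [200.0, 250.0]

f = [quad(Pmin, fc_coef) / quad(Pmax, em_coef),   # f1: min-max
     quad(Pmin, fc_coef) / quad(Pmin, em_coef),   # f2: min-min
     quad(Pmax, fc_coef) / quad(Pmax, em_coef),   # f3: max-max
     quad(Pmax, fc_coef) / quad(Pmin, em_coef)]   # f4: max-min

P = [150.0, 200.0]                                # some candidate dispatch
OT = [quad(P, fc_coef) + fj * quad(P, em_coef) for fj in f]   # Eq. (7)
mceed = min(OT)                                   # Eq. (6)
print(round(mceed, 2))                            # -> 3962.97
```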

3 Integrated PSO with Simplex Method

3.1 Conventional Stochastic Particle Swarm Optimization (PSO)

Particle swarm optimization is a nature-inspired stochastic search method modelled on the physical behavior of bird swarms. It was originated by Eberhart and Kennedy in 1995 and starts with a random set of variables, one per particle in the swarm [6–8]. The improvisation is performed by moving the particles around the search space by means of a set of simple mathematical expressions which model some inter-particle communication. During its flight, each particle adjusts its position according to its own experience and the experience of neighboring particles, making use of the best positions encountered by itself and by its neighbours. The whole process is quite simple and depends upon two main parameters, i.e., best_p (the particle's best position in the swarm during each iteration) and best_g (the overall best position over all iterations). The movement of the swarm particles depends on the velocity and position as

    v_{pi}(k+1) = w v_{pi}(k) + c_1 r_1(k) [best_{p,pi}(k) − P_{pi}(k)] + c_2 r_2(k) [best_{g,i}(k) − P_{pi}(k)]        (9)

    P_{pi}(k+1) = P_{pi}(k) + v_{pi}(k+1)        (10)

    w = w_{max} − ((w_{max} − w_{min}) / itr_{max}) itr;   w_{min} = 0.4, w_{max} = 0.9        (11)

c_1, c_2 are acceleration constants, usually both equal to 2; r_1, r_2 are random numbers uniformly distributed between 0 and 1; and itr, itr_{max} are the iteration count and the maximum number of iterations.
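A compact, self-contained sketch of the updates of Eqs. (9)–(11) on a toy 2-D sphere function; the swarm size, iteration count and bounds are arbitrary demo choices, not the paper's settings:

```python
import random

def pso(obj, lo, hi, n_particles=20, iters=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4):
    """Minimal PSO: velocity/position updates of Eqs. (9)-(10),
    linearly decaying inertia weight of Eq. (11)."""
    dim = len(lo)
    pos = [[random.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_p = [p[:] for p in pos]                      # per-particle best positions
    best_p_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: best_p_val[i])
    best_g, best_g_val = best_p[g][:], best_p_val[g]  # swarm-wide best
    for itr in range(iters):
        w = w_max - (w_max - w_min) * itr / iters     # Eq. (11)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best_p[i][d] - pos[i][d])
                             + c2 * r2 * (best_g[d] - pos[i][d]))          # Eq. (9)
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])  # Eq. (10)
            val = obj(pos[i])
            if val < best_p_val[i]:
                best_p[i], best_p_val[i] = pos[i][:], val
                if val < best_g_val:
                    best_g, best_g_val = pos[i][:], val
    return best_g, best_g_val

random.seed(0)
sol, val = pso(lambda x: sum(xi**2 for xi in x), [-5.0, -5.0], [5.0, 5.0])
print(val)   # should be very close to 0
```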


3.2 Conventional Deterministic Simplex Method (DSM)

In deterministic methods, the information needed to solve the problem is generally available at the start, and for a fixed input they always give the same output until the input is changed. The deterministic simplex method was originally suggested by Spendley in 1962 and later improved by Nelder and Mead for finding local minima of a function of several variables, whose n + 1 trial points act as the vertices of a polyhedron. Based upon the objective function values at these vertices, the worst point (p_w), best point (p_b) and next-to-worst point (p_nw) are determined from the initial simplex [3, 9]. The overall process comprises three main operations: reflection, contraction and expansion. The amounts of contraction and expansion are controlled by the factors μ and σ. The method extrapolates the behavior of the objective function measured at each test point, progressing by finding a new test point and replacing one of the old test points with it, and so on. Mathematically,

    Centroid:  p_{cj} = (1/n) \sum_{i=1, i≠w}^{n+1} p_{ij}        (12)

    Reflected point:  p_{rj} = 2 p_{cj} − p_{wj}        (13)

    p_{new,j} = (1 + σ) p_{cj} − σ p_{wj},  if F_T(p_r) < F_T(p_b)
    p_{new,j} = (1 − μ) p_{cj} + μ p_{wj},  if F_T(p_r) ≥ F_T(p_w)
    p_{new,j} = (1 + μ) p_{cj} − μ p_{wj},  if F_T(p_{nw}) < F_T(p_r) < F_T(p_w)        (14)

The objective function is evaluated at this new point and the procedure is repeated until a stopping condition is fulfilled. The iterations terminate when the simplex has converged sufficiently according to Eq. (15):

    [ \sum_{i=1}^{n+1} (O_T(p_i) − O_T(p_c))^2 / (n+1) ]^{1/2} ≤ ε        (15)

where ε is the termination parameter. The recommended values for the parameters are σ ≈ 2.0, μ ≈ 0.5 and ε ≈ 0.001.
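A sketch of Eqs. (12)–(15) on a simple quadratic. The test function and starting simplex are arbitrary, and the case not covered by Eq. (14), where the reflected value falls between the best and next-to-worst values, is resolved here by accepting the reflected point:

```python
import numpy as np

def simplex_minimize(f, simplex, mu=0.5, sigma=2.0, eps=1e-3, max_it=500):
    """Deterministic simplex search per Eqs. (12)-(15); simplex is (n+1) x n."""
    simplex = np.array(simplex, dtype=float)
    for _ in range(max_it):
        vals = np.array([f(p) for p in simplex])
        w, b = vals.argmax(), vals.argmin()           # worst / best vertices
        nw = vals.argsort()[-2]                       # next-to-worst vertex
        pc = (simplex.sum(axis=0) - simplex[w]) / (len(simplex) - 1)  # Eq. (12)
        pr = 2 * pc - simplex[w]                      # reflection, Eq. (13)
        fr = f(pr)
        if fr < vals[b]:
            pnew = (1 + sigma) * pc - sigma * simplex[w]   # expansion
        elif fr >= vals[w]:
            pnew = (1 - mu) * pc + mu * simplex[w]         # contraction
        elif vals[nw] < fr < vals[w]:
            pnew = (1 + mu) * pc - mu * simplex[w]
        else:
            pnew = pr                                      # accept the reflection
        simplex[w] = pnew
        spread = np.sqrt(np.mean((vals - f(pc)) ** 2))     # Eq. (15)
        if spread <= eps:
            break
    return simplex[np.argmin([f(p) for p in simplex])]

obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2      # minimum at (1, -2)
x = simplex_minimize(obj, [[0.0, 0.0], [1.5, 0.0], [0.0, 1.5]])
print(x)
```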


3.3 Proposed Integrated PSO with DSM

The proposed PSODSM algorithm is described by the following pseudo code:

Algorithm 1. PSO
  Initiate PSO parameters
  FOR itr = 1 to itr_max
    FOR k = 1 to Np
      Compute v_pi and P_pi
    ENDFOR
    IF inequality constraint of Eq. (5) is satisfied, continue
    ELSEIF P_pi(k) ≤ P_i^min, set P_pi(k) = P_i^min
    ELSEIF P_pi(k) ≥ P_i^max, set P_pi(k) = P_i^max
    ENDIF
    IF equality constraint of Eq. (3) is satisfied, continue
    ELSE compute P_pi(k) using Eq. (10)
    ENDIF
    Compute f using Eq. (8)
    Compute objective function F_T using Eq. (6)
    Increment k = k + 1
    Select best_p and best_g
    Compute inertia weight using Eq. (11); update v_pi and P_pi using Eqs. (9) and (10)
  ENDFOR

Algorithm 2. Simplex
  Initiate simplex parameters
  CALL Algorithm 1
  FOR it = 1 to it_max
    Set p_w, p_b and p_nw
    Compute p_c and p_r using Eqs. (12) and (13)
    IF p_r < p_b, compute p_new = (1 + σ) p_cj − σ p_wj
    ELSEIF p_r ≥ p_w, compute p_new = (1 − μ) p_cj + μ p_wj
    ELSEIF p_nw < p_r < p_w, compute p_new = (1 + μ) p_cj − μ p_wj
    ENDIF
    Check constraints using Eqs. (3)–(5)
    IF convergence per Eq. (15) is satisfied, stop
    ELSE replace p_w by p_new
    ENDIF
    Increment it = it + 1
  ENDFOR

4 Test Systems: Results and Discussion

The validity of the proposed integrated algorithm is investigated using benchmark functions, and then its practical applicability is checked using different cases of the MCEED problem. The code is implemented in MATLAB R2014b on a system with an Intel Core i3 and 4 GB RAM. The PSO and DSM parameters were initialized after several trials at different values, and the best ones, used in all cases, are reported here: total particles in each swarm = 10, total members in one particle equal to the number of power units in each case study, wmin = 0.4, wmax = 0.9, c1 = c2 = 2, σ = 2.0, μ = 0.5 and ε = 0.001.

4.1 Validation of Proposed Algorithm Using Benchmark Functions

The benchmark functions considered for the validation of the proposed algorithm are Griewank, Rosenbrock, Step, Booth and Rastrigin. These are scalable, non-scalable,

continuous, discontinuous, unimodal, multimodal and separable in nature [10]. The results obtained using the proposed PSODSM are compared with those of the conventional PSO (Fig. 1, Table 1).

Fig. 1 Convergence characteristics for benchmark functions

4.2 Validation on Practical MCEED Problem

The proposed integrated algorithm is also tested on the practical problem of multiobjective constrained economic emission dispatch, on a six-unit and a forty-unit system representing small-scale and large-scale systems. The results obtained for the multiobjective dispatch after applying the conversion price penalty factor are shown in Table 2. From these results, it is concluded that the six-unit and forty-unit systems show minimum MCEED costs of 59082.12 $/h and 314062.26 $/h, respectively, for the min-max type price penalty factor, calculated using Eqs. (6)–(8). Case Study 1 (Small-scale system): This system has six units; the fuel cost coefficients, emission coefficients, generator limits and transmission loss coefficients are taken from [11]. The fuel cost and emissions calculated at a 1000 MW demand with the min-max type price penalty factor are 51207.4 $/h and 827.042 Kg/h. The results are compared with other methods from [12], and it is concluded that the proposed integrated method proves more effective in result quality. The convergence characteristics are also shown in Fig. 2 (Table 3). Case Study 2 (Large-scale system): This system consists of 40 units, with all generator data taken from [12]. The results obtained using the proposed method are compared with other cited algorithms discussed in [10]. From Table 4, it is concluded that the proposed integrated method proves its superiority in comparison to the other mentioned methods. The convergence characteristics for this case are also shown in Fig. 2.

Table 1 Comparison results for benchmark functions

S. No.  Benchmark function  PSODSM: Worst / Average / Best                     PSO: Worst / Average / Best
1       Griewank            9.46 × 10^-7 / 7.06 × 10^-8 / 7.18 × 10^-9         9.35 × 10^-2 / 3.30 × 10^-2 / 8.03 × 10^-3
2       Rosenbrock          2.94 / 1.91 / 1                                    15.79 / 13.75 / 12.49
3       Step                2.47 × 10^-17 / 2.54 × 10^-18 / 5.75 × 10^-20      9.94 × 10^-6 / 3.54 × 10^-6 / 7.51 × 10^-8
4       Booth               4.28 / 2.25 / 1                                    9.81 / 5.26 / 3.56
5       Rastrigin           8.15 × 10^-9 / 2.63 × 10^-9 / 3.40 × 10^-10        9.82 × 10^-2 / 3.01 × 10^-2 / 2.89 × 10^-3

Table 2 MCEED results for the considered case studies

Case study  Type of system      Number of generators  Power demand (MW)  MCEED ($/h) at the four price penalty factors
                                                                         Min-Max     Min-Min     Max-Max    Max-Min
1           Small scale system  6                     1000               59082.12    108072.11   93007.73   367050.96
2           Large scale system  40                    10,500             314062.26   694397.05   330862.05  888815.35

Fig. 2 Convergence characteristics for the considered case studies

Table 3 Comparison of results using PSODSM with other methods at 1000 MW

Methods                          Total FC ($/h)   Emissions, ET (Kg/h)
Gravitational search algorithm   51255.7          827.138
Differential evolution           51264.6          828.715
Simplified recursive             51264.6          828.715
Genetic algorithm similarity     51262.3          827.261
PSO                              51269.6          828.863
PSODSM                           51207.4          827.042

5 Conclusion

This paper presents the advantage of integrating an evolutionary, stochastic search method with a deterministic search method. Despite the wide success of nature-inspired particle swarm optimization in various engineering applications, it still has disadvantages such as premature convergence and stagnation as the number of iterations increases. Thus, there is still room for improvement in these swarm-based optimization methods. Here, stochastic particle swarm optimization is used as the base-level search to find the global optimum solution, and the results obtained are then refined by integrating it with the deterministic simplex method as a local-level search. The proposed integrated search method is validated on several benchmark functions and the results


Table 4 Comparison of large scale system with other methods

Unit  Pmin  Pmax  PSODSM   MODE [12]   NSGA-II [12]   SPEA-2 [12]
1     36    114   104.8    113.5295    113.8685       113.9694
2     36    114   85.2     114         113.6381       114
3     60    120   104.0    120         120            119.8719
4     80    190   158.2    179.8015    180.7887       179.9284
5     47    97    93.4     96.7716     97             97
6     68    140   136.4    139.276     140            139.2721
7     110   300   239.0    300         300            300
8     135   300   225.1    298.9193    299.0084       298.2706
9     135   300   292.6    290.7737    288.889        290.5228
10    130   300   215.7    130.9025    131.6132       131.4832
11    94    375   218.6    244.7349    246.5128       244.6704
12    94    375   309.2    317.8218    318.8748       317.2003
13    125   500   418.4    395.3846    395.7224       394.7357
14    125   500   462.5    394.4692    394.1369       394.6223
15    125   500   400      305.8104    305.5781       304.7271
16    125   500   400      394.8229    394.6968       394.7289
17    220   500   400      487.9872    489.4234       487.9857
18    220   500   496.4    489.1751    488.2701       488.5321
19    242   550   422      500.5265    500.8          501.1683
20    242   550   422      457.0072    455.2006       456.4324
21    254   550   422      434.6068    434.6639       434.7887
22    254   550   440.7    434.531     434.15         434.3937
23    254   550   546.4    444.6732    445.8385       445.0772
24    254   550   546.4    452.0332    450.7509       451.897
25    254   550   546.4    492.7831    491.2745       492.3946
26    254   550   546.4    436.3347    436.3418       436.9926
27    10    150   110      10          11.2457        10.7784
28    10    150   110      10.3901     10             10.2955
29    10    150   110      12.3149     12.0714        13.7018
30    47    97    93.4     96.905      97             96.2431
31    60    190   186.4    189.7727    189.4826       190
32    60    190   186.4    174.2324    174.7971       174.2163
33    60    190   123.3    190         189.2845       190
34    90    200   159.0    199.6506    200            200
35    90    200   161.7    199.8662    199.9138       200
36    90    200   196.4    200         199.5066       200
37    25    110   78.7     110         108.3061       110
38    25    110   80.7     109.9454    110            109.6912
39    25    110   80.5     108.1786    109.7899       108.556
40    242   550   369.6    422.0682    421.5609       421.8521
Total power (MW)            10697.9    10,500         10,500         10,500
Transmission losses (MW)    197.9      –              –              –
Total FC ($/h) × 10^5       1.2575     1.2579         1.2583         1.2581
Emissions, ET (kg/h) × 10^5 2.11       –              –              –
MCEED ($/h) × 10^5          3.14       –              –              –


obtained are compared with those of the conventional PSO method, proving its superiority in result quality. Further, the proposed algorithm is applied to the practical constrained problem of economic emission dispatch in thermal power plants, where it again shows its superiority compared to the other cited optimization methods.

Acknowledgement The authors are indebted to I.K.G. Punjab Technical University, Kapurthala, for providing the advanced research facilities while preparing this research paper.

References

1. Padhy, N.P., Simon, S.P.: Soft Computing: With MATLAB Programming. Oxford University Press (2015).
2. DeoBodha, K., Mukherjee, V., KumarYadav, V., Saurabh, K., Anium, S.: A Levy flight based voltage particle swarm optimization for multiple-objective mixed cost-effective emission dispatch. 8th International Conference on Cloud Computing, Data Science Engineering, pp. 1–6 (2018).
3. Deb, K.: Optimization for Engineering Design: Algorithms and Examples. PHI Learning Pvt. Ltd. (2012).
4. Kothari, D.P., Dhillon, J.S.: Power System Optimization. Prentice-Hall of India (2004).
5. Dosoglu, M.K., Guvenc, U., Duman, S., Sonmez, Y., Kahraman, H.T.: Symbiotic organisms search optimization algorithm for economic/emission dispatch problem in power systems. Neural Computing Applications. 29, 721–737 (2018).
6. Abido, M.A.: Multiobjective particle swarm optimization for environmental/economic dispatch problem. Electrical Power Systems Research. 79, 1105–1113 (2009).
7. Hadji, B., Mahdad, B., Srairi, K., Mancer, N.: Multi-objective PSO-TVAC for environmental/economic dispatch problem. Energy Procedia. 74, 102–111 (2015).
8. Chopra, N., Brar, Y.S., Dhillon, J.S.: Modified particle swarm optimization using simplex search method for multiobjective economic emission dispatch problem. 2017 3rd International Conference on Condition Assessment Techniques in Electrical Systems (CATCON), pp. 171–176. IEEE (2017).
9. Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms. John Wiley & Sons (2001).
10. Jamil, M., Yang, X.S.: A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimization. 4, 150 (2013).
11. Balamurugan, R., Subramanian, S.: A simplified recursive approach to combined economic emission dispatch. Electric Power Components and Systems. 36, 17–27 (2007).
12. Güvenç, U., Sönmez, Y., Duman, S., Yörükeren, N.: Combined economic and emission dispatch solution using gravitational search algorithm. Scientia Iranica. 19, 1754–1762 (2012).

Enhanced Webpage Prediction Using Rank Based Feedback Process K. Shyamala and S. Kalaivani

1 Introduction

Web usage has become one of the most exhaustive processes, raising critical performance-degradation issues and causing serious user latency problems [1]. Nowadays the World Wide Web plays an important role in our daily life, and web users are not willing to wait long for subsequent pages while browsing a website. Congestion is one of the major issues as user access to the internet increases [2–4]. Cache services, webpage prediction and prefetching are some of the existing methodologies for managing user latency problems. Webpage prediction produces a set of pages expected to be the future requests of the users [5, 6]. This prediction is based on the current and past access behavior of the web users. It is useful for predicting a user's relevant web page in advance and prefetching it, which reduces the latency of accessing webpages. Web caching is a technique used as the solution for the problem of these access latencies. Prediction must be accurate and helpful to the users, increasing the browsing speed of webpages; it proves unsuccessful if the webpages the users actually desire turn out to be irrelevant to the pages predicted. Prefetching fetches the webpages that are expected to be requested in the near future but are not being accessed by the users at present [7]. Predicted pages are fetched from the original web server and stored temporarily in the cache memory of the client.

K. Shyamala · S. Kalaivani () PG & Research Department of Computer Science, Dr. Ambedkar Government Arts College (Autonomous), Affiliated to University of Madras, Chennai, Tamil Nadu, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_55


1.1 Effective Use of Caching Techniques

Browser caching is a technique used to minimize user latency by prefetching the user's future requested pages [8]. To decrease the server burden and give the user faster access, the browser stores the accessed pages and also fetches pages based on the hints given to it. The proxy cache resides in the proxy server, which sits between the client machines and the original server. The proxy cache and the browser cache operate on the same principle, but at a much wider scale: the browser cache serves only a single user, while the proxy server serves thousands of users in the same manner. When a page request is received, the proxy server checks its cache and sends the requested page to the client if it is available. When the page is unavailable or expired, the proxy server requests it from the original server and sends it back to the client. The requested web pages are stored in the proxy's local cache for future requests. To decrease both user delay and internet congestion, web proxy caching is commonly used, especially by network administrators and technology providers. Various computations can be used to measure the efficiency and performance of web caching techniques. Hit Ratio (HR) and Byte Hit Ratio (BHR) are the metrics generally used for evaluating the performance of web proxy caching policies. Hit Ratio is defined as the ratio of the number of requests served from the proxy cache to the total number of requests made. Byte Hit Ratio refers to the number of bytes served from the cache divided by the total number of bytes served. Recently, the content of many websites is no longer static, and new pages are perpetually added to keep a website effective. This scenario needs a more advanced, online model that changes with the behavior of the users. Offline models are found to be suitable for applications that are stable. Online models are desirable when the users' demands insert and delete web pages cumulatively. Most of the present models are programmed only for offline prediction and need a component for processing offline tasks. Online models, as outlined, may soon become too large to fit in memory. It is clear that predicting webpages based only on the web server log file is not efficient. This paper presents a dynamic prediction algorithm for webpage prediction. In our previous work [9], prediction was done for offline sites, i.e., static websites that do not include any new pages. It was identified that most websites often perform two processes, inclusion and pruning of pages, to advance the website and improve user navigation behavior. The objective of this work is to provide efficient dynamic web page prediction through Monte Carlo prediction using a feedback process model.
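The two cache metrics defined above can be computed directly from a request log. A small sketch over a hypothetical log, where each entry records the URL, the response size in bytes, and whether it was served from the cache:

```python
def cache_metrics(requests):
    """Return (hit ratio, byte hit ratio) for a list of
    (url, size_in_bytes, served_from_cache) records."""
    hits = sum(1 for _, _, cached in requests if cached)
    hit_bytes = sum(size for _, size, cached in requests if cached)
    total_bytes = sum(size for _, size, _ in requests)
    return hits / len(requests), hit_bytes / total_bytes

log = [("/index.html", 10_000, True),
       ("/app.js",     50_000, False),
       ("/logo.png",   20_000, True),
       ("/data.json",  20_000, False)]
hr, bhr = cache_metrics(log)
print(hr, bhr)   # -> 0.5 0.3
```

Note that a policy can score well on HR while doing poorly on BHR (many small hits, few large ones), which is why both metrics are usually reported together.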

Enhanced Webpage Prediction Using Rank Based Feedback Process


2 Objectives

The following objectives are taken into consideration to achieve better web user navigation in a dynamic website:
• To enhance the user navigation graph construction so that it supports dynamic prediction.
• To enhance the prediction algorithm based on a prediction feedback process.
• To rank the web pages that are predicted correctly.
• To optimize the cache to reduce user latency.

3 Related Work

Different models have been put forward for modelling user navigation behavior and predicting the immediate demands of users. Prediction by Partial Match (PPM) is the most widely used prediction technique. Enhancements to the efficiency of the PPM model were discussed in [10], where three pruning prototypes were proposed: (a) support pruning, (b) confidence pruning and (c) error pruning. In that PPM model, states that did not appear frequently were pruned by the predictor.

In [11] the authors achieved fast access to web documents through caching. Web caching can be realized through the client browser cache, the server cache and proxies, which store and process the most recently accessed web documents. Access is rapid because frequently used documents are kept in cache memory, and cached content can be accessed easily by the client, server or proxy depending on the purpose for which it is used. A web cache diminishes latency dramatically, since it saves the time needed to access data/web pages local to the users, yields faster access to documents in the client's cache, and reduces the network traffic of the overall system.

A Content Delivery Network (CDN) is the most recent technique to reduce web user perceived latency [12], achieved by locating several servers close to the client. A CDN works like a distributed server which loads the web page from the server nearest to the requesting client; the loaded webpage is then shared between the geographically connected servers. Various organizations have installed their own CDNs and established their effectiveness, and many websites now use CDNs for reducing user perceived latency and for load balancing [13, 14].

From the existing work, it is identified that predicting future user requests with low accuracy leads to mis-prediction and degrades the performance of the system.
Hence the proposed system is presented to achieve efficient prediction and better accuracy through a feedback process in the prediction algorithm. Here the feedback process is the comparison of the current request with the previously predicted web pages in order to assign a rank.


K. Shyamala and S. Kalaivani

4 Enhanced Monte Carlo Prediction

In previous work [9], prediction was done based on a web server log file. The analysis was made with monthly log files, and a user navigation graph was constructed according to the user navigation behavior. A Monte Carlo Prediction (MCP) search algorithm was proposed [9] and implemented to predict the webpages a user will request next and to prefetch those pages into the client browser cache for fast access. Accuracy was calculated by checking the predicted pages against the most recently processed log file, and an increase in accuracy was shown when the user hit a predicted page. The present work extends the previous work by enhancing the MCP algorithm and applying prediction to dynamic webpages.

The MCP algorithm [9] operates on a fully constructed undirected graph built from a complete analysis of the web user log file, in two phases. The first phase covers graph construction: from the web user log file, the algorithm identifies the unique users and unique pages along with their navigation behavior, and the graph is constructed from this behavior. The second phase is the MCP algorithm [9] itself, a decision-making algorithm based on winning probability. Heuristic search algorithms play a major role in game theory; this technique has been introduced successfully into webpage prediction and has improved the performance of unigram webpage prediction. Here the unigram prediction model uses a one-word sequence to predict the pages the user will request next.

5 Implementation

In dynamic webpage prediction, a newly added page may not yet be present in the existing log file, so an offline prediction algorithm fails to predict recently added pages. The present work has two phases. The first phase implements graph construction with periodic updating of the graph for newly added webpages; it is similar to the previous graph construction with slight modifications. The second phase implements the feedback process between the user request and the prediction algorithm, in such a way as to update the rank of each predicted page.

5.1 Modified Graph Construction

The graph is constructed according to UNGC [9]; in this work it is extended with an automatic periodic update of the constructed graph to achieve dynamic webpage prediction.


Algorithm 1: Enhanced UNGC
Input: Log File
Output: Updated User Navigation Graph
1. Begin
2. Time ← Instantiate timer object
3. St ← Instantiate scheduled task class (graph construction class)
4. Tm ← assign time interval to perform the task (depends upon the dynamic website)
5. Time.schedule(St, 0, Tm)
6. End

The Enhanced UNGC algorithm instantiates a timer object which processes the scheduled task periodically. The scheduled task carries out the graph construction process from previous work [9]. A successful run of the Enhanced UNGC algorithm updates the graph according to the recent weblog entries. When a new page is identified, it is treated as another unique page and included in the graph according to the user navigation. For instance, consider a new webpage “x.htm” which was newly uploaded to the website and visited by a user: the UNGC algorithm identifies the new page “x.htm” and creates a new node for it. The node is created with the help of the reference string (i.e. the user navigation behavior), and edges are added for the new node. The access count for the page is then incremented whenever a user hits the page, and prediction is also done taking the new page into account. Our proposed Enhanced Monte Carlo Prediction (EMCP) algorithm with feedback ranking paves the way for efficient prediction on dynamic websites, such as e-commerce websites, which are updated frequently.
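The timer-driven rescheduling of Algorithm 1 can be sketched in Python (the original implementation was in Java). This is a hedged illustration: `PeriodicTask` and the dummy rebuild callback are stand-ins, and the real task would re-parse the recent weblog entries and update the graph.

```python
import threading
import time

# Sketch of the Enhanced UNGC idea: a timer re-runs the graph construction
# task at a fixed interval, mimicking Java's Time.schedule(St, 0, Tm).

class PeriodicTask:
    def __init__(self, interval, task):
        self.interval = interval   # seconds between rebuilds (site-dependent)
        self.task = task           # e.g. a graph-construction callable
        self._timer = None
        self.runs = 0

    def _run(self):
        self.runs += 1
        self.task()
        self.start()               # reschedule the next run

    def start(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

# Usage demo: a dummy rebuild every 20 ms for a short run.
task = PeriodicTask(0.02, lambda: None)
task.start()
time.sleep(0.1)
task.stop()
print(task.runs)   # several rebuilds occurred during the demo window
```

The interval would be tuned to how frequently the dynamic website changes, as line 4 of Algorithm 1 notes.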

5.2 Ranking the Pages Through a Feedback Process

The present work enhances the MCP algorithm by assigning a rank to each page; prediction is then made using the rank of the predicted webpages. When a set of pages is predicted for the future request, it is validated against the next request to check whether a predicted page was actually requested: the rank of a page is increased when the user request and the predicted page are the same. Here, the feedback process is the comparison between the current request and the previously predicted pages. All pages in the graph have rank zero at the beginning, and ranks are then updated through the feedback process. For example, when a user requests a page for the first time, the page has no rank; the ranking process starts from the second request. Figure 1 shows the prediction architecture with the rank based feedback process. The Enhanced Monte Carlo Prediction (EMCP) algorithm uses the feedback process to increase the rank of a webpage. After each prediction, there should


Algorithm 2: Enhanced Monte Carlo Prediction (EMCP) Algorithm
Input: Processing each request
Output: Assigning rank for each requested page
1. Begin
2. Req ← Identify the current request page
3. Rank ← list of ranks for each unique page, initially zero
4. Pred ← list of predicted pages for the previous request
5. For all pages in Pred do
6.   If (Req is equal to any of the pages in Pred) then
7.     Rank[Req] ← increase the rank by one (feedback process)
8.   Endif
9. Endfor
10. Mp ← call MCPA
11. Update the list in Pred: Mp ← Mp list with max(Rank) pages
12. Repeat the process for all requests
13. End

Fig. 1 Prediction Architecture with the feedback process

be a process which identifies whether the rank of the predicted page can be increased or not. This is done by monitoring the user request, and this monitoring is considered the feedback process.
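The feedback step of Algorithm 2 can be sketched as follows. This is an illustrative translation, not the authors' Java implementation: `predict_next` is a stand-in for the MCP step (line 10 of the algorithm), and the toy predictor and page names are hypothetical.

```python
from collections import defaultdict

# Sketch of the EMCP feedback loop: ranks start at zero, and a page's rank
# grows by one each time it appeared in the previous prediction list and is
# then actually requested (lines 5-9 of Algorithm 2).

def process_requests(requests, predict_next):
    rank = defaultdict(int)   # rank of each unique page, initially zero
    predicted = []            # pages predicted for the previous request
    for req in requests:
        if req in predicted:  # feedback: the prediction was correct
            rank[req] += 1
        predicted = predict_next(req, rank)   # MCP stand-in
    return dict(rank)

# Toy predictor that always suggests the same two pages (hypothetical).
ranks = process_requests(
    ["a.htm", "b.htm", "a.htm", "c.htm"],
    lambda req, rank: ["a.htm", "b.htm"],
)
print(ranks)   # a.htm and b.htm each gained rank once they were predicted
```

In the full algorithm the predictor would restrict the MCP candidate list to the pages with maximal rank, as in line 11.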

6 Results and Discussion

The working of the implemented model is elucidated through the Enhanced UNGC algorithm and the Enhanced Monte Carlo Prediction (EMCP) algorithm. In this work, the user navigation graph is constructed according to user navigation

Table 1 Monte Carlo values and rank of the selected adjacent pages

Pages  MP     Rank based feedback process
X1     16.27  23
X2     10.07   7
X3     20.53  31
X4     17.91  19
X5     10.19  14
X6     18.75  19
X7     10.30   6
X8      9.32   4
X9      7.65   4
X10    14.54  17

behavior, and the graph is then updated periodically. After graph construction, the prediction algorithm runs for each request to predict the user's future request through Monte Carlo prediction. Prediction proceeds from the current request node by identifying all its adjacent nodes; MCP is then applied on the basis of the frequency count and the number of users who hit the page in the unigram sequence. The rank of a predicted page is increased through the feedback connection when the user hits the same page. This implementation shows a performance improvement over MCP. The NASA Kennedy Space Center [15], ClarkNet, Calgary and SEC.gov [16] datasets are used in the present and previous work.

Table 1 shows sample pages along with their calculated Monte Carlo Prediction values and ranks. From Table 1, the maximum MP value together with the preferred maximum rank value is considered for the current prediction. Figure 2 depicts a graphical representation of the Monte Carlo values and ranks of the considered adjacent pages.

This implementation improves prediction performance, but it is also necessary to answer the challenges raised by the enhanced MCP: when the graph is updated periodically its size grows, and as a result processing time may degrade. Pruning of unused pages from the user navigation graph is introduced to overcome this problem. Pruning removes unused pages (pages with a very low hit count) from the graph, but not from the site. Unused pages are removed by considering the creation time of the node (page) and comparing its hit count with the threshold frequency [9].
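The pruning step described above can be sketched as follows. This is an assumed representation: the node fields (`hits`, `created`, `adj`) and the thresholds are illustrative, not the paper's data structures.

```python
# Sketch of pruning: nodes older than a grace period whose hit count is
# below the threshold frequency are dropped from the graph (not the site),
# along with any edges pointing at them.

def prune(graph, now, threshold, min_age):
    """graph: {page: {"hits": int, "created": float, "adj": set}}"""
    stale = {p for p, n in graph.items()
             if now - n["created"] >= min_age and n["hits"] < threshold}
    for p in stale:
        del graph[p]
    for n in graph.values():       # drop edges to the removed nodes
        n["adj"] -= stale
    return graph

g = {"a": {"hits": 40, "created": 0, "adj": {"b"}},
     "b": {"hits": 2,  "created": 0, "adj": {"a"}}}
prune(g, now=100, threshold=5, min_age=50)
print(sorted(g))   # only "a" survives; its edge to "b" is removed
```

Checking the node's creation time first prevents a freshly added page from being pruned before it has had a chance to accumulate hits.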

6.1 Accuracy

The proposed algorithms were implemented in Java using the NetBeans 8.0.1 IDE. The implementation was evaluated on the datasets considered in [9], and accuracy is computed using the formula given in (1).



Fig. 2 Monte Carlo values and rank of adjacent pages

Table 2 Dataset used for experimental analysis and accuracy

Datasets      No. of records (after preprocess)  Accuracy for MCP algorithm  Accuracy for rank based feedback algorithm
DS1—NASA      1,032,471                          71%                         75%
DS2—SEC.gov   455,893                            68%                         71%
DS3—ClarkNet  865,234                            70%                         72%
DS4—Calgary   134,677                            66%                         69%

Accuracy = (Pages Predicted Correctly ÷ Total pages considered) × 100%   (1)

Table 2 lists the datasets considered in [9] and in the present work, and shows the improved performance obtained with the rank based (i.e. enhanced MCP) algorithm. Figure 3 shows a graphical representation of the accuracy of the MCP and prediction based feedback algorithms. Accuracy increases when the rank of each predicted page is included through the feedback process. The experimental results show the improvement in predicting and prefetching webpages: web user latency is reduced by using the rank based feedback process, which suits dynamic websites and e-commerce websites.


Fig. 3 Accuracy after feedback process

7 Conclusion

The successful implementation of the MCP algorithm with a feedback ranking process improves on the MCP implemented in our previous work [9]. In this work, a unigram sequence was used to predict the user's future request. Dynamic webpage prediction is achieved by updating the constructed graph periodically, and pruning unused nodes (pages) from the graph gives better performance. The implemented model is well suited to prediction on e-commerce websites, where it gives effective results. This work not only improved performance but also points toward improving prefetching by introducing a content delivery network to reduce user latency, and motivates new caching techniques for prefetching the predicted pages.

References

1. Moghaddam, Alborz, and Ehsanollah Kabir. Dynamic and memory efficient web page prediction model using LZ78 and LZW algorithms. 14th International CSI Computer Conference (CSICC 2009), IEEE, 2009.
2. Chen, Xin, and Xiaodong Zhang. A popularity-based prediction model for web prefetching. Computer 36(3) (2003): 63–70.
3. Themistoklis Palpanas. Web Prediction Using Partial Match Prediction. Technical Report CSRG-376, Department of Computer Science, University of Toronto. www.it.iitb.ac.in/~it620/papers/WebReqPatterns.pdf.
4. Dario Bonino, Fulvio Corno, Giovanni Squillero; Politecnico di Torino. An Evolutionary Approach to Web Request Prediction. The Twelfth International World Wide Web Conference, 20–24 May



5. Sunil Kumar and Mala Kalra. Web Page Prediction Techniques: A Review. International Journal of Computer Trends and Technology (IJCTT), July 2013, Vol 4(7), ISSN: 2231–2803, pp. 2062–2066.
6. Vidhya, R. Predictive Analysis of Users Behaviour in Web Browsing and Pattern Discovery Networks. International Journal of Latest Trends in Engineering and Technology (IJLTET), Vol 4(1), ISSN 2278–62, May 2014.
7. Gellert, Arpad, and Adrian Florea. Web prefetching through efficient prediction by partial matching. World Wide Web 19(5) (2016): 921–932.
8. Marshak, Marik, and Hanoch Levy. Evaluating web user perceived latency using server side measurements. Computer Communications 26(8) (2003): 872–887.
9. K. Shyamala and S. Kalaivani. Application of Monte Carlo Search for Performance Improvement of Webpage Prediction. International Journal of Engineering & Technology (UAE), Vol 7, No 3.4, 2018, pp. 133–137.
10. M. Deshpande and G. Karypis. Selective Markov models for predicting Web page accesses. Vol. 4, ACM Press, New York, NY, USA, 2004, pp. 163–184.
11. Greg Barish and Katia Obraczka. World Wide Web Caching: Trends and Techniques. IEEE Communications Magazine, 38(5): 178–184, May 2000.
12. Akamai, http://www.akamai.com.
13. Digital Island, http://www.sandpiper.net.
14. K.L. Johnson, J.F. Carr, M.S. Day, M.F. Kaashoek. The measured performance of content distribution networks. Fifth International Workshop on Web Caching and Content Distribution, Lisbon, Portugal, June 2000.
15. Dataset: http://ita.ee.lbl.gov/html/contrib/NASAHTTP.html.
16. U.S. Govt website: https://www.sec.gov/dera/data/edgar-log-file-data-set.html

A Study on Distance Based Representation of Molecules for Statistical Learning Abdul Wasee, Rajib Ghosh Chaudhuri, Prakash Kumar, and Eldhose Iype

1 Introduction

Statistical learning techniques are gaining popularity due to their ability to describe large data sets with high accuracy at a much smaller computational effort. Machine Learning (ML) [1, 3, 11, 13], a recent advancement in data science, is gaining popularity in the computational community [14, 20, 21]. With proper training of the model, machine learning tools promise to be valuable in predicting electronic properties across chemical space [26], whereas sampling the chemical space with the traditional density functional theory (DFT) method suffers from high computational cost. As a rough approximation, one can say that the ground state electronic properties are approximate functions of the nuclear coordinates of the atoms and their valencies: for a system of atoms, if the atomic positions are fixed, its electronic energy is also fixed. Machine learning regressors can therefore be trained to predict the energies of molecular systems based on these atomic positions or patterns.

A. Wasee · P. Kumar · E. Iype () Department of Chemical Engineering, BITS Pilani, Dubai Campus, Dubai, United Arab Emirates e-mail: [email protected] R. G. Chaudhuri Department of Chemical Engineering, National Institute of Technology Durgapur, Durgapur, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_56



A. Wasee et al.

In the current work, we use this approach to predict the molecular energies of a number of molecular systems using atomic coordinates and charges. Another reason to use the overall energy is that per-atom energies are not observable from DFT.

2 Methodology

2.1 Regressors

We have tested a number of regressors on our data, which comprise water monomer configurations, Sin clusters, and methane and ethane monomers. The regressors are: the linear regression model [15, 27], linear ridge regression [9], Bayesian ridge regression [8, 16], Theil-Sen regression [4, 5, 10], Huber regression [4, 5, 10], random sample consensus (RANSAC) [6, 19], decision tree regression [18, 23] and multilayer perceptron regression [12, 22].
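All eight regressors are available in scikit-learn; the correspondence below is an assumption (the paper does not state its software stack), and the default hyperparameters shown are not necessarily the authors' settings.

```python
# Assumed scikit-learn counterparts of the eight regressors named above,
# instantiated with default hyperparameters for illustration.
from sklearn.linear_model import (LinearRegression, Ridge, BayesianRidge,
                                  TheilSenRegressor, HuberRegressor,
                                  RANSACRegressor)
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor

regressors = {
    "Linear": LinearRegression(),
    "Ridge": Ridge(),
    "Bayesian Ridge": BayesianRidge(),
    "Theil-Sen": TheilSenRegressor(),
    "Huber": HuberRegressor(),
    "RANSAC": RANSACRegressor(),
    "Decision Tree": DecisionTreeRegressor(),
    "MLP": MLPRegressor(),
}
```

Each of these exposes the same `fit(X, y)` / `predict(X)` interface, which is what makes the side-by-side comparison in the results section straightforward.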

2.2 Data Set

Two types of data sets were used to test the performance of the above-mentioned regressors. First, a single water molecule is considered, and the positions of the atoms are randomly perturbed to mimic molecular vibration and bending of the structure, generating 10,000 frames. For each frame, a DFT single point calculation is performed with the PW91 [17] xc-functional and the TZ2P [25] basis set using the ADF package [7, 24]. The structures and their computed energies are stored in separate files. The same procedure is repeated for methane and ethane molecules separately.

Secondly, molecular clusters Sin, where n = {2, 3 · · · 25}, were used. For each n, starting from an initial random structure, a geometry optimization is performed using the above DFT settings, and the structure and energy at every SCF (self-consistent field) iteration are saved. This results in a total of 612 atomic configurations for the Sin clusters.

The DFT calculations performed on the molecules (water, methane, and ethane) produced a large set of atomic coordinates and energies. The configurations sampled for all three molecules lie around the optimized structures, so that the configurational space represents the region of phase space the molecule visits during a molecular mechanics run. For the Sin clusters, n varies from 2 to 25. As one expects, the binding energy per Si atom in a cluster increases with the number of atoms n first


Fig. 1 DFT optimized energies of Sin clusters. The energy is scaled by 1/n, where n is the number of atoms in each cluster

before reaching a plateau [2]. This explains the increased stability of Sin clusters as n increases. A similar trend is visible in our calculations, as per Fig. 1.

3 Results

3.1 Regression Results for Sin Clusters

The dataset for Sin (n = 2, 3 · · · 25) is used to test the performance of the various regressors. The X dataset contains the geometric information of all the atoms in each cluster, and the y data contain the energies. Each element in X is a vector with 3 × n entries, where three stands for the xyz coordinates of each atom. Since n varies with the size of the cluster, the vector length is fixed at the maximum possible (i.e. 3 × 25) by padding shorter vectors with zeros.

The results of the regression using the different regressors are shown in Table 1. The RMS errors and R² values show that the quality of the fit is very good for all the regressors except Theil-Sen and RANSAC. The − sign indicates that the R² value is negative and therefore represents a poor fit. For Theil-Sen and RANSAC, the RMS errors show large deviations relative to the average energy value (i.e. 3.6 eV), and in addition the R² value is negative for RANSAC. This shows the poor quality of the fit for these two regressors. In addition, the data set is shuffled and a 10-fold cross-validation is performed; the mean of the absolute error (column 2 in Table 1) also shows that the regressors (except Theil-Sen and RANSAC) performed well on this data set.
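The zero-padding of variable-length coordinate vectors described above can be sketched as follows. The coordinates are random stand-ins and `cluster_features` is a hypothetical helper; the fixed length 3 × 25 = 75 comes from the text.

```python
import numpy as np

# Sketch of the feature construction: each Si_n cluster becomes a flat
# vector of 3*n coordinates, zero-padded to the largest size (3 * 25 = 75)
# so that every sample has the same length.

N_MAX = 25

def cluster_features(coords):
    """coords: (n, 3) array of xyz positions, with n <= N_MAX."""
    vec = np.zeros(3 * N_MAX)
    flat = np.asarray(coords).ravel()
    vec[:flat.size] = flat
    return vec

rng = np.random.default_rng(0)
x = cluster_features(rng.normal(size=(7, 3)))   # a 7-atom cluster
print(x.shape)                    # (75,)
print(np.count_nonzero(x[21:]))   # padded tail is all zeros -> 0
```

A caveat of this encoding is that the regressor must learn to ignore the zero tail, which is one reason the smallest clusters and the largest ones occupy different regions of the feature space.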

Table 1 Results of various regressors on Sin clusters (all error values in eV)

Regressor                 10-Fold CV ⟨|error|⟩  Max error  RMS error  R²
Ridge Regressor           0.04                  0.24       0.06       0.99
Linear Regressor          0.03                  0.23       0.05       0.99
Decision Tree Regressor   0.01                  0.33       0.05       0.99
MLP Regressor             0.06                  0.33       0.09       0.99
Bayesian Ridge Regressor  0.03                  0.22       0.05       0.99
Theil Sen Regressor       0.32                  4.95       0.87       0.05
RANSAC Regressor          1.47                  44.98      7.90       −
Huber Regressor           0.04                  0.35       0.08       0.99

Table 2 Results of various regressors on the water molecule (all error values in kcal/mol)

Regressor                 10-Fold CV ⟨|error|⟩  Max error  RMS error  R²
Ridge Regressor           5.44                  45.40      7.34       0.03
Linear Regressor          5.44                  46.73      7.34       0.03
Decision Tree Regressor   0.58                  9.22       0.96       0.98
MLP Regressor             2.95                  21.83      3.84       0.73
Bayesian Ridge Regressor  5.45                  47.82      7.36       0.03
Theil Sen Regressor       6.62                  60.41      9.73       −
RANSAC Regressor          6.61                  57.74      9.81       −
Huber Regressor           5.09                  49.84      7.64       −

3.2 Regression Results for Water Molecules

Next, the regressors were tested using a dataset of water molecules. An optimized water molecule structure was taken and given random perturbations of ±0.1 Å in each direction (xyz) for every atom, and the DFT energies were computed using single point calculations. The dataset contains 10,000 entries, and the overall prediction performance of each regressor is given in Table 2. Since the water molecule contains two types of atoms, an integer variable representing the atom type is added alongside the xyz coordinates in the data set.

In comparison with the Sin data, the performance of the regressors is not satisfactory, except for the Decision Tree and MLP regressors, which gave R² values of 0.98 and 0.73 respectively. All other regressors performed poorly on this dataset. One of the main reasons could be that the structural variations between entries are too small compared to, for example, the Sin case, where there were 25 different structures differing from each other by at least one Si atom; such variation does not exist in the water dataset. Another reason could be the complex interactions within the water molecule itself. Perhaps the regressors find it difficult

Table 3 Results of various regressors on the water molecule using xyzq (all error values in kcal/mol)

Regressor                 10-Fold CV ⟨|error|⟩  Max error  RMS error  R²
Ridge Regressor           3.94                  30.12      5.47       0.46
Linear Regressor          3.94                  30.74      5.47       0.46
Decision Tree Regressor   0.55                  9.51       0.93       0.98
MLP Regressor             3.31                  24.16      4.35       0.66
Bayesian Ridge Regressor  3.94                  31.02      5.48       0.46
Theil Sen Regressor       3.75                  32.51      5.56       0.44
RANSAC Regressor          4.47                  44.89      6.96       0.12
Huber Regressor           3.70                  32.73      5.67       0.42

to capture the inherent dependency of the molecular energies on small variations of the structures. To improve the prediction, we added Mulliken charges to the data set in addition to the xyz coordinates and atom type, increasing the per-atom feature vector from length 4 (atom type, x, y, z) to 5 (atom type, x, y, z and Mulliken charge q). The results are given in Table 3. All the regressors showed significant improvement, and the Decision Tree Regressor continued to outperform the others. Although the other regressors improved significantly, their R² values are still not satisfactory (< 0.5), except for the MLP Regressor. RANSAC is the poorest performing regressor on the water dataset.

Although adding atomic charges improves the fit, it may not be a viable solution for practical applications: to calculate charges one must perform DFT or another charge estimation method such as the Electronegativity Equilibration Method (EEM) or the Charge Equilibration method (QEq), and using such methods would kill the computational advantage of a machine learning prediction. Thus, the Decision Tree algorithm remains the best choice on these data sets.
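The two per-atom feature layouts discussed above (with and without the charge q) can be sketched as follows. The atom types and charge values shown are illustrative stand-ins for a single water configuration, not computed Mulliken charges.

```python
# Sketch of the per-atom feature rows: [type, x, y, z] grows to
# [type, x, y, z, q] in the "xyzq" variant. Values are illustrative.

ATOM_TYPE = {"O": 8, "H": 1}   # encode the element as its atomic number

def atom_features(symbol, xyz, charge=None):
    row = [ATOM_TYPE[symbol], *xyz]
    if charge is not None:      # the "xyzq" variant
        row.append(charge)
    return row

water = [("O", (0.000,  0.000,  0.117), -0.66),
         ("H", (0.000,  0.757, -0.470),  0.33),
         ("H", (0.000, -0.757, -0.470),  0.33)]

xyz_only = [atom_features(s, p) for s, p, _ in water]
xyzq = [atom_features(s, p, q) for s, p, q in water]
print(len(xyz_only[0]), len(xyzq[0]))   # 4 5
```

Concatenating the per-atom rows of one configuration then yields the single feature vector that is fed to a regressor.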

3.3 Regression Results for Methane and Ethane Molecules

To test how the complexity of the molecular structure affects the prediction results, we used methane and ethane molecules in the same way as the water molecule above. 10,000 configurations of each molecule were generated from the equilibrium structure by perturbing the xyz coordinates by a maximum of 0.1 Å. The prediction results are given in Tables 4 and 5; in this case the feature vector consists of the xyz coordinates, atom types and Mulliken charges. The performance of the regressors for both methane and ethane was relatively better than for the water molecule. In the case of methane, the



Table 4 Results of various regressors on the methane molecule using xyzq (all error values in kcal/mol)

Regressor                 10-Fold CV ⟨|error|⟩  Max error  RMS error  R²
Ridge Regressor           2.36                  19.29      3.20       0.75
Linear Regressor          2.37                  19.33      3.20       0.76
Decision Tree Regressor   1.90                  19.08      2.63       0.83
MLP Regressor             9.64                  46.43      12.12      −
Bayesian Ridge Regressor  2.36                  19.43      3.20       0.75
Theil Sen Regressor       2.35                  18.40      3.21       0.75
RANSAC Regressor          2.75                  23.09      3.79       0.66
Huber Regressor           2.31                  20.86      3.25       0.75

Table 5 Results of various regressors on the ethane molecule (all error values in kcal/mol)

Regressor                 10-Fold CV ⟨|error|⟩  Max error  RMS error  R²
Ridge Regressor           2.86                  19.43      3.74       0.78
Linear Regressor          2.85                  19.06      3.73       0.78
Decision Tree Regressor   4.11                  26.60      5.38       0.55
MLP Regressor             42.36                 188.24     52.90      −
Bayesian Ridge Regressor  2.85                  18.44      3.73       0.78
Theil Sen Regressor       2.85                  18.81      3.73       0.78
RANSAC Regressor          3.63                  22.13      4.72       0.65
Huber Regressor           2.83                  20.01      3.76       0.78

Decision Tree regressor continued to outperform all other regressors. The performance of all the regressors without the charges is given in the Supporting Information; except for the Decision Tree regressor, every regressor performed poorly in that case. Another important aspect is that the R² value of the Decision Tree Regressor did not change when the charges were added; a similar observation holds for the water dataset. Thus, the performance of the Decision Tree Regressor is not affected by the absence of atomic charges, which is a huge advantage in the sense that no separate charge estimator is needed for the regressor to work properly.

4 Conclusion

The performance of eight machine learning regressors is compared on four data sets of molecular energies. The algorithms tested are the Ridge, Linear, Decision Tree, MLP, Bayesian Ridge, Theil Sen, RANSAC and Huber regressors. The data sets comprise 612 molecular clusters of Sin with n = {2, 3 · · · 25}, and 10,000 configurations each of water, methane and ethane molecules.


Table 6 Overall performance (in terms of R² value) of all the regressors on the four data sets

Regressor                 Sin clusters  Water  Methane  Ethane
Ridge Regressor           0.99          0.46   0.75     0.78
Linear Regressor          0.99          0.46   0.76     0.78
Decision Tree Regressor   0.99          0.98   0.83     0.55
MLP Regressor             0.99          0.66   −        −
Bayesian Ridge Regressor  0.99          0.46   0.75     0.78
Theil Sen Regressor       0.05          0.44   0.75     0.78
RANSAC Regressor          −             0.12   0.66     0.65
Huber Regressor           0.99          0.42   0.75     0.78

The overall performance of the regressors is tested using 10-fold cross-validation, and the R² values are tabulated in Table 6. As in the previous tables, a − symbol implies that the R² value is negative and the prediction is poor. For the Sin clusters, the performance of all the regressors is good except for the Theil Sen and RANSAC regressors; the feature set for Sin includes only the xyz coordinates. In the case of water, using just the xyz coordinates did not yield a satisfactory prediction for any regressor except the Decision Tree Regressor (see Supporting Information). After adding the atomic charges, the prediction improved for all the regressors except the Decision Tree Regressor, whose prediction remained the same. Similarly, for methane, the best performing regressor was the Decision Tree Regressor, although the other regressors except MLP showed significant improvement compared to the water data. In all three of these data sets the Decision Tree Regressor came out as the best performing regressor, and notably the absence of atomic charges in the feature set did not reduce its prediction quality. For the ethane data set, the predictions of the regressors other than the Decision Tree and MLP regressors are better after adding the atomic charges; the MLP regressor is not satisfactory at all for methane and ethane. The Decision Tree Regressor's performance again remained the same after removing the charges, making it once more the best performing regressor on this set, although its prediction results are only satisfactory. A number of other regressors were also tested, including the SGD Regressor, LASSO, Elastic Net, SVR and PLSRegression, and none of them was found to be satisfactory on any data set apart from the Sin clusters.
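The evaluation protocol used throughout (10-fold cross-validation reporting mean absolute error, plus R² on predictions) can be sketched with a stand-in least-squares model. The helper names and toy data below are illustrative, not the authors' code; the R² formula is the standard 1 − SS_res/SS_tot.

```python
import numpy as np

# Sketch of the evaluation protocol: 10-fold CV reporting mean |error|,
# and R^2 computed from residuals. np.polyfit (a 1-D least-squares line)
# stands in for the eight regressors compared in the text.

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def kfold_mean_abs_error(X, y, k=10, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):          # held-out indices
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(X[train], y[train], 1)  # stand-in model
        errs.append(np.mean(np.abs(np.polyval(coef, X[fold]) - y[fold])))
    return float(np.mean(errs))

X = np.linspace(0, 1, 100)
y = 2 * X + 1                      # noiseless line: errors should be ~0
print(kfold_mean_abs_error(X, y))  # tiny (float noise only)
print(r2_score(y, 2 * X + 1))      # 1.0 for a perfect prediction
```

A negative R² (the − entries in the tables) simply means the model predicts worse than the constant mean of the data, which is why it is reported as a poor fit.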

References

1. Christopher M. Bishop. Pattern Recognition and Machine Learning, volume 4. 2006.
2. Abdellaziz Doghmane, Linda Achou, and Zahia Hadjoub. Determination of an analytical relation for binding energy dependence on small size silicon nanoclusters (nSi ≤ 10 at.). Journal of Optoelectronics and Advanced Materials, 18(7–8):685–690, 2016.



3. Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification, 2001.
4. R. Dutter. Algorithms for the Huber estimator in multiple regression. Computing, 18(2):167–176, 1977.
5. Håkan Ekblom. A new algorithm for the Huber estimator in linear models. BIT, 28(1):123–132, 1988.
6. Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6):381–395, 1981.
7. Célia Fonseca Guerra, J. G. Snijders, G. te Velde, and Evert Jan Baerends. Towards an order-N DFT method. Theoretical Chemistry Accounts, 99:391–403, 1998.
8. Yoel Haitovsky and Yohanan Wax. Generalized ridge regression, least squares with stochastic prior information, and Bayesian estimators. Applied Mathematics and Computation, 7(2):125–154, 1980.
9. Douglas M. Hawkins, Subhash C. Basak, and Xiaofang Shi. QSAR with Few Compounds and Many Features. Journal of Chemical Information and Computer Sciences, 41(3):663–670, 2001.
10. P. J. Huber. Robust Statistics. Statistics, 60(1986):1–11, 2004.
11. David J. C. MacKay. Information Theory, Inference, and Learning Algorithms, volume 100. 2005.
12. Jan Mielniczuk and Joanna Tyrcha. Consistency of multilayer perceptron regression estimators. Neural Networks, 6(7):1019–1022, 1993.
13. Tom M. Mitchell. Machine Learning. Number 1. 1997.
14. Gregoire Montavon, Matthias Rupp, Vivekanand Gobre, Alvaro Vazquez-Mayagoitia, Katja Hansen, Alexandre Tkatchenko, Klaus Robert Muller, and O. Anatole von Lilienfeld. Machine learning of molecular electronic properties in chemical compound space. New Journal of Physics, 15, 2013.
15. Douglas C. Montgomery, Elizabeth A. Peck, and G. Geoffrey Vining. Introduction to Linear Regression Analysis (5th ed.). Technometrics, 49(December):232–233, 2011.
16. Leena Pasanen, Lasse Holmström, and Mikko J. Sillanpää. Bayesian LASSO, scale space and decision making in association genetics. PLoS ONE, 10(4):1–26, 2015.
17. John P. Perdew and Yue Wang. Accurate and simple analytical representation of the electron-gas correlation energy. 45(23):244–249, 1992.
18. J. R. Quinlan. Induction of Decision Trees. Machine Learning, 1(1):81–106, 1986.
19. Rahul Raguram, Ondrej Chum, Marc Pollefeys, Jiri Matas, and Jan Michael Frahm. USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):2022–2038, 2013.
20. Raghunathan Ramakrishnan, Pavlo O. Dral, Matthias Rupp, and O. Anatole von Lilienfeld. Big Data meets Quantum Chemistry Approximations: The Delta-Machine Learning Approach. Journal of Chemical Theory and Computation, 2015.
21. Raghunathan Ramakrishnan, Mia Hartmann, Enrico Tapavicza, and O. Anatole von Lilienfeld. Electronic spectra from TDDFT and machine learning in chemical space. Journal of Chemical Physics, 143(8), 2015.
22. David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learning Internal Representations by Error Propagation, 1986.
23. Claude Sammut and Geoffrey Webb. Encyclopedia of Machine Learning and Data Mining. Springer, 2nd edition, 2017.
24. G. te Velde, F. M. Bickelhaupt, E. J. Baerends, C. Fonseca Guerra, S. J. A. van Gisbergen, J. G. Snijders, and T. Ziegler. Chemistry with ADF. Journal of Computational Chemistry, 22(9):931–967, 2001.
25. E. van Lenthe and E. J. Baerends. Optimized Slater-type basis sets for the elements 1–118. Journal of Computational Chemistry, 24(9):1142–1156, 2003.

A Study on Distance Based Representation of Molecules for Statistical Learning

585

26. O. Anatole Von Lilienfeld. First principles view on chemical compound space: Gaining rigorous atomistic control of molecular properties. International Journal of Quantum Chemistry, 113(12):1676–1689, 2013. 27. Yan Xin and Xiao Gang Su. Linear Regression Analysis: Theory and Computing. World Scientific Publishing Co., Inc., River Edge, NJ, USA.

Comparative Analysis of Evolutionary Approaches and Computational Methods for Optimization in Data Clustering

Anuradha D. Thakare

1 Introduction

Extracting data and drawing useful patterns from it is an important part of the knowledge discovery process. Among the several steps of knowledge discovery, clustering becomes prominent, especially when the search space is complicated. Data analysis has various phases: confirmatory or exploratory analysis is carried out with appropriate computational models and data sources, while the key step is data alignment and grouping. Natural groups are discovered through the data analysis process, and an important objective of clustering is to find such natural groups in unstructured and complicated data. This requires not only a data-centric computational model but also efficient machine learning algorithms. Real-world data is mostly unstructured and multimodal in nature, so traditional machine learning algorithms converge early and produce poor-quality clusters. Initial partitions are important for the further knowledge discovery process and for avoiding the local-optima problem in clustering. To overcome these issues and provide good-quality initial partitions, various heuristic search techniques may be used, and researchers have devised various hybrid algorithms to address them. Evolutionary algorithms such as Genetic Algorithms (GAs) have powerful domain-search ability and, over a number of generations, slowly approach optimal solutions. The fitness function in a GA depends mostly on the objective of the problem being addressed; the success criteria are the selection of GA operators and the mathematical design of an appropriate fitness function.
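As a concrete illustration of these ideas, the sketch below evolves cluster label assignments with selection, single-point crossover, and mutation, using within-cluster scatter as the fitness function. The data, operators, and parameter values are illustrative choices, not taken from any of the surveyed papers.

```python
import random

# Toy 1-D data with two natural groups (hypothetical example values).
DATA = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
K = 2
random.seed(0)

def fitness(labels):
    """Negative within-cluster sum of squared distances (higher is better)."""
    total = 0.0
    for k in range(K):
        members = [x for x, lab in zip(DATA, labels) if lab == k]
        if not members:
            continue
        centroid = sum(members) / len(members)
        total += sum((x - centroid) ** 2 for x in members)
    return -total

def mutate(labels, rate=0.2):
    """Reassign each point to a random cluster with probability `rate`."""
    return [random.randrange(K) if random.random() < rate else lab
            for lab in labels]

def crossover(a, b):
    """Single-point crossover of two label strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=60, pop_size=20):
    pop = [[random.randrange(K) for _ in DATA] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The chromosome here is simply a label per data point; a real fitness function would be designed around the objective of the problem, as the text notes.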

A. D. Thakare () Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, Maharashtra, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_57


2 Related Research

2.1 Research in Data Clustering Using Genetic Algorithms

Categorization and classification of unlabeled data is a tedious process. Classification problems with unlabeled data fall into the category of unsupervised classification and further require finding correlations among the available data. Correlation-based document clustering with a Genetic Algorithm was proposed in [1], where GAs are applied to the correlation clustering problem and performance is estimated with a correlation-based clustering precision. The clustering division in each GA generation slowly approaches better results; on UCI document data sets, the division selected by correlation-based clustering precision outperformed the other divisions. A modified genetic algorithm with self-organization capability for data clustering was proposed which uses a semantic similarity measure to reveal relationships in the data [2]. This self-organized genetic algorithm efficiently evolves meaningful relations among data and forms groups. It is mainly influenced by two important features of Genetic Algorithms, diversification of the sample population and the selection criteria, whereas the k-means algorithm is limited as far as population diversity is concerned [3]. The two most similar documents are considered neighbors of each other, and two documents are connected to represent shared neighbors. Global information for measuring the closeness of two documents can then be obtained from the neighbors and links involved, instead of just pairwise similarity. Automatic abstracting of multi-documents with text clustering and semantic analysis [4] overcomes the shortcomings of traditional multi-document methods: semantic analysis is used for the automatic abstracting, and a twice-word segmentation method is proposed for the title and first sentences of each paragraph.
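The neighbor-and-link idea can be made concrete with a small sketch: documents become neighbors when their pairwise similarity exceeds a threshold, and the link between two documents counts their common neighbors. The documents, the Jaccard measure, and the threshold below are hypothetical illustrations, not the cited authors' implementation.

```python
# Documents modeled as term sets (hypothetical example data).
docs = {
    "d1": {"genetic", "algorithm", "clustering"},
    "d2": {"genetic", "algorithm", "mutation"},
    "d3": {"clustering", "algorithm", "documents"},
    "d4": {"weather", "forecast", "rain"},
}

def jaccard(a, b):
    """Pairwise similarity of two term sets."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.3  # hypothetical similarity threshold

def neighbors(doc_id):
    """All documents (including itself) whose similarity meets the threshold."""
    return {other for other in docs
            if jaccard(docs[doc_id], docs[other]) >= THRESHOLD}

def link(d1, d2):
    """Number of common neighbors: a global measure of closeness."""
    return len(neighbors(d1) & neighbors(d2))
```

Note that `d2` and `d3` fall below the pairwise threshold yet still share a common neighbor, which is exactly the kind of global information pairwise similarity alone misses.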
The Genetic Algorithm operators are selection, crossover, and mutation. A simultaneous mutation operator and a ranked mutation rate [5] for document clustering with a GA work significantly better at determining the exact number of clusters and extend the search directions, avoiding convergence to local optima; the GA performs several mutation operators simultaneously to produce new generations. A new clustering algorithm for discovering and describing the topics comprising a text collection [6] provides a parameter-less method for discovering the topics while attaching suitable descriptions to the identified topics. Research in data stream clustering uses a feature mining method called the Gaussian Mixture Model Genetic Algorithm [7], a hybrid approach in which a Gaussian mixture model is extended. Data stream clustering with probability density requires only the newly arrived data instead of the historical data. Using the random split and merge operations of the GA, the number of Gaussian clusters and the parameters of each Gaussian component are determined, and a function reduces
the effect of bad clusters on the clustering results. Robustness and accuracy of the cluster counts are improved, saving memory and run time. Clustering with multiple views [8], where the same instances have multiple representations, partitions a dataset into groups: the information available from all points of view is exploited, improving on the clustering result obtained from a single representation. In multiview algorithms the views are usually considered equally important, but a poor-quality view may lead to wrong cluster assignments. To handle this problem, a method based on exemplar-based mixture models was proposed: a multiview clustering algorithm that trains a weighted multiview convex mixture model, associating a weight with each view and learning these weights automatically. When clustering and ranking are integrated in sequence, the interaction between them is unavoidably ignored. It is well recognized that a document set often covers a number of topic themes, with each theme represented by a cluster of highly related sentences; the sentences in an important theme cluster are generally considered more significant than the themes themselves. To address this, a spectral analysis approach to document summarization [9] was introduced which clusters and ranks the sentences simultaneously. In spectral clustering, a similarity graph is generated and the graph spectrum is used to find the best cut. This method requires a vast matrix for the similarity graph representation, which results in high memory consumption, a serious constraint. A genetic graph-based clustering algorithm maintains the quality of the resulting clusters while improving memory usage: multiple objectives are set for graph-based clustering, and genetic algorithms produce optimal results [10].
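Multi-objective clustering formulations such as those above typically trade off two measures: within-cluster compactness (to be minimized) and between-cluster separation (to be maximized). A minimal sketch of the two objectives, on hypothetical 2-D data:

```python
# Hypothetical partition of 2-D points into two clusters.
clusters = {
    0: [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    1: [(8.0, 8.0), (9.0, 8.0), (8.0, 9.0)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def within_cluster(clusters):
    """Average distance of points to their own centroid (compactness)."""
    total, n = 0.0, 0
    for points in clusters.values():
        c = centroid(points)
        total += sum(dist(p, c) for p in points)
        n += len(points)
    return total / n

def between_cluster(clusters):
    """Minimum pairwise distance between cluster centroids (separation)."""
    cents = [centroid(p) for p in clusters.values()]
    return min(dist(a, b) for i, a in enumerate(cents)
               for b in cents[i + 1:])
```

A multi-objective GA would keep candidate partitions that are not dominated on either objective, rather than collapsing the two into a single score.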
Multi-objective algorithms manage a reduced version of the similarity graph using an evolutionary approach, and improved clustering results are observed on synthetic and real-world datasets compared with existing clustering methods. A genetic algorithm with multiple objective functions and a K-clustering method has also been studied, where K, the number of clusters, is known in advance. Suitable clusters and cluster modes are found using the search power of Genetic Algorithms, simultaneously optimizing the within-cluster and between-cluster distances [11]. Clustering of microarray data [12] finds differentially expressed genes from two-sample microarray data. To perform two-way clustering, the concept of fuzzy membership is used, transforming the hard adaptive subspace iteration algorithm into a fuzzy adaptive subspace iteration algorithm. This approach assigns a relevance value to the genes associated with each cluster, and gene clusters are ranked by how well they support correct classification. An evolutionary k-prototype algorithm is an evolutionary clustering approach for mixed-type data [13]: the global search ability of the evolutionary algorithm helps overcome the flaws of k-prototype, which is applied as a local search approach under the control of the evolutionary framework and is found to be more robust, generating much better results. Clustering using the Hilbert-Schmidt independence criterion [14] maximizes the dependency between cluster labels and data observations, utilizing structural information about the cluster results during the clustering process. The selection of the loss
function is important in supervised learning with structured outputs; this approach considers the dependency of the output and the loss function together, along with new forms of partition matrix construction. K-Means is one of the most popular data partitioning algorithms and is applied in many practical image processing applications such as image segmentation, color clustering, and vector quantization [15]. For embedded systems, a hierarchical variant of k-means was proposed to handle large numbers of clusters, using binary tree traversal and distance measures with ten processing elements to compute the clusters. Clustering text documents with non-negative matrix factorization overcomes several clustering issues [16]; the initial parameter values of the factorization are used to produce the clustering results. Clusters of arbitrary shape are correctly detected by spatial density-based clustering, which is robust and unaffected by the data distribution; even when the search space contains outliers, it yields accurate clusters. A new parameter-free genetic clustering algorithm that encodes clustering solutions [17] relies on density-based clustering parameters: several density-based attributes define the genotype, and the gene position in each clustering solution plays a vital role in recovering the encoded partition. Table 1 gives an overview of this analysis.
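Since several of the surveyed methods build on K-Means, a minimal Lloyd's-style iteration on toy 1-D data may help fix ideas; the data and naive initialization below are illustrative, not the hardware or hierarchical variants described above.

```python
def kmeans(data, k, iters=20):
    """Plain Lloyd's algorithm on 1-D data."""
    centroids = list(data[:k])              # naive initialization
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        groups = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            groups[nearest].append(x)
        # Update step: move each centroid to the mean of its group.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = sum(g) / len(g)
    return centroids, groups

centroids, groups = kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2)
```

The sensitivity of this update loop to its initialization is precisely the weakness that the evolutionary hybrids surveyed above aim to address.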

Table 1 Overview of analysis of evolutionary approaches for clustering

| Approach | Author and publication | Constraints | Remarks |
|---|---|---|---|
| Correlation clustering based on GA for document clustering | Zhenya Zhang et al., 2008, IEEE | Clustering precision as a measure | Estimated the performance of a clustering division and data correlation |
| Document clustering based on GA with simultaneous mutation and ranked mutation rate | K. Premalatha et al., 2009, IEEE | Adapts the mutation rate on the chromosomes based on the fitness rank of the earlier population | Several mutation operators are used concurrently to produce the next generation |
| Self-organized GA for document clustering | Wei Song et al., 2008, IEEE | Semantic similarity measure for clustering | Effectively evolves clusters in comparison with the k-means strategy |
| Multi-document automatic abstracting based on text clustering and semantic analysis | Qinglin Guo et al., 2009, Elsevier | Twice-word segmentation algorithm based on the title and the first word in a paragraph | Automatic abstraction of multi-documents |
| Text document clustering based on neighbours | Congnan Luo et al., 2009, Elsevier | Neighbours and links used for cluster initialization | Neighbours and links represent global information for similarity identification |
| Multiple view clustering using a weighted combination of exemplar-based mixture models | Grigorios F. Tzortzis et al., 2010, IEEE | Exemplar-based mixture model for clustering | A weighted multiview convex mixture model associates a weight with each view and learns these weights automatically |
| Data stream clustering based on Gaussian mixture model genetic algorithm | GAO Ming-ming et al., 2010, IEEE | Random split and merge operations of GA to determine Gaussian clusters and parameters | Threshold value is decided by the function to reduce the effect of bad clusters on the clustering results |
| A document clustering algorithm for discovering and describing topics | Henry Anaya-Sánchez et al., 2010, Elsevier | A parameter-less method for discovering the topics | Topics are identified from highly probable term pairs |
| A spectral analysis approach to document summarization | Xiaoyan Cai et al., 2011, Elsevier | Clustering of only highly related sentences | Spectral analysis for simultaneous clustering and ranking of sentences |
| A two-stage genetic algorithm for automatic clustering | Hong He et al., 2012, Elsevier | Two-stage selection and mutation operations | Automatically determines the proper number of clusters and partitions |
| Clustering data sets with categorical features using a multi-objective GA | Dutta, D., et al., 2012, IEEE | K-clustering method based on a real-coded multiobjective GA | Simultaneous optimization of intra-cluster and inter-cluster distances |
| Variable density based genetic clustering | Sabau, A.S., et al., 2012, IEEE | Parameter-free genetic clustering method | Every object has a single assignment status; once assigned, it is not attracted to further clusters |
| Automatic clustering in a multi-objective framework | S. Saha, Sangmitra B., et al., 2013, Elsevier | Effective for point-symmetry clusters and well-separated clusters | Multi-objective genetic clustering that automatically partitions the data |
| Automatic abstracting of multi-documents using clustering and semantic analysis | Qinglin Guo et al., 2009, Elsevier | Segmentation method with twice-word based on the title and the initial word in a paragraph | Automatic abstraction of multi-documents |

3 Evaluation of Related Research

Data clustering remains a challenging task due to its unsupervised nature: structural characteristics, the shape of clusters, and the number of clusters are difficult to determine without supervision. Data clustering with a self-organized GA successfully evolves clusters compared with traditional clustering algorithms, while correlation clustering helps estimate performance in a complicated search space. Mutation is one of the most important GA operators; simultaneous mutation with a ranked mutation rate works best for producing the next generation in the genetic process. For data stream clustering, an appropriate mathematical model may reduce bad clusters, and a specific threshold value for clustering must be identified. Document clustering requires a strategy for discovering and describing topics, and spectral analysis for document summarization is an important area of research. Clustering can be implemented with multiple objectives such as compactness, connectedness, and similarity; real-world problems require multi-objective clustering. Cluster optimization with strong robustness and global optimization can be achieved by a fuzzy clustering approach. Swarm intelligence algorithms for data clustering yield optimal solutions but converge at an early stage, whereas hybridizing a clustering algorithm with a bio-inspired algorithm results in more accurate clustering. Genetic Algorithms are proven to produce optimal results, but their computational time requirement is high, especially for large datasets; hence, a parallel or distributed architecture is required for Genetic Algorithms to process large-scale data. In the future, these algorithms may be extended and evaluated on larger datasets in a distributed environment. Such a system will be very useful for Big Data analysis.

4 Conclusion

This article presents an overview of the literature on data clustering using evolutionary approaches. The aim is to put the various ideas in proper perspective while identifying advances on data clustering issues. The survey offers a critical analysis of existing approaches and summarizes them with due comparison. Further research in data clustering should build on this review and propose optimization algorithms that give better results in particular situations; it may focus on producing optimized clusters and thereby improving cluster quality. This survey will help the research community investigate and produce advanced research.

References

1. Zhenya Zhang, Hongmei Cheng, Wanli Chen, Shuguang Zhang and Qiansheng Fang, "Correlation Clustering Based on Genetic Algorithm for Documents Clustering", IEEE, 2008, pp. 3193–3198.
2. Wei Song, Soon Cheol Park, "An Improved GA for Document Clustering with Semantic Similarity Measure", Fourth International IEEE Conference on Natural Computation, 2008, pp. 536–540.


3. Congnan Luo, Yanjun Li, Soon M. Chung, "Text document clustering based on neighbors", Data & Knowledge Engineering 68 (2009) 1271–1288, Elsevier.
4. Qinglin Guo, Ming Zhang, "Multi-documents Automatic Abstracting based on text clustering and semantic analysis", Knowledge-Based Systems 22 (2009) 482–485, Elsevier.
5. K. Premalatha, A. M. Natarajan, "Genetic Algorithm for Document Clustering with Simultaneous and Ranked Mutation", Modern Applied Science, Vol. 3, No. 2, Feb. 2009, pp. 75–82.
6. Henry Anaya-Sánchez, Aurora Pons-Porrata, Rafael Berlanga-Llavori, "A document clustering algorithm for discovering and describing topics", Pattern Recognition Letters 31 (2010) 502–510, Elsevier.
7. GAO Ming-Ming, Chang Tai-Hua, GAO Xiang-Xiang, "Research in Data Stream Clustering based on Gaussian Mixture Model Genetic Algorithm", 978-1-4244-7618-3, IEEE, 2010.
8. Grigorios F. Tzortzis, Aristidis C. Likas, "Multiple View Clustering Using a Weighted Combination of Exemplar-Based Mixture Models", IEEE Transactions on Neural Networks, Vol. 21, No. 12, December 2010.
9. Xiaoyan Cai, Wenjie Li, "A spectral analysis approach to document summarization: Clustering and ranking sentences simultaneously", Information Sciences 181 (2011) 3816–3827, Elsevier.
10. Menendez, H.D., Barrero, D.F., Camacho, D., "A Multi-Objective Genetic Graph-Based Clustering Algorithm with Memory Optimization", IEEE Congress on Evolutionary Computation (CEC), 2013.
11. Dutta, D., Dutta, P., Sil, J., "Clustering data set with categorical feature using multi-objective genetic algorithm", International Conference on Data Science & Engineering (ICDSE), 2012.
12. Jahangheer Shaik, Mohammed Yeasin, "Fuzzy-Adaptive-Subspace-Iteration-Based Two-Way Clustering of Microarray Data", IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 6, No. 2, April–June 2009.
13. Zhi Zheng, Maoguo Gong, Jingjing Ma, Licheng Jiao, "Unsupervised evolutionary clustering algorithm for mixed-type data", IEEE Congress on Evolutionary Computation (CEC), 2010.
14. Wenliang Zhong, Weike Pan, James T. Kwok, Ivor W. Tsang, "Incorporating the Loss Function into Discriminative Clustering of Structured Outputs", IEEE Transactions on Neural Networks, Vol. 21, No. 10, October 2010.
15. Tse-Wei Chen, Shao-Yi Chien, "Flexible Hardware Architecture of Hierarchical K-Means Clustering for Large Cluster Number", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 19, No. 8, August 2011.
16. Xiaodi Huang, Xiaodong Zheng, Wei Yuan, Fei Wang, Shanfeng Zhu, "Enhanced clustering of biomedical documents using ensemble non-negative matrix factorization", Information Sciences 181 (2011) 2293–2302, Elsevier.
17. Sabau, A.S., "Variable Density Based Genetic Clustering", International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), IEEE, 2012.

Bringing Digital Transformation from a Traditional RDBMS Centric Solution to a Big Data Platform with Azure Data Lake Store

Ekta Maini, Bondu Venkateswarlu, and Arbind Gupta

1 Introduction

Enormous, immense, and complex data sets are referred to as Big Data. There are innumerable sources of data: huge volumes are created from several heterogeneous sources, and this data can be unstructured, semi-structured, or structured. Big Data is characterized by features such as Volume, Velocity, Variety, Veracity, and Value [1]. Data of the order of exabytes has been generated and is still growing, and it is desirable to extract information from unstructured data as well. However, managing big data is a useless and time-consuming effort if meaningful insights cannot be drawn from it. The Enterprise Data Warehouse has a few shortcomings that may affect its performance with big data. In contrast to an Enterprise Data Warehouse, a data lake stores structured, unstructured, and semi-structured data in its native format: a larger volume and variety of data is gathered and worked upon effectively, without the rigidity and overhead of traditional data warehouse architectures. As the volume, velocity, and variety of data grow, organizations increasingly rely on data lakes for data storage, governance, blending, and analysis [2]. Data lakes offer powerful capabilities: the ability to retain all data, the ability to store any data type, and faster insights. They are the best solution for organizations that need insights from a variety of data types; the healthcare sector is one important domain that can benefit from data lakes [3]. Figure 1 describes the fundamental difference between a data warehouse and a data lake.

E. Maini () · B. Venkateswarlu Dayananda Sagar University, Bengaluru, Karnataka, India e-mail: [email protected]; [email protected] A. Gupta Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_58


Fig. 1 Data warehouse vs Data Lake

The remainder of this paper is organized as follows. Section 2 points out the shortcomings of present-day technology that can be overcome by introducing the concept of a data lake. Section 3 discusses the evolution history and the architecture of the data lake.

2 Need of Data Lake

Traditional Enterprise Data Warehouses (EDW) and data marts suffer from a few shortcomings. They require planning, design, modeling, and development before data is made visible to end-users [4], a process that may take weeks. During this period, key elements or requirements of the business may change, requiring re-design and thus delaying time-to-value. EDW rigidity often tempts end-users to build their own solutions using spreadsheets, local databases, and other proprietary tools, which inevitably creates data silos, shadow IT, and a fragmented data landscape [5]. Furthermore, the scarcity of cataloged business data resources limits the data the organization can use to answer business questions, so decision makers act on incomplete information. These limitations can easily be overcome by a data lake, which is gaining a lot of popularity in recent times. Its biggest advantage is that data is stored in its raw format: there is no need for ETL (Extract, Transform, and Load) as in a data warehouse, and no fixed schema is defined beforehand. A metadata repository records complete information about the data. The data lake provides an ideal solution for dealing with structured, unstructured, and semi-structured data, and it additionally provides advanced analytics to produce actionable insights [6]. Data obtained from multiple sources in various formats is fused in the data lake, after which it becomes easy to store, manage, and analyze. Figure 2 illustrates the concept of a data lake. The transition to a Data Lake Store provides flexibility: there is no need to worry about whether the infrastructure can accommodate the ever-increasing amounts of data that must be stored and processed. It also enables new data assets to be onboarded quickly: what used to take weeks now takes days.
Moreover, simpler support is provided for large-scale encryption at rest and in transit. Out-of-the-box support for enterprise security and compliance is also ensured.


Fig. 2 Conceptual representation of a data lake

3 An Overview of Azure Data Lake

Azure Data Lake is a service hosted in Microsoft's public cloud, Azure. It ensures agile and scalable data storage and analytics. Its foundation lies in COSMOS, Microsoft's internal big data management technology; U-SQL, the query language featured in ADLS, was built on the SCOPE query engine. As discussed already, huge amounts of structured, unstructured, or semi-structured data can be stored in ADL. Data can be collected from a variety of sources such as social media, mobile devices, transducers, and sensors. A single Azure Data Lake Store account can store immense amounts of data: it is possible to store trillions of files with sizes of the order of petabytes. In ADL, resource management is governed using Apache YARN [7], and all applications that use HDFS (the Hadoop Distributed File System) are supported. The following subsections discuss the architecture of a data lake.

3.1 Data Lake Structure

The performance of a data lake depends on its design. ADLS is designed especially for large-scale data analytics; it can be built on HDFS or Azure Blob Store, and even SQL Server can be used, depending on the particular constraints and usage. A poor design can badly affect performance and can lead to failure, so a good method of organizing the gathered data is needed to ensure easy processing. A good design shall ensure reduced ETL development time [8]. Emphasis should be placed on a design hierarchy ensuring smooth flow of data from one stage to the next. Figure 3 provides a detailed view of Azure Data Lake.

Fig. 3 Architecture of Azure Data Lake

3.1.1 Raw Zone

Data is landed and stored in its raw format in this zone [9]. The data is kept here till further operations are carried out. Data is tagged and stored in Azure Data Catalog. With tagging, business analysts and subject matter experts can easily locate the data in Azure.

3.1.2 Stage Zone

Data is landed here for further operational tasks. It is prepared for processing so that it can be comfortably loaded into the next stage, which can be either the curated zone or an analytical store.

3.1.3 Curated Zone

The curated zone is the most reliable layer of data. After thorough processing, the data is ready for final storage [10]; U-SQL or Hive can be used at this stage, and there may be a need to extract the data back to a file. Security is a key issue: in the absence of security, the data lake may become just a collection of unorganized data, a data swamp. Users can use Azure Active Directory to control enterprise security and, in addition, can control security features specific to authorized access to Azure Data Lake. Security capabilities such as authentication, auditing, and encryption are provided in Azure Data Lake; security is ensured both in motion and at rest in ADLS, which manages encryption and decryption.
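The flow through the three zones can be pictured with a small sketch; the record format, cleansing rule, and zone variables below are hypothetical illustrations, not an Azure API.

```python
# Raw zone: data lands as-is, including messy or incomplete records.
raw_zone = [
    {"patient": " Alice ", "reading": "98.6"},
    {"patient": "Bob", "reading": None},        # incomplete record
]

def to_stage(record):
    """Cleanse and standardize a raw record; drop records missing values."""
    if record["reading"] is None:
        return None
    return {"patient": record["patient"].strip(),
            "reading": float(record["reading"])}

# Stage zone: cleansed, standardized records ready for loading.
stage_zone = [r for r in (to_stage(rec) for rec in raw_zone) if r is not None]

# Curated zone: the final, query-ready layer.
curated_zone = sorted(stage_zone, key=lambda r: r["patient"])
```

The point of the layering is that each zone only ever receives data that has passed the previous stage's checks, so downstream consumers can trust the curated layer.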

3.2 Data Lake Operations

A metadata strategy should already be in place before data ingestion planning begins. The strategy should define what data is going to be captured at ingestion; this can be recorded in Azure Data Catalog, HCatalog, or a custom metadata catalog that supports data integration automation as well as data exploration and discovery. Depending on the scenario, it may be more beneficial to use HDFS-specific file formats such as AVRO.

Once ingestion has completed, the next step is to process the data while it resides in the data lake [11]; this can involve data cleansing, standardization, and structuring. Within Azure, two groups of tools are available for working with Azure Data Lake: Azure Data Services and Azure HDInsight. In most cases, Azure Data Services will be able to solve the lower 80% of data lake requirements, reserving HDInsight for the upper 20% and more complex situations. Azure Data Services is about bringing big data to the masses, allowing organizations to acquire and process troves of data as well as gain new insights from existing data assets through machine learning, artificial intelligence, and advanced analytics. A variety of tools are available, e.g. Blob Store, Data Factory, Data Lake Analytics, Data Lake Store, Event Hubs, Hadoop, Hive/HCatalog, IoT Hub, Kafka, Pig, PolyBase, Power BI, and R Server.

When processing data within Azure Data Lake Store (ADLS), it is customary practice to leverage a few larger files rather than a high quantity of smaller files. U-SQL and Hive process data in Extents and Vertices. An Extent is essentially a piece of a file being processed, and a Vertex is a unit of work; Vertices are packages of work split across the various compute resources within ADLS, and partitioning distributes how much work a Vertex must do.

3.3 Data Lake Discovery Metadata-capture and tagging are closely related and support data exploration and governance within the business [12]. Metadata is the data about data, such as the source that the data originated from, how the data are structured and formatted, and when it was captured. Tagging is used to understand the inventory of a Data Lake through the notion of tags, like those attached to social media and blog posts. Consider a data lake in a healthcare organization is housing data from multiple electronic medical record systems. You might have a tagging strategy of


E. Maini et al.

EMR > System Name > Patient Encounters (the dataset). This allows individuals throughout the organization to search for EMR system data by tag.
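A tag hierarchy like the one above can be queried very simply. A hedged sketch, where the catalog contents and the system names are purely hypothetical examples:

```python
def find_by_tag(catalog, *tags):
    """Return dataset names whose tag path contains all given tags.
    `catalog` maps dataset name -> tag path, e.g. the strategy
    'EMR > System Name > Patient Encounters' split into a list."""
    hits = []
    for name, tag_path in catalog.items():
        if all(t in tag_path for t in tags):
            hits.append(name)
    return sorted(hits)

# Hypothetical catalog for a healthcare data lake:
catalog = {
    "encounters_2019": ["EMR", "Epic", "Patient Encounters"],
    "labs_2019":       ["EMR", "Cerner", "Lab Results"],
    "claims_2019":     ["Billing", "Claims"],
}
```

Searching by the top-level tag (`find_by_tag(catalog, "EMR")`) returns both EMR datasets, while adding a system-name tag narrows the result to one.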

4 Conclusion The limitations of traditional enterprise data warehouses and data marts can be easily overcome by introducing the concept of data lakes. Data lakes are emerging as a valuable tool for organizations seeking flexibility and agility in data operations while retaining governance. Azure Data Lake permits organizations to harness data without traditional database rigidity and overhead, drastically reducing time-to-value.

References
1. Anuradha J, "A brief introduction on Big Data 5Vs characteristics and Hadoop technology", Procedia Computer Science 48 (2015) 319–324, Elsevier.
2. Jordan Anderson, Sean Forgatch, "Architecting Azure Data Lake", white paper, Feb. 2018.
3. Ekta Maini, Bondu Venkatesvarlu, "Applying machine learning algorithms to develop universal cardiovascular disease prediction system", Springer conference, 2018.
4. Raghu Ramakrishnan, Baskar Sridharan, John R. Douceur, Pavan Kasturi, Balaji Krishnamachari-Sampath, Karthick Krishnamoorthy, Peng Li, Mitica Manu, Spiro Michaylov, Rogério Ramos, Neil Sharman, Zee Xu, Youssef Barakat, Chris Douglas, Richard Draves, Shrikant S. Naidu, Shankar Shastry, Atul Sikaria, Simon Sun, Ramarathnam Venkatesan, "Azure Data Lake Store: a hyperscale distributed file service for big data analytics", ACM, DOI: https://doi.org/10.1145/3035918.3056100.
5. Wang, H., Zhang, Z., Taleb, T.: Special issue on security and privacy of IoT. World Wide Web (2017) 1–6.
6. R. Chaiken, B. Jenkins, P.-A. Larson, B. Ramsey, D. Shakib, S. Weaver, and J. Zhou. SCOPE: easy and efficient parallel processing of massive data sets. Proc. VLDB Endow. 1, 2 (August 2008), 1265–1276.
7. K. Shvachko, H. Kuang, S. Radia, and R. Chansler. The Hadoop distributed file system. Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), pages 1–10, 2010.
8. Scott Klein. "IoT Solutions in Microsoft's Azure IoT Suite", Springer Nature, 2017.
9. azure.microsoft.com
10. Chun Chen. "Providing Scalable Database Services on the Cloud", Lecture Notes in Computer Science, 2010.

Bringing Digital Transformation from a Traditional RDBMS Centric Solution. . .


11. Yuanzhu Zhan, Kim Hua Tan. "An analytic infrastructure for harvesting big data to enhance supply chain performance", European Journal of Operational Research, 2018.
12. "Emerging Technologies in Data Mining and Information Security", Springer Nature America, Inc., 2019.

Smart Assist for Alzheimer’s Patients and Elderly People B. Swasthik, H. N. Srihari, M. K. Vinay Kumar, and R. Shashidhar

1 Introduction Alzheimer's is a disease that destroys human memory and also affects other mental functions. It is one of the major problems faced by people above the age of 60. The symptoms vary widely; in the primary stage people notice an inability to remember that is severe enough to affect their ability to function at home or in the workplace, or to enjoy their hobbies. Alzheimer's is an irreversible, progressive brain disease that eventually destroys memory and thinking skills and, finally, the ability to carry out the simplest of tasks. People with Alzheimer's have trouble doing routine tasks, for example driving a vehicle, cooking or paying bills. The primary symptoms are not severe to start with, but the situation worsens as time passes. Surveys show that Alzheimer's affects 11% of those aged 65 and older and one-third of those aged 85 and older. In the Indian subcontinent, more than 4,000,000 people have some form of dementia [1]. The main cause of the disease is severe damage to the part of the brain called the hippocampus, which plays a major role in routine activities. As time progresses, the memory loss due to Alzheimer's disease increases quickly. The patient may also lose items in and around his own house, have difficulty finding the right word while communicating or struggle to follow conversations and repeat the same words, be unable to remember someone's name, get lost in a familiar place or on a familiar journey, forget appointments or anniversary dates, have problems judging distances or visualizing objects in three dimensions, as well as in making deductions, solving problems and handling a sequence of tasks (say, cooking), and suffer loss of balance

B. Swasthik () · H. N. Srihari · M. K. V. Kumar · R. Shashidhar Department of Electronics and Communication, JSS Science and Technology University, Sri Jayachamarajendra College of Engineering, Mysuru, Karnataka, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_59



B. Swasthik et al.

(becoming confused or losing track of the day or date), etc. Patients often have mood swings; they may also become nervous, irritable or saddened. In later stages the problems with memory loss, conversation, logical judgement and body balance become severe, so patients need more routine support from their attendants. Some patients may suffer from delusions and hallucinations, which can put their lives in jeopardy without proper care. Two kinds of approach have been taken to the problems of Alzheimer's patients: (1) a medical approach and (2) a non-medical approach. In the former, medications are employed, and many medicaments have been developed to provide conjectural treatment for Alzheimer's; Donepezil, Galantamine and Memantine are a few of the drugs [2]. The latter approach targets the activities of daily living; cognitive function; depression, sleep-wake reversal, mania and agitation; nursing home placement; mortality; and treatment-related adverse events [3]. The approaches to help patients involve a few further techniques. Prospective memory (PM) denotes the ability to remember and perform a desired action at a certain point in the future; taking medication, going to the doctor for an appointment or switching off the stove after cooking are examples of prospective memory. It contrasts with retrospective memory, the ability to recall information or events from the past, for example recalling the items in a grocery shopping list [4, 5]. The study of prospective memory distinguishes between time-based and event-based prospective memory: time-based prospective memory requires the participant to complete the task within a stipulated time, while event-based prospective memory involves a task prompted by an external trigger [6].
The results from the MRC-CFAS study, which included around 12,000 elderly participants with prospective memory impairment, indicated that event-based prospective memory performance was notably related to age: only 54% of the subjects were able to perform the task successfully. High rates of prospective memory impairment were also found in people with very mild and early-onset dementia; in these groups, only 8% of the subjects with dementia were able to perform prospective memory tests perfectly [7]. In early research, the TeleMinder TBC system (an automated phone messaging system) was used to increase attendance at a tuberculosis clinic by reminding patients to visit regularly; this technique increased attendance among around 3158 patients by almost 31% [8]. Another study of the TeleMinder, with a sample of around 2000 patients of both genders across a wide age range, found that attendance increased from approximately 50% to 60% [9]. The earliest tracking devices alerted people located within a fixed radius. An attack warning system was designed in which emergency vehicles travel to the destination quickly and safely, with the positions of the vehicles indicated by different visual characteristics [10]. Another proposed method located wireless devices and detected whether they entered a 2-D or 3-D zone, alerting a responsible caretaker whenever they moved away from the geographical area [11]. GPS technology is used in a wide range of applications to trace and monitor things; a low-cost automobile tracker uses GPS and GPRS services. One work uses GPS, a controller unit and a GSM phone or module to track kids driving a vehicle: a monitoring system accepts a message from the device when it is found outside a virtual pre-fixed radius, the map location is calculated, and the exact location is sent to the caretaker's phone or supported device [12]. There is also a proposal for a smartphone-based tracking band for people affected by dementia, autism and Alzheimer's [13]. The architecture of this locator consists of GPS, GSM and a controller unit; the GSM module receives an SMS whose map coordinates are sent to the monitoring application, and the message includes the wearer's location and virtual-radius entry and exit information [14]. A prototype providing real-time, mobile, indoor/outdoor location tracking for medical staff and patients has been developed by integrating GPS and UWB (ultra-wideband), a protocol that locates a person or thing with centimetre accuracy and can give an accurate position regardless of whether the person is indoors or outdoors [15]. This work is an idea to implement technological support for patients suffering from Alzheimer's. The prototype can be designed to keep track of the present location, the time and other important data required for the present situation. It can operate in both indoor and outdoor modes: in indoor mode it can be used as a pill reminder, while in outdoor mode it can provide guidance at regular intervals of time.
The module can also be designed to have a panic button to help out in emergency situations.

2 Prototype Design The project is built with the NodeMCU as the base component. An RTC module is used to read the time; when a scheduled time is reached, the NodeMCU fetches the corresponding event from the database and displays it on the screen. The NodeMCU also accesses the IFTTT server to obtain the location and send it to the caretaker; this event is likewise triggered when the patient presses the SOS button, and a fall sensor fires whenever the person faints so that medical attention can be obtained (Fig. 1). Whenever the person faints, his location is sent to the caretaker, and a panic/emergency/SOS button is provided in the module which, when pressed, sends the caretaker an emergency SMS (Fig. 2). The assistant consists of an event reminder, a location detector, a fall sensor and a panic (SOS) button for emergencies. It is a wearable module with a display, a built-in power supply and all the modules required for the functions described above. The patient's caretaker is responsible for the input data; that is, the times at which the pills have to be taken must be provided by him initially. This information is stored in the database, and the reminders are set from it. The RTC module keeps running and the time is displayed continuously on the LCD; when the alarm time arrives, the NodeMCU triggers the event and the LCD displays the reminder, which is fetched from the server. The module also has a location-providing facility: every 15 minutes the patient's location is updated and sent to the caretaker's phone as an SMS. The NodeMCU obtains the location through IFTTT and keeps updating it, sending a message to the caretaker's phone number when required. A fall sensor has also been integrated; whenever the person faints, his location is sent to the caretaker, and the SOS button, when pressed, sends the caretaker an emergency SMS.

Fig. 1 Block diagram of the proposed system

Fig. 2 Design flow
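The trigger logic described here (fall sensor, SOS button, and the 15-minute location update) can be sketched as a pure decision function. This is an illustrative sketch, not the authors' firmware; the event names and payload strings are our assumptions:

```python
def decide_alerts(now_minute, fall_detected, sos_pressed, last_update_minute):
    """Decide which messages the module should send for the current
    sensor state. Returns a list of (event, payload) pairs."""
    alerts = []
    if fall_detected:
        alerts.append(("fall", "patient fainted - location attached"))
    if sos_pressed:
        alerts.append(("sos", "emergency SOS pressed - location attached"))
    # periodic location update every 15 minutes
    if now_minute - last_update_minute >= 15:
        alerts.append(("location", "periodic location update"))
    return alerts
```

Keeping this decision separate from the network code makes the firmware behaviour easy to test without hardware.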


3 Result and Analysis The following are the results obtained for the implemented idea: the triggered event, the received message and the obtained location.

3.1 Triggered Event The triggered message reminds the patient of daily routines. The main intention was to create a watch that keeps displaying the time; at a specific time, if the patient has a routine performed on a daily basis, that routine is displayed along with the time, accompanied by a buzzer, so that the patient is reminded of it. To display the time, the real-time clock module was interfaced with the NodeMCU. This clock runs continuously in the background. For a particular time, we created a function to trigger a particular event. The series of events can also be obtained from a temporary database which a caretaker can use to store, modify or update the daily routine table along with the specified times. This function fetches the time and displays the routine due at that time. For example, if the patient has to take some medicine for a health issue at 7:06, the routine is displayed along with the time, as shown in Fig. 3. In this fashion a series of daily events is maintained for display. Fig. 3 Triggered event
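The time-keyed reminder lookup can be sketched as a simple table; the schedule entries below are hypothetical:

```python
def due_reminders(schedule, now_hhmm):
    """Return the routines due at the current time. `schedule` maps
    'HH:MM' strings (as stored by the caretaker) to routine text."""
    return [text for hhmm, text in schedule.items() if hhmm == now_hhmm]

# Hypothetical daily routine table entered by the caretaker:
schedule = {
    "07:06": "Take blood-pressure medicine",
    "13:00": "Lunch",
}
```

In the device, the current `HH:MM` string would come from the RTC module, and any non-empty result would be shown on the LCD with a buzzer alert.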


3.2 Received Message In order to send the location to the caretaker, a message service is required. In this work the IFTTT Maker service was used to accomplish the task. IFTTT (If This Then That) is a platform where many tasks (such as sending a message or turning a device on or off) can be accomplished by triggering events (such as input from Google Assistant, or a web request). To achieve this, an applet has to be created, and a key is provided along with the applet. In this project an applet is created such that when a web request is made, a message is automatically sent to the registered mobile number. The NodeMCU sends the web request, with the location link given as the argument of the request (Fig. 4).
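A hedged sketch of that web request, assuming the standard IFTTT Maker webhook URL pattern; the event name, key and location link are placeholders, not values from the project:

```python
import json
from urllib import request

MAKER_URL = "https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def build_ifttt_request(event, key, location_link):
    """Build the Maker-service web request that triggers the applet.
    The location link travels as `value1`, which the SMS template
    can embed in the outgoing message."""
    url = MAKER_URL.format(event=event, key=key)
    body = json.dumps({"value1": location_link}).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Placeholder event name, key and coordinates:
req = build_ifttt_request("patient_location", "YOUR_KEY",
                          "https://maps.google.com/?q=12.31,76.61")
```

The request object would then be sent with `urllib.request.urlopen(req)`; on the NodeMCU itself the equivalent HTTP POST is made from the firmware.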

3.3 The Obtained Location For a patient suffering from Alzheimer's disease, it is very important for the caretaker to monitor the patient's location continuously. To achieve this, geolocation APIs are used in the project. APIs are application programming interfaces published by companies for other developers to use in their own applications. The geolocation API uses a trilateration algorithm: when a device is connected to the Internet, the list of access points to which it is connected is monitored continuously, and the signals from the nearest cell towers are also measured. Using these data, the exact location (latitude and longitude) is found from the overlap of the different coverage areas. An API key is issued to distinguish different users. In addition, a geocoding API is used to generate a link to that location. In this project the NodeMCU connects to an access point, and the MyLocation library, which uses the geolocation API, obtains the location. The API key is supplied along with the carrier network, mobile network code (MNC) and mobile country code (MCC) (Fig. 5).

Fig. 4 Received message

Fig. 5 The obtained location
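The request body for such a geolocation lookup can be sketched as follows. The field names follow the commonly documented Geolocation API request schema, and the carrier, MCC/MNC and tower readings below are illustrative assumptions, not values from the project:

```python
import json

GEOLOCATE_URL = "https://www.googleapis.com/geolocation/v1/geolocate?key={key}"

def build_geolocation_payload(mcc, mnc, carrier, cell_towers):
    """Build the JSON body for a geolocation-API style request from
    the carrier network, MCC/MNC, and nearby cell tower readings."""
    return json.dumps({
        "homeMobileCountryCode": mcc,
        "homeMobileNetworkCode": mnc,
        "radioType": "gsm",
        "carrier": carrier,
        "cellTowers": [
            {"cellId": t["cid"], "locationAreaCode": t["lac"],
             "mobileCountryCode": mcc, "mobileNetworkCode": mnc}
            for t in cell_towers
        ],
    })
```

The service responds with a latitude/longitude estimate and an accuracy radius, which a geocoding call can then turn into a map link for the SMS.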

4 Conclusions The research work addresses a social cause: it offers a solution to a serious problem among elderly people, namely Alzheimer's disease and other old-age problems. Patients suffering from short-term memory loss tend to get into trouble, and the work helps reduce that chance and supports them in times of need. It also helps them take their pills on time, which safeguards their health, and whenever they are in trouble the SOS switch comes to their rescue. The work in general helps in understanding the problems faced by the elderly and motivates better solutions. Although complete help cannot be assured, the research work tried to build something that will help them when they need it. The work also taught the authors various things beyond academics and improved their technical skills. Future work is aimed not only at innovation but also at overcoming a few minor shortcomings of the module. For example, the modules used in the prototype are bulky; with customized modules the product would be far more wearable. There is also provision to interface an audio module for voice instructions and pulse sensors to track heart rate. Geofencing could also be added so that the patient can be detected moving out of a certain area or boundary.

References
1. Robert J. Koester and David E. Stooksbury. "Behavioural profile of possible Alzheimer's disease patients in Virginia search and rescue incidents", 1995, article S1080603213800075.
2. Kelly Bethune, "Diagnosis and treatment of Alzheimer's disease: current challenges", 2010, thesis U12419052.
3. Institute for Quality and Efficiency in Health Care (IQWiG, Germany). Non-drug therapies in Alzheimer's disease; commission A05-19D. January 13, 2009.
4. Henry, J. D., MacLeod, M. S., Phillips, L. H. & Crawford, J. (2004). A meta-analytic review of prospective memory and ageing, 19, 27–39. Saloni Tanna, Priority Medicines for Europe and the World: "A Public Health Approach to Innovation", 1994, medicine docs Js16034e.
5. Einstein, G. O., McDaniel, M. A., Richardson, S. L., Guynn, M. J. & Cunfer, A. R. (1995). Ageing and prospective memory: examining the influences of self-initiated retrieval processes. Journal of Experimental Psychology: Learning, Memory and Cognition, 21, 996–1007. Evans, D., Funkenstein, H. H., Albert, M. S., et al. Prevalence of Alzheimer's disease in a community population of older persons. JAMA 1989; 262(18): 2551–6.
6. Huppert, F. A., Johnson, T., Nickson, J. (2000). High prevalence of prospective memory impairment in the elderly and in early-stage dementia: findings from a population-based study. Applied Cognitive Psychology, 14(1), S63–S81.
7. Leirer, V. O., Morrow, D. G., Tanke, E. D. & Pariante, G. M. (1991). Elders' non-adherence: its assessment and medication reminding by voice mail. The Gerontologist, 31(4), 514–520.
8. Tanke, E. D. & Morrow, D. G. (1993). Commercial cognitive/memory systems: a case study. Applied Cognitive Psychology, 7, 675–689.
9. Watzake, J. (1994). Personal emergency response systems: Canadian data on subscribers and alarms. In Gutman, G. & Wister, A. (Eds.), Progressive accommodations for seniors (pp. 147–166). Vancouver: Simon Fraser University Gerontology Research Centre Press.
10. D. Curran, J. Demmel and R. A. Fanshier. "Geo-fence with minimal false alarms". U.S. Patent no. 8,125,332, February 2012.
11. Lita, I., Cioc, I. B., Visan, D. A. "A new approach of automobile localization system using GPS and GSM/GPRS transmission".
12. Pankaj Verma, J. S. Bhatia. "Design and development of GPS-GSM based tracking system with Google Map based monitoring". International Journal of Computer Science, Engineering and Applications (IJCSEA) 3, no. 3 (2013).
13. Isha Goel, Dilip Kumar, "Design and implementation of Android based wearable smart locator band for people with autism, dementia, and Alzheimer", 9 December 2014.
14. J.-H. Liu, J. Chen, Y.-L. Wu, and P.-L. Wang, "AASMP - Android Application Server for Mobile Platforms", Proceedings of the IEEE 16th International Conference on Computational Science and Engineering (CSE '13), pp. 643–650, 2013.
15. Lijun Jiang, Republic Polytechnic, Singapore, Lim Nam Hoe, Lay Leong Loon. "Integrated UWB and GPS location sensing system in hospital environment". 2009.

An Unconstrained Rotation Invariant Approach for Document Skew Estimation and Correction H. N. Balachandra, K. Sanjay Nayak, C. Chakradhar Reddy, T. Shreekanth, and Shankaraiah

1 Introduction Optical Character Recognition (OCR) is a technology in which the text in an image is converted into machine-encoded text, which can then be processed by the computer. Before passing the image to the OCR, some pre-processing should be done, such as removing noise, increasing the contrast, and deskewing. Deskewing is the process of straightening the text lines in an image. If a skewed image is passed to the OCR without deskewing, the characters will not be recognized. Figure 1a renders an image with a skew of 30° clockwise and Fig. 1b one with a skew of 30° anti-clockwise; if either image is passed to the OCR, the output is data with no meaning in it. Figure 1c portrays the deskewed image; when this image is passed to the OCR, the output is the same text that is present in the image. Deskewing is thus one of the important pre-processing steps that must be performed before passing the image to the OCR to obtain the text output correctly. A number of skew estimation and deskewing techniques for document images have been proposed in the literature. They fall into two categories: spatial and frequency domain approaches. The following are frequency domain procedures for skew estimation and correction. Bo Yuan et al. proposed an approach to deskewing based on the Hough transform. The skew estimation relied on straight-line features, such as edges present in the image, rather than on the textual content. This model

H. N. Balachandra () · K. S. Nayak · C. C. Reddy · T. Shreekanth · Shankaraiah Department of Electronics and Communication, JSS Science and Technology University, Sri Jayachamarajendra College of Engineering, Mysuru, Karnataka, India e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_60


H. N. Balachandra et al.

Fig. 1 (a) Skewed image by +30◦ . (b) Skewed image by −30◦ . (c) Deskewed image

works if the input image has well-defined edges from the black bars around the pages, graphical inserts, and the separators of tables, columns or paragraphs [1]. Jonathan Fabrizio presented an algorithm for skew estimation of binary images in the frequency domain: the image is first pre-processed using KNN clustering, and the Fourier transform is then applied to the outlines of the convex hulls of the clustered regions. However, KNN clustering requires the parameter K (the number of nearest neighbours) and its computational cost is very high [2]. N. Nandini et al. proposed an algorithm based on connected component analysis and the Hough transform, with two approaches: a word centroid approach and a dilate-and-thin approach. Only printed documents containing text are considered; the method is restricted for images containing pictures [3]. M. Sarfraz et al. proposed an algorithm for deskewing using the Haar wavelet transform and principal component analysis (PCA). The image is first decomposed into detailed sub-images at various levels with the Haar wavelet transform, and PCA is used to estimate the orientation of the principal axis in the horizontal and vertical directions at each level. The output was accurate for Arabic fonts, which are connected, and English fonts, which are isolated in nature [4]. Mandip Kaur et al. proposed an algorithm based on the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT). The FFT is used to find the skew angle. DCT compression is first applied to the image to reduce computation time; the Fourier spectrum of the compressed image is obtained and divided into four quadrants, and the skew angle of each quadrant is computed. Finally the input image is rotated using bilinear interpolation.
Skew angles from −45° to 45° were considered [5]. Sargur N. Srihari et al. used the Hough transform method for deskewing; various problems faced in the Hough transform, such as aliasing and quantization, were discussed along with their solutions. The output image in some cases was found to be upside-down or right-side-up [6]. Xiaoyan Zhu presented an approach in which images with both textual and non-textual content are considered. The document image is divided into blocks of equal size and, using the Fourier transform, Support Vector Machines were applied to determine whether each block is textual or non-textual. The skew was determined for textual blocks only, by taking the standard deviation of the projection profile at various angles [7]. The following are the spatial domain approaches for skew estimation and correction. D. T. Nguyen et al. came up with a mathematical morphology operation for deskewing: noise, accents and large connected components are first filtered out as pre-processing, and a coarse estimation algorithm is then used to estimate a range of skew angles. The skew angle of every text line is considered to determine the skew angle of the entire document. The output was tested on 1080 images, of which 900 were Latin documents and 180 in other languages, with resolutions from 150 to 300 dpi and skew angles from −90° to 90° [8]. M. Sarfraz et al. proposed a technique based on histogram statistics and connected component analysis, comparing four deskewing methods: the projection profile technique, principal component analysis, peaks-and-valleys analysis, and connected component analysis. Input images with skew angles of 1–25° were considered, and the accuracy and time taken were noted for each; connected component analysis was found to be the most accurate [9]. Zhiyong Ju et al. came up with a skew detection algorithm based on a low pixel set of characters: the bottom pixels of some characters in a text line are detected first, because the number of ascenders is much larger than the number of descenders in English text, and the skew angle is then estimated from moment calculations. Document images with large skew angles as well as documents with few characters are handled, but inverted text and flipped images are not considered [10]. K. R. Arvind et al.
used a correction algorithm based on the horizontal projection profiles of image blocks, whose entropy was considered for deskewing. The output was tested on 100 document images containing English, Kannada and Chinese text scanned at 200 dpi, including signatures, logos and text paragraphs. Skew angles from +10° to −10° were considered and a precision of 0.1° was achieved [11]. Robert S. Caprari proposed an algorithm for up/down orientation determination: the text is divided into three parts, and the orientation is decided from the percentage of text in each part; horizontal and vertical line profiles are plotted to check whether the text is horizontal or vertical. The output was tested on 39 pages of 12-point and 10-point roman text, both clean and degraded by 10% impulse noise. The method was found ineffective for text in smaller font sizes and sans-serif fonts [12]. The existing techniques in the literature have certain limitations, such as constraints on font, font size, skew angle and image quality. The proposed system addresses these issues by deskewing the document image irrespective of its font style, font size and degree of rotation. The rest of the paper is divided into three sections: Sect. 2 elaborates on the proposed algorithm, Sect. 3 discusses the results obtained by evaluating the proposed algorithm, and Sect. 4 presents the conclusion and scope for future work.

2 Methodology This section presents the algorithm used for deskewing the image. There are four subsections which would explain the algorithm in detail.

2.1 Image Acquisition The RGB image of the document is acquired using the CamScanner application. The acquired document image is portrayed in Fig. 2a.

Fig. 2 (a) Input skewed image. (b) Initial deskewed image. (c) Output of detection of vertical text subsection. (d) Final deskewed image. (e) Output of Tesseract OCR


2.2 Initial Deskew The acquired RGB document image is converted to grayscale. The foreground and background colours are then swapped, and all pixels with a value greater than zero are set to 255 while zero-valued pixels are left unchanged. After this thresholding, the image contains only black or white pixels. Next, the coordinates of all the white pixels are collected, and a bounding box is computed from them. The angle of inclination from the base to the first side of the bounding box is computed, and the image is rotated by the counter angle of inclination. The warpAffine transform is then applied to fine-tune the shape of the image: warpAffine takes the inverse transformation as input and transforms the image by multiplication with a specified 2 × 3 matrix. The input position corresponding to every output pixel is calculated using Eq. (1).

\[
\begin{bmatrix} x_{\mathrm{input}} \\ y_{\mathrm{input}} \end{bmatrix}
=
\begin{bmatrix} M_{0,0} & M_{0,1} & M_{0,2} \\ M_{1,0} & M_{1,1} & M_{1,2} \end{bmatrix}
\begin{bmatrix} x_{\mathrm{output}} \\ y_{\mathrm{output}} \\ 1 \end{bmatrix}
\qquad (1)
\]

The output of this section is portrayed in Fig. 2b. In some cases, however, if the input is flipped or the skew angle is greater than 90° clockwise, the resulting image may be rotated by 90° or 180°, or flipped horizontally or vertically. In these cases the present orientation of the image must be detected and corrected. In the example shown, the image is flipped 90° clockwise.
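As a simplified, hedged illustration of skew estimation (using PCA of the foreground pixel coordinates, one of the spatial-domain techniques surveyed in the introduction, rather than the exact bounding-box and warpAffine pipeline described here, which is typically built on an image library such as OpenCV):

```python
import numpy as np

def estimate_skew_angle(coords):
    """Estimate text skew (degrees, 0 = horizontal) as the
    orientation of the principal axis of the foreground (white)
    pixel coordinates. `coords` is an (N, 2) array of (x, y)."""
    coords = np.asarray(coords, dtype=float)
    centered = coords - coords.mean(axis=0)
    cov = np.cov(centered.T)                 # 2x2 covariance of x and y
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]       # principal axis direction
    return np.degrees(np.arctan2(v[1], v[0])) % 180.0
```

Rotating the image by the negative of this angle straightens the text lines; the resulting deskew is then refined by the orientation checks of the following subsections.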

2.3 Detection of Vertical Text Apply Canny's edge detector to the initially deskewed image, then apply the Hough transform to detect the lines of text. Equation (2) expresses the straight line of Eq. (3) in polar coordinates, where (ρ, θ) defines a vector from the origin to the nearest point on the straight line; this vector is perpendicular to the line. Any line in the x, y plane corresponds to a point in the 2-D space defined by the parameters ρ and θ; thus the Hough transform of a straight line in x, y space is a single point in ρ, θ space.

ρ = x cos θ + y sin θ    (2)

y = mx + c    (3)

Following the Hough transform, obtain the x-coordinates of a line and compute the difference between the coordinates. If the difference is zero, rotate the image 90° clockwise. If the image was rotated 270°, the output after these operations would be perfectly

616

H. N. Balachandra et al.

deskewed. If the image was rotated 90°, the output would be rotated 180°, and if the input image was flipped it would remain flipped. The output of this subsection is depicted in Fig. 2c.
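The polar-form relation in Eq. (2) can be checked numerically: every point on the straight line of Eq. (3) maps to (nearly) the same (ρ, θ) pair. A small pure-Python sketch (function names are illustrative):

```python
import math

def hough_rho(x, y, theta):
    """Eq. (2): signed distance from the origin to the line whose normal
    makes angle theta with the x-axis."""
    return x * math.cos(theta) + y * math.sin(theta)

# Points on the line y = 1*x + 2 (Eq. (3) with m = 1, c = 2).
m, c = 1.0, 2.0
points = [(x, m * x + c) for x in range(5)]

# The line's normal direction satisfies tan(theta) = -1/m.
theta = math.atan2(1.0, -m)
rhos = [hough_rho(x, y, theta) for x, y in points]
```

All five points yield the same ρ, which is why collinear edge pixels vote for a single accumulator cell in Hough space.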

2.4 Deskew Based on Tesseract Feedback When a skewed image is passed to the Tesseract OCR, the output does not match the text present in the image. To detect proper text, we define the condition that the Tesseract output must contain English vowels and, in the next stage of checking, the words 'the' and 'is'. Counting these two words, we keep the threshold count at four; if fewer than four are found, the image is not properly deskewed. The proper-text condition can be changed according to the document being scanned. The image is passed to the Tesseract OCR and its feedback is checked. If the feedback condition is satisfied the image is deskewed; otherwise the image is rotated 180° clockwise. The image is then passed to the Tesseract OCR again and the feedback checked: if it is satisfied the process ends; if not, the image is rotated by 180° clockwise and then flipped vertically. The image is passed to the Tesseract OCR once more and the feedback checked: if it is satisfied the process ends; if not, the image is flipped vertically and then horizontally, and the process ends. Figure 2d portrays the final deskewed output and Fig. 2e the Tesseract output.
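The feedback stipulation described above can be sketched as a small heuristic; the exact counting rule is an interpretation of the text (vowels present, and at least four combined occurrences of 'the' and 'is'), and the function name is hypothetical — real use would pass in Tesseract's OCR output string:

```python
def looks_deskewed(ocr_text, threshold=4):
    """Heuristic check on OCR feedback, per the paper's stipulation:
    the text must contain English vowels, and the combined count of
    the words 'the' and 'is' must reach the threshold (four)."""
    lowered = ocr_text.lower()
    has_vowel = any(v in lowered for v in "aeiou")
    keyword_count = sum(1 for w in lowered.split() if w in ("the", "is"))
    return has_vowel and keyword_count >= threshold

good = looks_deskewed("the cat is on the mat and the dog is out")
bad = looks_deskewed("zxq rrr")
```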

3 Experimental Results and Discussion This section gives a detailed view of the materials used to evaluate the performance of the proposed algorithm. The algorithm is tested with samples created using Microsoft Word. Figure 3a, b are given to the proposed algorithm, and the output achieved is a 0° skewed image, portrayed in Fig. 3c. Table 1 compares the performance of the proposed algorithm with algorithms [5] and [11]. Algorithm [5] employs a Fourier transform method for deskewing and handles images with skewness up to 45° in either direction. Algorithm [11] employs the horizontal projection profile to deskew images with skewness up to 90° in either direction.

4 Conclusion and Scope for Future Work The main aim of the proposed algorithm is to deskew scanned images of any degree of skewness to 0° skewness; horizontally and vertically flipped inputs were also considered. In the proposed algorithm the Warp-Affine transform is used to

An Unconstrained Rotation Invariant Approach for Document Skew Estimation. . .

617

Fig. 3 (a) Image with 90° inclined text and flipped vertically. (b) Image with +30° skewness. (c) Deskewed image

Table 1 Performance comparison

Angle in degrees   | Algorithm [12] | Algorithm [5] | Proposed algorithm
−360               | No             | No            | Yes
−180               | No             | No            | Yes
−45                | Yes            | Yes           | Yes
90                 | Yes            | No            | Yes
135                | No             | No            | Yes
Flip (all angles)  | No             | No            | Yes

reshape the image after the initial deskew, the Hough transform together with the Canny edge detector is used to detect the vertical alignment of the text, and the Tesseract OCR feedback is used to check for the flip condition. The proposed algorithm was evaluated on 40 images with various degrees of skewness, and the results were comparable. The algorithm can be further improved by replacing the Tesseract OCR feedback with an image processing technique.

References
1. Yuan, B. and Tan, C.L.: "Skew estimation for scanned documents from noises". Eighth International Conference on Document Analysis and Recognition, 2005, pp. 277–281.
2. Fabrizio, J.: "A precise skew estimation algorithm for document images using KNN clustering and Fourier transform". IEEE International Conference on Image Processing (ICIP), October 2014, pp. 2585–2588.
3. Nandini, N., Srikanta Murthy, K. and Hemantha Kumar, G.: "Estimation of skew angle in binary document images using Hough transform". International Scholarly and Scientific Research & Innovation, 2008, pp. 44–49.
4. Sarfraz, M., Zidouri, A. and Shahab, S.A.: "A novel approach for skew estimation of document images in OCR system". International Conference on Computer Graphics, Imaging and Vision: New Trends, IEEE, July 2005, pp. 175–180.


5. Kaur, M. and Jindal, S.: "An integrated skew detection and correction using Fast Fourier transform and DCT". International Journal of Scientific & Technology Research, December 2013, vol. 2(12), pp. 164–169.
6. Srihari, S.N. and Govindaraju, V.: "Analysis of textual images using the Hough transform". Machine Vision and Applications, 1989, 2(3), pp. 141–153.
7. Zhu, X. and Yin, X.: "A new textual/non-textual Recognition", 2002. Proceedings, 16th International Conference on (Vol. 1, pp. 480–482). IEEE, 2002. Science, Engineering and Applications (IJCSEA) 3, no. 3 (2013).
8. Nguyen, D.T., Nguyen, T.M. and Nguyen, T.G.: "A robust document skew estimation algorithm using mathematical morphology". IEEE, October 2007, pp. 496–503.
9. Sarfraz, M., Mahmoud, S.A. and Rasheed, Z.: "On skew estimation and correction of text". IEEE Conference on Computer Graphics, Imaging and Visualization (CGIV), August 2007, pp. 308–313.
10. Ju, Z. and Gu, G.: "Algorithm of document skew detection based on character vertices". Intelligent Information Technology Application (IITA), 2009, Vol. 2, pp. 23–26.
11. Arvind, K.R., Kumar, J. and Ramakrishnan, A.G.: "Entropy based skew correction of document images". International Conference on Pattern Recognition and Machine Intelligence, Springer, Berlin, Heidelberg, December 2007, pp. 495–502.
12. Caprari, R.S.: "Algorithm for text page up/down orientation determination". Pattern Recognition Letters, 2000, 21(4), pp. 311–317.

Smart Assistive Shoes for Blind N. Sohan, S. Urs Ruthuja, H. S. Sai Rishab, and R. Shashidhar

1 Introduction Blindness is the condition of being unable to see because of injury, disease, or loss of sight from birth. According to statistics from the World Health Organization, about 285 million people in the world are visually impaired; of these, 39 million are blind and 246 million have low vision. Modern technological solutions have been introduced to help blind people navigate independently. One of the conventional aids used by people with visual disability is the walking cane (also called the white cane or stick). A system extending the functionality of the white cane acquires the front scene of the user using a Microsoft Kinect sensor and maps it into a pattern representation [1]. A head-mounted navigation system based on stereo vision scans the scene for field information and includes 3D traversability analysis [2]. Another system developed a cane that communicates with users through voice alerts sent through speakers [3]. Some previously proposed systems include: a wearable system for mobility improvement of visually impaired people, which detects obstacles surrounding the user using a multi-sonar system and sends appropriate vibrotactile feedback [4]; a navigation aid for the blind based on a microcontroller with synthetic speech output [5]; and smart canes for the visually impaired capable of detecting obstacles [6]. For safe navigation, a system using ultrasonic sensors and a USB camera was considered [7]; the USB camera finds properties of the obstacle, such as the presence of a human, based on face detection. A robot assistant for blind people was presented [8], which follows an obstacle-free path, detects the motion of

N. Sohan () · S. U. Ruthuja · H. S. S. Rishab · R. Shashidhar Department of Electronics and Communication, JSS Science and Technology University, Sri Jayachamarajendra College of Engineering, Mysore, Karnataka, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_61

619

620

N. Sohan et al.

user, and the speed of the robot can be controlled. A model with GPS and GSM along with ultrasonic and IR sensors was designed to guide blind people on their way [9]. A cognitive aid system for blind people was developed as an artificial vision system [10]. A head hat and mini hand stick for obstacle-free navigation was proposed [11], using infrared sensors and PIC microcontrollers for obstacle detection. Another study presented a blind navigation system based on RFID, wireless and mobile communications technologies [12]. A research work aimed at developing a visual aid for blind people was based on ZigBee, sonar and specially designed software systems for voice processing and command generation [13]. IR sensors, however, are less efficient and less accurate at detecting objects. Visually challenged people usually use the white cane for guidance, which has many drawbacks during daily routines and when moving through unfamiliar areas, and can make them feel uncomfortable; it is also difficult to place all the components on a white cane, and the stick becomes heavy for a blind person to carry. To improve on such systems, the main idea of this work is to develop a technological aid that lets blind people travel easily. The system is integrated into the blind person's shoe and uses ultrasonic sensors, which provide more accuracy in detecting objects. Based on the direction in which the obstacle is present, voice commands are generated to alert the person, giving clear information about the obstacles. For emergencies, an emergency system that sends location information is designed. The idea of smart shoes for the blind also includes power generation from footsteps, which helps as power backup. Many ideas have been presented regarding footstep power generation: there was research on the development of piezoelectric ceramic material [14], and a methodology for electrical power generation based on converting mechanical energy to electrical energy was presented in 2013 [15].

2 Prototype Design 2.1 Hardware Description The primary sensors used in the prototype are ultrasonic sensors. An ultrasonic module works on the principle of electrical–mechanical energy transformation to measure the distance from the sensor to the target object. The ultrasonic sensor comprises a transmitter and a receiver, available either as separate units or together as a single unit. The development boards used are the Arduino Uno and Arduino Mega. The GSM module used is the GSM A6 module. The Vcc pin of the GSM module is connected to the power key (PWR) pin, which acts as chip enable, to power up the module. A slot for a valid SIM is provided at the back of the module; the slot supports a Micro SIM, and for a Nano SIM a converter must be used to place the SIM in the slot. The GPS module gives information about the current location and time at regular intervals. A GPS receiver connected to the smart shoe receives the signals


travelling at the speed of light from at least three satellites orbiting the Earth; this process is called trilateration. The GPS module used in this work is the G702 module from U-BLOX, which is small, has TTL output and a built-in antenna. The serial MP3 player module is used to give voice commands to the blind user. This module has an MP3 audio chip, the YX5300; it supports sampling frequencies of 8–48 kHz and WAV file formats, and operates at a voltage of 3.2–5.2 V. It also has a TF card socket for a micro SD card that stores the audio files. The microcontroller to which the MP3 player is interfaced can control the playback state by sending commands to the module via the UART port; there are commands to change tracks, volume, play mode and so on. The sensor used for power backup is the piezoelectric sensor, which works on the principle of the piezoelectric effect: it converts changes in pressure, acceleration, temperature, strain, or force into an electrical charge.

2.2 System Design The Arduino board has a USB connection and an external power supply; external power can come from either an AC-to-DC adapter or a battery. The board operates with a supply of 6–20 V and can be powered from the DC power jack, the USB connector, or the VIN pin. A power backup unit is provided using piezoelectric sensors. Piezo sensors produce AC voltage, so a rectifier, such as a bridge rectifier circuit, is used to convert it to a DC output. The electrical energy generated by the piezo sensors is stored in a lithium-ion polymer battery, charged through a TP4056 module whose input comes from the piezo sensors; alternatively, the module can be powered through the external micro-USB port provided on it. The output voltage of the LiPo battery is around 3.7 V, so a booster module up-converts it to 9 V, which is sufficient to drive the Arduino. As the GSM module requires a 5 V supply, the battery output is regulated to 5 V using a 7805 IC, which provides sufficient current and voltage to operate the GSM module. The ultrasonic sensor sends out a sound wave at ultrasonic frequency and waits for it to bounce back from the object; the time delay between transmission and reception of the sound is used to calculate the distance. Based on the distance of the object, voice commands are produced. The voice commands corresponding to each distance are loaded onto an SD card and played through the serial MP3 player module (YX5300), which gives the recorded voice output for the direction in which the object is detected. For example, if there is an object close to the left ultrasonic sensor, the MP3 player gives the voice command "left obstacle", so the blind person knows about the presence of the object.
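The distance calculation and direction-based voice-command selection described above can be sketched as follows; the threshold value and names are illustrative assumptions, not the authors' firmware:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s expressed in cm per microsecond

def echo_to_distance_cm(echo_time_us):
    """Distance = (round-trip echo time x speed of sound) / 2."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def pick_voice_command(left_cm, right_cm, limit_cm=60.0):
    """Return the alert to play, based on which sensor sees a close obstacle."""
    if left_cm <= limit_cm and left_cm <= right_cm:
        return "left obstacle"
    if right_cm <= limit_cm:
        return "right obstacle"
    return None  # nothing within range: stay silent

d = echo_to_distance_cm(1750.0)   # echo of ~1.75 ms -> ~30 cm
cmd = pick_voice_command(d, 120.0)
```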


Fig. 1 Block diagram of the proposed system

The emergency system is interfaced with the GPS and GSM modules. Whenever the blind person needs help, he can press the emergency switch to send an alert message containing his current location to the caretaker. The GPS receiver collects the necessary information from the satellites, and trilateration is used to determine the user's exact position. This information is sent as a message to the caretaker's mobile with the help of the GSM module. The proposed block diagram is shown in Fig. 1.
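Trilateration fixes a position from known distances to reference points. A minimal 2-D sketch follows (a real GPS receiver solves in 3-D and also estimates its clock bias; all names and values here are illustrative):

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for the point whose distances to p1, p2, p3 are r1, r2, r3,
    by subtracting the first circle equation from the other two, which
    leaves a 2x2 linear system (2-D sketch of GPS trilateration)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three anchors with distances measured from the true position (3, 4).
p = trilaterate_2d((0, 0), 5.0,
                   (10, 0), math.hypot(7, 4),
                   (0, 10), math.hypot(3, 6))
```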

3 Result and Discussion The final model with all the mentioned features was implemented using the necessary modules, mounted on a pair of shoes. The components are placed such that one shoe carries the ultrasonic part and the other the GSM and GPS part. A series of piezo sensors was placed in the sole, right below the heel, so that the force created by the weight of the body is concentrated on a single point; this pointed force is necessary to generate enough voltage from the piezo sensors. The ultrasonic sensors were placed on the sides of the shoe to detect obstacles, with their range set to 10–60 cm. If an obstacle is detected within this range, the respective voice commands are played from the serial MP3 player, which the user hears through earphones (with a long wire). Speakers can be used in indoor environments; earphones are best outdoors. The volume of the serial MP3 player is programmed to change with the distance of the obstacle: a farther object produces a low-volume alert, while a nearer obstacle produces a slightly louder one. The power backup unit produced sufficient power to operate the specific modules. A minor problem observed with the piezo sensors is that their current output is very low, so it was not possible to charge the battery from them; hence the battery was charged via the micro-USB port available on the TP4056 module. The emergency button was interfaced with the GSM and GPS modules: when the blind person presses it, a "HELP ME!" message with his current location is sent to the mobile number of the caretaker. Figure 2 shows the proposed prototype model of the smart shoe in (a), the sole of the shoe fitted with piezo sensors in (b) and the power unit connections in (c), and Fig. 3 shows the help message received by the caretaker.

Fig. 2 (a) Prototype model of smart shoe, (b) Piezo sensors attached to sole of the shoe, (c) Power unit connections


Fig. 3 Help message received by the caretaker

4 Conclusion This research work serves a social cause and can help those who need it. The drawbacks of other proposed systems, such as the reduced efficiency of IR sensors and the need for a stick, are overcome by our system. Since wearing shoes is ordinary, there is no embarrassment in using the system in public. The work was successfully implemented with high object-detection efficiency using ultrasonic sensors and gives clear guidance to the blind person. In future, the performance of the system can be enhanced by adding a camera to guide the blind and reduce the load on the user: images obtained by a web camera can help identify objects, scan for the number of objects in the person's path, and even detect the shape of an object if needed. The concept of safe-path detection based on neural networks can also be included. Hence, the idea of this paper can play a major role in assisting the blind.

References
1. Filipe, Vítor, Nuno Faria, Hugo Paredes, Hugo Fernandes, and João Barroso. "Assisted Guidance for the Blind Using the Kinect Device." In Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, pp. 13–19. ACM, 2016.
2. Pradeep, Vivek, Gerard Medioni, and James Weiland. "Robot vision for the visually impaired." In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pp. 15–22. IEEE, 2010.
3. Wahab, Mohd Helmy Abd, Amirul A. Talib, Herdawatie A. Kadir, Ayob Johari, Ahmad Noraziah, Roslina M. Sidek, and Ariffin A. Mutalib. "Smart cane: Assistive cane for visually-impaired people." arXiv preprint arXiv:1110.5156 (2011).
4. Cardin, Sylvain, Daniel Thalmann, and Frédéric Vexo. "A wearable system for mobility improvement of visually impaired people." The Visual Computer 23, no. 2 (2007): 109–118.
5. Bousbia-Salah, Mounir, Maamar Bettayeb, and Allal Larbi. "A navigation aid for blind people." Journal of Intelligent & Robotic Systems 64, no. 3–4 (2011): 387–400.
6. Velázquez, Ramiro. "Wearable assistive devices for the blind." In Wearable and autonomous biomedical devices and systems for smart environment, pp. 331–349. Springer, Berlin, Heidelberg, 2010.
7. Kumar, Amit, Rusha Patra, M. Manjunatha, Jayanta Mukhopadhyay, and Arun K. Majumdar. "An electronic travel aid for navigation of visually impaired persons." In Communication systems and networks (COMSNETS), 2011 third international conference on, pp. 1–5. IEEE, 2011.


8. Toha, Siti Fauziah, Hazlina Md Yusof, Mohd Fakhruddin Razali, and Abdul Hadi Abdul Halim. "Intelligent path guidance robot for blind person assistance." In Informatics, Electronics & Vision (ICIEV), 2015 International Conference on, pp. 1–5. IEEE, 2015.
9. Swain, Kunja Bihari, Rakesh Kumar Patnaik, Suchandra Pal, Raja Rajeswari, Aparna Mishra, and Charusmita Dash. "Arduino based automated STICK GUIDE for a visually impaired person." In Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2017 IEEE International Conference on, pp. 407–410. IEEE, 2017.
10. Dunai, Larisa, Ismael Lengua Lengua, Ignacio Tortajada, and Fernando Brusola Simon. "Obstacle detectors for visually impaired people." In Optimization of Electrical and Electronic Equipment (OPTIM), 2014 International Conference on, pp. 809–816. IEEE, 2014.
11. Al-Fahoum, Amjed S., Heba B. Al-Hmoud, and Ausaila A. Al-Fraihat. "A smart infrared microcontroller-based blind guidance system." Active and Passive Electronic Components 2013 (2013).
12. Ding, Bin, Haitao Yuan, Li Jiang, and Xiaoning Zang. "The research on blind navigation system based on RFID." In Wireless Communications, Networking and Mobile Computing, 2007. WiCom 2007. International Conference on, pp. 2058–2061. IEEE, 2007.
13. Amutha, B., and Karthick Nanmaran. "Development of a ZigBee based virtual eye for visually impaired persons." In Indoor Positioning and Indoor Navigation (IPIN), 2014 International Conference on, pp. 564–574. IEEE, 2014.
14. Wang, Ya, and Daniel J. Inman. "A survey of control strategies for simultaneous vibration suppression and energy harvesting via piezoceramics." Journal of Intelligent Material Systems and Structures 23, no. 18 (2012): 2021–2037.
15. Ghosh, Joydev, Supratim Sen, Amit Saha, and Samir Basak. "Electrical power generation using foot step for urban area energy applications." In Advances in Computing, Communications and Informatics (ICACCI), 2013 International Conference on, pp. 1367–1369. IEEE, 2013.

Comparative Study on Various Techniques Involved in Designing a Computer Aided Diagnosis (CAD) System for Mammogram Classification A. R. Mrunalini, A. R. NareshKumar, and J. Premaladha

1 Introduction Mammography is an effective breast screening technique used to identify abnormalities in the breast such as masses and micro-calcifications [1]. Mammogram images are subjected to various pre-processing techniques to improve their visual quality so that they can be segmented accurately [2]. The overall process for designing a CAD system is presented as a flow chart in Fig. 2. • Various pre-processing techniques can be used to remove noise and enhance the visual quality of an image, such as the mean filter, median filter, adaptive median filter, Gaussian filter, Partial Differential Equation (PDE) based methods, etc. [3]. Adaptive Fuzzy Logic based Bi-Histogram Equalization (AFBHE) is a technique for improving the quality of mammograms for better interpretation [4]. • In segmentation the representation of the image is changed to make it more meaningful for analysis; the region of interest containing abnormalities is partitioned from the rest of the image. Segmentation approaches can be classified as region-based, edge-detection, feature-based clustering, thresholding and model-based. The region-based approach relies on homogeneity and classifies the image into regions [5]. • Feature learning techniques can be used for regular and irregular object detection. Feature learning algorithms distinguish between patterns and classify them into classes; they extract common patterns automatically so that these can be used for further classification. Feature learning can be done using deep learning techniques and convolutional neural networks to study the patterns in mammogram images for classification.

A. R. Mrunalini · A. R. NareshKumar · J. Premaladha () School of Computing, SASTRA Deemed-to-be-University, Thanjavur, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_62

627

628

A. R. Mrunalini et al.

• Classification is the final step in the design of a CAD system where the type of abnormality is identified and distinguished into separate classes. The accuracy of the classification depends on the first three steps mentioned above.

2 Materials and Methods 2.1 Pre-Processing Techniques Pre-processing removes noise and improves the image contrast; its accuracy mainly impacts the results of segmentation. Weak boundaries and unrelated parts can be removed using pre-processing. Noise can get introduced into an image during storage, transmission or processing, or it might be present when the image was created, and can be removed through various pre-processing techniques. The pre-processing techniques discussed in this paper include:

• Median filter
• Mean filter
• Adaptive median filter
• Gaussian filter
• Partial Differential Equations
• Adaptive Fuzzy Logic based Bi-Histogram Equalization (AFBHE)
• Unsharp Masking (UM) based enhancement techniques

2.1.1 Mean Filter

Mean filter or Average filter is a low pass filter where the average value is placed as the centre value for every position [1]. The aim of using a mean filter is to improve the appearance of an image for interpretation. Mean filter is a linear filter or a convolution filter which can be represented as a matrix. It moves through the image, pixel by pixel by replacing the centre value by the mean value of the neighbouring pixels.
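A minimal pure-Python sketch of a 3×3 mean filter as described above (borders are left unchanged here for simplicity):

```python
def mean_filter_3x3(img):
    """3x3 mean (average) filter: each interior pixel is replaced by the
    mean of its 3x3 neighbourhood; border pixels are left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(window) / 9.0
    return out

# A single bright pixel is averaged down, i.e. spread into its neighbourhood.
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
smoothed = mean_filter_3x3(img)
```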

2.1.2 Median Filter

Median filtering is a non-linear filtering technique that decreases salt and pepper noise while preserving the sharpness of the edges in an image. This works better than the mean filter as it removes noise, while mean filter spreads the noise evenly. Median is the middle value of its neighbouring pixels where half of the neighbour pixels are smaller and half are larger. The disadvantage of median filter is that it treats noise and fine detail in a similar manner and hence it removes fine details also.
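The same neighbourhood scheme with a median instead of a mean removes an isolated salt-noise pixel completely, which the mean filter would only spread out; a sketch:

```python
def median_filter_3x3(img):
    """3x3 median filter: replaces each interior pixel with the median of
    its neighbourhood, suppressing salt-and-pepper outliers while
    preserving edges better than averaging."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # middle of the 9 sorted values
    return out

# A salt-noise pixel (255) inside a flat region of 10s is removed entirely.
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
denoised = median_filter_3x3(img)
```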

2.1.3 Adaptive Median Filter

This filter preserves edges and determines which pixels are affected by impulse noise by performing spatial processing [9]. Adaptive median filtering is an advanced technique compared with median filtering: each pixel in the image is compared with its surrounding neighbour pixels and, based on that comparison, classified as noise or not. The median filter performs well only while the spatial density of the impulse noise is not large, whereas the adaptive median filter works even when it is.

2.1.4 Wiener Filter

An image with a lower MSE value has better visual quality, and the Wiener filter lowers the MSE during noise smoothing and inverse filtering. It does not alter the mammogram content but improves its quality, simultaneously removing noise and inverting blurring. It is based on the Fourier transform and requires only a short computational time to find a solution [2].
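The MSE and PSNR measures used to judge such filters can be computed directly; a small sketch on flattened images (the sample values are illustrative):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB. Higher PSNR (lower MSE) means
    the filtered image is closer to the reference."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(max_val ** 2 / e)

ref = [100, 100, 100, 100]
noisy = [90, 110, 100, 100]
```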

2.1.5 Gaussian Filter

A Gaussian filter is a non-uniform low-pass filter built from the Gaussian function, through which low- and high-frequency signal distortion can be controlled. An image can be smoothed and interpolated simultaneously using a Gaussian filter, with variance σ² ≥ 1. At location (y, x), for non-integer row index y and column index x, the estimated intensity is an average of local pixel values, weighted by the Gaussian density function in Eq. (1):

$$
g(y, x) = \sum_{i=[y-3\sigma]}^{[y+3\sigma]} \; \sum_{j=[x-3\sigma]}^{[x+3\sigma]} \frac{f_{ij}}{2\pi\sigma^2} \exp\!\left(-\frac{(i-y)^2 + (j-x)^2}{2\sigma^2}\right) \qquad (1)
$$

2.1.6 Partial Differential Equations

PDE can be applied for image de-noising and enhancement, edge detection, shape extraction and analysis, image recognition, image retrieval, image segmentation, image reconstruction, medical image processing, and motion analysis. Measured with the Contrast Improvement Index (CII) and PSNR, the mesh-free approach was found to be computationally more effective and accurate.

2.1.7 Adaptive Fuzzy Logic Based Bi-Histogram Equalization (AFBHE)

This method preserves the local information from original mammograms and can evaluate dense breasts to detect abnormalities and improves the quality of mammograms. It is a fully adaptive method that enhances each image based on its characteristics. The ROI and the other areas differ in contrast only by a small value, hence a controlled enhancement is required which is provided by the Adaptive Fuzzy Logic based Bi-Histogram Equalization (AFBHE).

2.1.8 Unsharp Masking (UM) Based Enhancement Techniques

This technique specifically enhances the tumour region with respect to its background and is best suited for medical images. Enhancement is achieved by subtracting the unsharp version of the image from the original image. UM is a computationally simple approach that enhances contrast by sharpening the features in the image, including the tumour region; it improves the contrast in the lesion region and enhances the lesion margins and fine details for better visualization. The pre-processed image is then segmented to identify abnormal regions for feature learning.

2.1.9 Contrast Stretching

Contrast stretching is a technique in which the range of intensity values in an image is stretched to improve its contrast. Contrast is a measure of the complete range of intensity values contained within an image and can be calculated by subtracting the minimum pixel value from the maximum pixel value. Contrast stretching is demonstrated in Fig. 1 [6]. The results for the Wiener, Gaussian, mean, median and contrast stretching techniques were evaluated using the MSE, PSNR and CNR parameters and are presented in Table 1.
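A minimal sketch of linear contrast stretching as described above (pixel minus minimum, scaled by the output range over the input range):

```python
def contrast_stretch(img, out_min=0.0, out_max=255.0):
    """Linearly stretch pixel intensities to the full [out_min, out_max]
    range: out = out_min + (p - min) * (out_max - out_min) / (max - min)."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    scale = (out_max - out_min) / (hi - lo)
    return [[out_min + (p - lo) * scale for p in row] for row in img]

# Intensities 50..200 are stretched to cover 0..255.
img = [[50, 100],
       [150, 200]]
stretched = contrast_stretch(img)
```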

3 Segmentation Mammogram segmentation helps to divide mammograms into distinct regions so that the presence of abnormalities can be detected accurately. Active contour is an important method in image segmentation; edge-based and region-based methods are active contour methods. Chan-Vese models can be used for segmenting noisy images. Though Chan-Vese level set segmentation provides the best


Fig. 1 ‘mdb001’ sample from MIAS database before and after contrast stretching done using Matlab

Table 1 Performance of the enhancement techniques based on MSE, PSNR and CNR parameters

S. No  Technique            MSE     PSNR     CNR
1      Wiener filter        1.4194  26.9395  6.9742
2      Gaussian filter      1.3935  36.6896  6.8582
3      Mean filter          5.8467  42.3309  6.7085
4      Median filter        4.6552  42.6932  1.2423
5      Contrast stretching  2.8426  23.5935  1.6703

recall, regions of interest cannot be segmented accurately in all cases. The identification process for lesions should be restricted to the relevant areas of the image. The efficiency of the CAD system strongly depends on the accuracy of the segmentation result. Segmentation can be done in many ways; a few of the methods are compared below along with their efficiency.

3.1 FCM Segmentation Fuzzy C Means clustering segmentation method considers the mammogram image as two grey-level distinct images and can cluster overlapping data effectively. The drawback is random initialization of clusters which produces less accurate segmentation results. The performance of this segmentation method was evaluated using accuracy, completeness and correctness. When compared with the other approaches it was found that the correctness percentage of FCM segmentation method was higher [9]. But to achieve better results using this method, analysis can be done by including data from various datasets and steps can be taken to improve the boundary extraction accuracy.


3.2 Segmentation with Prior Knowledge Learning In this method breast tumour segmentation begins with prior knowledge learning, where prior information is gathered to classify the abnormal regions accurately, and the base segmentation model is developed from this prior knowledge-based information. Intensity distribution, shape compactness and texture are taken as the information. In the prior-knowledge segmented image, incorrectly segmented abnormal regions can be detected and segmented exactly; that is, the results of the base segmentation can be corrected through the learned prior information. Figure 2 shows the steps involved in designing a CAD system.

3.3 Chan-Vese Level Set Segmentation
Chan-Vese level set segmentation can represent even weak boundaries. The image is segmented into two parts, with pixels of similar intensities grouped together. The evolution starts from an initial circle and converges when the difference between the region variances becomes maximal. The segmented image is then subjected to feature learning techniques.
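At the heart of the Chan-Vese model is a piecewise-constant region competition: pixels are assigned to whichever of the two region means they are closer to in intensity, and the means are re-estimated alternately. The NumPy sketch below shows only that data term, as an illustration; it deliberately omits the level-set curve evolution and the curve-length regulariser of the full model.

```python
import numpy as np

def two_phase_segment(img, n_iter=50):
    """Two-phase piecewise-constant segmentation: the data term of the
    Chan-Vese model without the curve-length regulariser. Pixels go to
    whichever region mean (c1 inside, c2 outside) is closer in
    intensity, and the means are re-estimated until stable."""
    img = np.asarray(img, dtype=np.float64)
    inside = img >= img.mean()                 # initial partition
    for _ in range(n_iter):
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        new_inside = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_inside, inside):
            break                              # converged
        inside = new_inside
    return inside
```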

3.4 Active Contour-Based Method
A region-based active contour method was suggested [6] which retains the advantages of level set evolution without re-initialization. The Chan-Vese level set model does not work well for images with intensity inhomogeneity, and its convergence at the global minimum cannot be guaranteed. Hence this model was suggested to guarantee convergence at the global minimum (Figs. 3 and 4) [7].

Fig. 2 Overall workflow of designing a CAD system

IMAGE ACQUISITION → PRE-PROCESSING → SEGMENTATION → FEATURE LEARNING → CLASSIFICATION

Comparative Study on Various Techniques Involved in Designing a Computer. . .


Fig. 3 Mammogram image with high intensity

Fig. 4 Evolution of the contour for the mammogram image from MIAS database (mdb184), where (a) shows the original image, (b) shows the initial contour, (c) shows the image with final contour obtained after 5 iterations

3.5 Segmentation Based on Pixel-Wise Clustering
Segmentation between different tissues was done using K-means clustering [8]. The consistency of the segmentation depends on validating the region of interest after the removal of noise, artifacts and the pectoral muscle. The evaluation compared the breast region with the background and pectoral muscle region, and the performance of the method was measured by its accuracy in separating the breast region alone from the background and pectoral muscle [9]. Figure 5 shows different segmentation results for a mammogram image retrieved from the MIAS database.
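Pixel-wise clustering of this kind can be sketched in a few lines; the example below uses scikit-learn's KMeans on grey-level intensities (assuming scikit-learn is available; the exact features and post-processing of [8] are not reproduced here).

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_tissues(img, k=3, seed=0):
    """Pixel-wise K-means on grey-level intensities, a sketch of the
    clustering step: with k=3 the clusters might correspond to
    background, pectoral muscle and breast tissue.
    Returns a label image with k classes."""
    img = np.asarray(img, dtype=np.float64)
    pixels = img.reshape(-1, 1)          # one feature per pixel: intensity
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    return km.labels_.reshape(img.shape)
```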

3.6 Segmentation Based on Morphological Operators
Two morphological techniques were used, namely opening by reconstruction and closing by reconstruction [10]. Opening by reconstruction involves erosion followed by morphological reconstruction, where the eroded image is the marker and the original image is the mask. In closing by reconstruction, which removes imperfections while preserving contours, the dilated image is the marker and the original image is the mask. Through these operations the region of interest containing the tumour can be detected, as shown in Fig. 6.

Fig. 5 (a) Good quality image where segmentation has been done as desired. (b) Over-segmentation of the breast. (c) Under-segmentation of the breast. (d) Pectoral muscle boundary has not been detected
Fig. 6 (a) Cropped image, (b) Tumour
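The marker/mask mechanics described above can be sketched directly: reconstruction repeatedly dilates the marker and clips it under the mask until nothing changes. This is a minimal illustration assuming SciPy is available; production code would typically use skimage.morphology.reconstruction instead.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def reconstruct_by_dilation(marker, mask, size=3):
    """Grayscale morphological reconstruction: dilate the marker and
    clip it by the mask, repeating until stability."""
    marker = np.minimum(marker, mask).astype(np.float64)
    while True:
        grown = np.minimum(grey_dilation(marker, size=(size, size)), mask)
        if np.array_equal(grown, marker):
            return grown
        marker = grown

def opening_by_reconstruction(img, size=3):
    """Erode to form the marker, then reconstruct under the original
    image (the mask): bright structures smaller than the structuring
    element are removed, larger ones are restored exactly."""
    img = np.asarray(img, dtype=np.float64)
    return reconstruct_by_dilation(grey_erosion(img, size=(size, size)), img)
```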

4 Feature Learning
Feature learning through deep learning proves to be a powerful diagnostic tool for classification, as it uses multiple levels of representation, and memory consumption can be reduced by a technique called pooling. The performance of feature extraction techniques can be evaluated using statistical features, ridgelet, curvelet and contourlet transform-based methods [11].


4.1 Feature Extraction Using Phylogenetic Tree
Samples can be compared using phylogenetic trees built from texture analysis, where a cladogram is derived from the matrix of distances between the different grey-level areas in the region of interest [12]. Images were retrieved from the DDSM, MIAS and IRMA databases. This method performs texture-based extraction of features from the region of interest, and the phylogenetic diversity in the region of interest is calculated using diversity indexes computed from that distance matrix. The accuracy, specificity and sensitivity values obtained in [13] show that, when random forests were used for classification, the results were superior or equal to those of existing methods; the worst results were obtained with neural networks. Figure 7 shows the ROI separated from an image retrieved from the DDSM database, and Table 2 shows the results of this method.

4.2 Sequential Forward Feature Selection Approach (SFFS)
This method was combined with quadratic discriminant analysis for identifying important features. Features are added to the set based on their degree of relevancy, and samples were added based on random sampling. Feature significance is tested with a two-sample t-test, and the features found significant can be taken forward for classification [14]. Figure 8 shows (A) shape and texture analysis in regions of interest, (B) a low-energy craniocaudal view, (C) a recombined image, mediolateral oblique view, and (D) a recombined image, craniocaudal view.

Fig. 7 (a) The benign ROI. (b) The malignant ROI
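The two-sample t-test screening step can be sketched as below (assuming SciPy is available). Only the statistical screening is shown; the greedy forward-selection wrapper of SFFS is omitted for brevity.

```python
import numpy as np
from scipy.stats import ttest_ind

def screen_features(X, y, alpha=0.05):
    """Two-sample t-test per feature between the two classes in y;
    keeps indices of features whose p-value is below alpha. This is
    the significance-screening step described above."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y)
    a, b = X[y == 0], X[y == 1]
    keep = []
    for j in range(X.shape[1]):
        _, p = ttest_ind(a[:, j], b[:, j])
        if p < alpha:
            keep.append(j)
    return keep
```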

Table 2 Performance of the phylogenetic tree-based feature extraction method

Base  Accuracy (%)  Sensitivity (%)  Specificity (%)  ROC
DDSM  99.73         99.41            99.84            1.00
MIAS  100           100              100              1.00


Fig. 8 Contrast accumulation within a breast
Fig. 9 MIAS database images, (a) original image, (b) region of abnormality

4.3 Dual Tree M-Band Wavelet Transform (DTMBWT)
This method was applied to microcalcification images obtained from the mini-MIAS database, as mentioned in [6]; the mammograms were represented in a multi-resolution manner and classified using an SVM classifier. The mammogram is decomposed by DTMBWT, which generates sub-bands using M-band filter banks. The number of decomposition levels and the number of filter banks together determine the number of sub-bands created by DTMBWT (Fig. 9) [15].
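DTMBWT itself uses M-band filter banks; as a hedged stand-in, the 2-band (Haar) special case below shows how one decomposition level splits an image into sub-bands (here LL, LH, HL, HH). The true dual-tree M-band filters of [15] are not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One level of an orthonormal 2-D Haar wavelet decomposition,
    the 2-band analogue of an M-band filter bank: the image is split
    into approximation (LL) and detail (LH, HL, HH) sub-bands, each a
    quarter of the original size. Energy is preserved exactly."""
    img = np.asarray(img, dtype=np.float64)
    a = img[0::2, :] + img[1::2, :]   # vertical sums (low-pass on rows)
    d = img[0::2, :] - img[1::2, :]   # vertical differences (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

Applying the same split recursively to LL produces the multi-level decomposition whose sub-band count, as the text notes, grows with the number of levels and bands.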

5 Classification
Classification of the mammogram images is the final step in the design of a CAD system, where the labels are validated to assess the performance of the feature extraction technique. The classification techniques discussed here are Support Vector Machine (SVM), Random Forests (RF) and K-Nearest Neighbour (KNN).

Table 3 Performance of polynomial kernel based on classification accuracy

Level of decomposition  Benign  Malignant  Average
1                       56.5    52.83      54.67
2                       67.5    73.17      70.33
3                       94.33   89.33      91.83
4                       75.67   72.67      74.17
5                       61.17   52.5       56.83

5.1 Support Vector Machine (SVM)
The type of kernel used has an effect on the accuracy of the SVM. When a polynomial kernel was used along with DTMBWT, the accuracy achieved was higher than with other feature extraction methods such as fractal features and DWT coefficients. Leave-One-Out Cross-Validation (LOOCV) was used for constructing the model, so every sample was used as a test sample at least once. The performance was estimated based on accuracy, sensitivity, specificity and area under the receiver operating characteristic curve. Table 3 shows the performance of the polynomial kernel with SVM as the percentage classification accuracy for benign, malignant and average regions [16].
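The evaluation protocol above (polynomial-kernel SVM under LOOCV) can be sketched with scikit-learn, assuming that library is available; the DTMBWT feature vectors themselves are not reproduced, so any feature matrix X stands in for them.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def loocv_accuracy(X, y, degree=3):
    """Leave-one-out cross-validated accuracy of an SVM with a
    polynomial kernel: every sample serves as the test sample exactly
    once, mirroring the evaluation described above."""
    clf = SVC(kernel="poly", degree=degree)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    return scores.mean()
```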

5.2 Random Forests
Random Forests combines the results of hundreds of decision trees and is based on the bootstrap aggregating principle and the random subspace method. Cross-validation or a separate test set is not essential, since classification error can be measured on the Out-Of-Bag (OOB) observations [17]. When compared on the ability to predict the labels of unknown data, Random Forest was found to be more accurate than decision trees, K-Nearest Neighbour and SVM [18].
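The OOB mechanism is built into scikit-learn's implementation (assumed available here): each tree trains on a bootstrap sample, and the samples it never saw act as its private test set, so no separate validation split is needed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_with_oob(X, y, n_trees=200, seed=0):
    """Random Forest with the out-of-bag error estimate described
    above: oob_score_ is the accuracy measured on samples left out of
    each tree's bootstrap, replacing a separate test set."""
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=seed)
    rf.fit(X, y)
    return rf, rf.oob_score_
```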

5.3 K-Nearest Neighbour
The classification accuracy depends on the choice of the integer k, which is a critical step. If the value is large, classification becomes computationally expensive; if it is small, noise has a great impact on the outcome. Thus various values of k are tried, and the value which gives the best classification accuracy is chosen [18]. The performance of the various classifiers is displayed in Fig. 10, in the form of a graph taken from [18].
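The try-several-k procedure described above amounts to a small model-selection loop; a sketch with scikit-learn (assumed available):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def best_k(X, y, candidates=(1, 3, 5, 7), cv=5):
    """Try several values of k and keep the one with the best
    cross-validated accuracy, as described above. Returns the chosen
    k and the per-k scores."""
    scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                 X, y, cv=cv).mean()
              for k in candidates}
    return max(scores, key=scores.get), scores
```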


Fig. 10 Performance analysis of various classifiers (accuracy %) for KNN, SVM, Decision Tree and Random Forest, on the data of Vibha et al and Neeraj et al

6 Results
Among the filters used for pre-processing, it was inferred that the Gaussian and Wiener filters removed speckle noise best, while the median and mean filters were good at removing salt-and-pepper noise; the mean filter could also remove Gaussian noise effectively. The active contour method for segmentation was found to perform well for images that converge at the global minimum, Chan-Vese level set segmentation had the advantage of representing weak boundaries, and segmentation based on morphological operators had higher accuracy than K-means. For feature extraction using phylogenetic trees, it was found that the greater the distances and the number of hierarchical levels in the tree, the more significant the diversity in a community. Among the classifiers, Random Forests performed better than Support Vector Machine (SVM) and K-Nearest Neighbour [19].

References

1. Qayyum A, Basit A. Automatic breast segmentation and cancer detection via SVM in mammograms. Conf Proc ICET IEEE. 2016; pp. 1-6.
2. George MJ, Sankar SP. Efficient preprocessing filters and mass segmentation techniques for mammogram images. Conf Proc ICCS IEEE. 2017; pp. 408-413.
3. Roty S, Wiratkapun C, Tanawongsuwan R, Phongsuphap S. Analysis of microcalcification features for pathological classification of mammograms. Conf Proc BMEiCON IEEE. 2017; pp. 1-5.
4. Sheba KU, Gladston Raj S. Adaptive fuzzy logic based bi-histogram equalization for contrast enhancement of mammograms. Conf Proc ICICICT. 2017; pp. 6-7.
5. Bhateja V, Misra M, Urooj S. Unsharp masking approaches for HVS based enhancement of mammographic masses: A comparative evaluation. FGCS. 2017.
6. Berbar MA. Hybrid methods for feature extraction for breast masses classification. Egyptian Informatics Journal. 2017.
7. Feudjio C, Tiedeu A, Klein J, Colot O. Automatic extraction of breast region in raw mammograms using a combined strategy. Conf Proc SITIS. 2017; pp. 158-162.


8. Ponraj N, Winston J, Mercy M. Novel local binary textural pattern for analysis and classification of mammogram using support vector machine. Conf Proc ICSPC IEEE. 2017; pp. 380-383.
9. Soomro S, Choi KN. Robust active contours for mammogram image segmentation. Conf Proc ICIP IEEE. 2017; pp. 2149-2153.
10. Jothilakshmi GR, Raaza A. Effective detection of mass abnormalities and its classification using multi-SVM classifier with digital mammogram images. Conf Proc ICCCSP IEEE. 2017; pp. 1-6.
11. Singh S, Kumar V, Verma HK, Singh D. SVM based system for classification of microcalcifications in digital mammograms. Conf Proc EMBS'06 IEEE. 2006; pp. 4747-4750.
12. Joseph AM, John MG, Dhas AS. Mammogram image denoising filters: A comparative study. Conf Proc ICEDSS IEEE. 2017; pp. 184-189.
13. Kumar V, Mohanty F, Dash B, Rup S. A hybrid computer-aided diagnosis system for abnormality detection in mammograms. Conf Proc RTEICT IEEE. 2017; pp. 496-500.
14. Ponraj N, Mercy M. Texture analysis of mammogram for the detection of breast cancer using LBP and LGP: A comparison. Conf Proc ICoAC IEEE. 2016; pp. 182-185.
15. Shi P, Zhong J, Rampun A, Wang H. A hierarchical pipeline for breast boundary segmentation and calcification detection in mammograms. Computers in Biology and Medicine. 2018 May 1;96:178-88.
16. Smaoui N, Hlima A. Designing a new approach for the segmentation of the cancerous breast mass. Conf Proc SSD IEEE. 2016; pp. 313-317.
17. Rodríguez-Cristerna A, Gómez-Flores W, de Albuquerque Pereira WC. A computer-aided diagnosis system for breast ultrasound based on weighted BI-RADS classes. Computer Methods and Programs in Biomedicine. 2018 Jan 1;153:33-40.
18. Selvathi D, AarthyPoornila A. Performance analysis of various classifiers on deep learning network for breast cancer detection. Conf Proc ICSPC IEEE. 2017; pp. 359-363.
19. Carvalho ED, de Carvalho Filho AO, de Sousa AD, Silva AC, Gattass M. Method of differentiation of benign and malignant masses in digital mammograms using texture analysis based on phylogenetic diversity. Computers & Electrical Engineering. 2018 Apr 30;67:210-22.

Traffic Flow Prediction Using Regression and Deep Learning Approach Savita Lonare and R. Bhramaramba

1 Introduction
The population of urban and developed metro cities is growing rapidly, and the number of vehicles on the road is increasing sharply with socio-economic development. The existing transportation systems and the capacity of the road network cannot accommodate the growing number of vehicles, which results in heavy traffic, deadlocks on the road, and loss of time and money. Expanding roads or adding lanes to handle increasing traffic is a costly solution and is not always feasible, since it requires a lot of money and extra space, and extra land is often unavailable. Another solution is to use the existing road network wisely: its efficient use can alleviate the traffic problem to a certain extent, saving both the money and the time required for expanding the infrastructure. The strategies used to control traffic involve short-term forecasting of the traffic. Short-term forecasting can be used by people to make more appropriate route decisions and thus avoid congestion and delay. Precise short-term traffic forecasting is therefore an important part of traffic control and, obviously, of Intelligent Transportation Systems (ITS). As the name suggests, a short-term traffic forecast only predicts the traffic flow in the near future, say a few minutes ahead. Owing to the earlier lack of traffic data capturing resources such as GPS and surveillance cameras, sources of real-time data were limited, so earlier short-term forecasting technologies deviated from real-time traffic data. Nowadays huge volumes of historical traffic data, including parameters such as traffic volume, time-stamp, vehicle velocity and events in proximity, are easily available in a timely manner. This information can be used to improve forecasting results, making traffic prediction more

S. Lonare () · R. Bhramaramba Department of CSE, GITAM, Visakhapatnam, India © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_63


reliable. Google has its own prediction algorithm, which is quite good, but it is not in the open domain. Many traffic data analysis techniques have been proposed over the past few years, and many of them address short-term traffic forecasting. Forecasting techniques can be broadly categorized into two types: parametric and non-parametric approaches. A parametric learning model always has a set of parameters of fixed size; whatever data you input, the parameter size does not change. It works in two steps: selection of a function, and then learning the coefficients of that function from training data. For example, the equation of a line is given as

a0 + a1 ∗ x1 + a2 ∗ x2 = 0

(1)

where a0, a1, a2 are the coefficients in the equation of the line, and x1 and x2 are input variables. The aim is to design a predictive model that estimates a0, a1, a2 so as to satisfy the line equation. Logistic Regression, the perceptron, Naïve Bayes, Linear Discriminant Analysis and simple neural networks are examples of parametric methods. Due to their fixed-size parameters, parametric methods are simpler, faster and can work with less data. Their constraint is that they strictly follow the specified form and cannot be used for complex problems, as the fixed form is unlikely to match the underlying mapping function. Nonparametric methods, on the other hand, do not follow strict rules about the form of the mapping function: the number of parameters grows as the size of the training set increases. They can fit various forms of mapping functions, which is their obvious advantage, and they can achieve higher predictive performance. Their limitations are that they require more data, are therefore slower than parametric methods, and carry a higher risk of overfitting the training data.
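The two-step parametric recipe above (choose the functional form, then learn its fixed-size coefficient vector from training data) can be illustrated with an ordinary least-squares fit of a linear model in NumPy:

```python
import numpy as np

def fit_linear(x1, x2, y):
    """Least-squares estimate of the fixed-size parameter vector
    (a0, a1, a2) of the linear model y = a0 + a1*x1 + a2*x2: the form
    is chosen in advance, only the coefficients are learned."""
    A = np.column_stack([np.ones_like(x1), x1, x2])  # design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [a0, a1, a2]
```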

2 Related Work
Among the parametric methods, the Autoregressive Integrated Moving Average (ARIMA) model is the most widely known, and it is the most accepted framework for constructing traffic forecast models. Many researchers have worked with ARIMA over the years. In the 1980s, M. Levin and Y. Tsao applied Box-Jenkins time series analysis to forecast freeway traffic flow, and their research showed that the ARIMA (0, 1, 1) model was the most statistically significant [1]. In 2010, Hamedi et al. applied the ARIMA model to traffic flow forecasting on urban roads [2]. Other enhanced approaches such as Kohonen-ARIMA, vector autoregressive ARIMA and subset ARIMA have also been used for short-term traffic forecasting [3-5]. ARIMA has proven to be theoretically sound and practically effective, and these parametric approaches can attain good performance when traffic shows its usual variations. However, owing to the limitations of parametric approaches discussed earlier, forecast errors appear when the traffic does not show regular variations.


Nonparametric prediction methods address this problem. Researchers have proposed methods such as neural network prediction [6], Kalman filtering [7, 8], nonparametric regression [9] and Support Vector Machines (SVM) [10]. Later, many researchers worked on combinations of these algorithms [11-14]; Li Ying-Hong proposed a combination of predictive models for short-term traffic flow forecasting [15]. In recent years, deep learning algorithms have gained the attention of researchers owing to their adaptive nature with respect to the number of parameters; the most widely used techniques are SAE, CNN and LSTM. Zheng Zhao et al. put forward a traffic prediction method that considers temporal-spatial correlation, using a greedy layer-wise unsupervised learning algorithm to forecast traffic, and showed that LSTM works better than SAE, RBF, SVM and ARIMA models [16]. Yipeng Liu et al. used spatial-temporal information and a bidirectional LSTM for accurate traffic prediction [17]. A. Moussavi-Khalkhali and Mo Jamshidi used sparse autoencoders to increase the accuracy of a regression model [18]. Arief Koesdwiady et al. proposed a method for accurate traffic prediction that investigates the correlation between traffic flow and weather conditions [19]; in [19], weather and traffic data are predicted separately and the results are then merged to improve accuracy. In [20] the authors proposed a GRNN model for traffic prediction and compared its results with ARIMA, SES (Single Exponential Smoothing) and MA (Moving Average). Over the last decade, numerous traffic sensors, GPS devices and surveillance cameras have been installed on the road network. These resources generate a massive amount of high-density traffic data, but this huge volume brings the problem of 'data explosion': excess data that may or may not be useful for a machine learning algorithm. Such data may be redundant, which can affect the performance of the algorithm, and the issue has gained increasing attention. It is challenging to handle these data with parametric prediction methods because of their dimensionality limitations, and most conventional traffic forecasting techniques take a limited approach, e.g. searching for trivial correlations within limited data.

3 Methodology
One can define traffic flow (F) as the number of vehicles passing a given location in a given interval of time. Traffic flow prediction can then be defined as follows:

F = n/T

(2)

where n is the number of vehicles present at a particular location during the time interval T.


Let F(t, i) be the flow of traffic observed at the t-th time interval at observation point i on a road transportation network, where t = 1, 2, 3, . . . , T and i = 1, 2, 3, . . . , m. The prediction problem is to find the traffic flow at the i-th location at a future time interval (t + Δ), where Δ is called the prediction horizon. A large number of traffic flow prediction algorithms have been proposed and developed owing to the growing need for real-time traffic prediction in ITS, drawing on various techniques and disciplines. When producing a traffic forecast, one should consider both previous knowledge and the current state of the traffic; that is, traffic flow must be treated as a spatio-temporal entity, since traffic prediction is strongly related to space and time. In this work we integrate the spatio-temporal correlation while using an LSTM network for short-term traffic prediction. The forecast for the next moment, denoted t + 1, is based on the previous learning of the LSTM network and the current input.
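Before a sequence model such as the LSTM used here can be trained, the flow series must be framed as supervised pairs: a window of past intervals as input and the flow Δ steps ahead as the target. A minimal NumPy sketch of that framing (the LSTM itself is not reproduced):

```python
import numpy as np

def make_windows(flow, lookback, horizon=1):
    """Frame a univariate traffic-flow series as supervised pairs:
    X holds `lookback` past intervals, y the flow `horizon` steps
    ahead (the prediction horizon Δ from the text). X is shaped
    (samples, lookback, 1), the layout sequence models expect."""
    flow = np.asarray(flow, dtype=np.float64)
    X, y = [], []
    for t in range(len(flow) - lookback - horizon + 1):
        X.append(flow[t:t + lookback])
        y.append(flow[t + lookback + horizon - 1])
    return np.stack(X)[..., None], np.array(y)
```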

4 Simulation Results
4.1 Data Description
In this work we collected data from https://data.gov.uk. The traffic data covers 3 months for the highway A38, Alfreton, UK, and was filtered for training. We used 2 months of data for training and 1 month for inference. The LSTM network was trained and tested for 40 epochs for road AL2134.

4.1.1 Experiment Result
Evaluation of the Forecast Result
Three criteria are commonly used to evaluate the performance of a traffic forecast model:
1. Mean absolute error (MAE)
2. Root mean square error (RMSE)
3. Mean relative error (MRE)
In this preliminary study, Fig. 1 clearly shows that the traffic on Saturday and Sunday is comparatively lower than on the other 5 days of the week. We performed a data cleaning step before the experiment. The parameters considered are date, flow, time period, average journey time, average speed of vehicle, data quality, link length and day of week.
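The three criteria listed above are simple to compute; a NumPy sketch:

```python
import numpy as np

def mae(actual, pred):
    """Mean absolute error."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(pred)))

def rmse(actual, pred):
    """Root mean square error."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2))

def mre(actual, pred):
    """Mean relative error: absolute error scaled by the actual flow."""
    actual = np.asarray(actual, dtype=np.float64)
    return np.mean(np.abs(actual - np.asarray(pred)) / np.abs(actual))
```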


Fig. 1 Actual vs. predicted [ARIMA and LSTM] for 15 min ahead prediction

Fig. 2 Actual vs. predicted for 15 min and 45 min ahead prediction for LSTM

4.2 Performance Using Different Algorithms
The following graphs show the performance of LSTM versus ARIMA and regression. Figure 1 shows actual versus predicted traffic flow for 15 min ahead using ARIMA and LSTM. The MAE for LSTM is 11.004434, whereas for ARIMA it is 13.476508. The graph shows that ARIMA improves gradually as the difference between actual and predicted values shrinks, but the LSTM model shows lower error from the beginning, as it is trained on the most recent month of data. Figure 2 shows the performance of LSTM for 15 min ahead and 45 min ahead traffic prediction; the MAE of LSTM for 45 min ahead prediction is 21.586245. Figure 3 shows actual values versus LSTM, Linear Regression and Logistic Regression for 15 min ahead prediction. Figure 4 shows the performance of LSTM vs. Linear Regression vs. Logistic Regression vs. ARIMA for 15 and 45 min (Table 1).

Fig. 3 Actual vs. LSTM, linear regression and logistic regression for 15 min ahead prediction
Fig. 4 LSTM vs. linear regression vs. logistic regression vs. ARIMA

5 Conclusion
Accurate short-term traffic flow forecasting is very crucial in ITS. In this paper we have compared LSTM with ARIMA and regression algorithms for 15 min and 45 min ahead traffic flow prediction. From the graphs, MAE and MSE, LSTM is proven


Table 1 Traffic flow prediction performance for 15 min and 45 min

Algorithm                   Mean square error  Mean absolute error
LSTM (15 min)               252.682709         11.004434
LSTM (45 min)               695.013026         21.586245
Linear regression (15 min)  1643.254313        36.423444
Linear regression (45 min)  3097.406573        51.482132
Log regression (15 min)     1792.698885        22.497594
Log regression (45 min)     5057.469805        38.404904
ARIMA (15 min)              362.996155         13.476508

to have higher accuracy. LSTM also shows low errors from the beginning, whereas ARIMA improves its performance gradually. In future work we will try to combine other neural network algorithms with LSTM, and we will introduce more parameters into the traffic prediction.

References

1. Levin, M., Tsao, Y.: On forecasting freeway occupancies and volumes, Transportation Research Record, vol 773, pp. 47-49, Transportation Research Board (1980)
2. Farokhi Sadabadi, K., Hamedi, M., Haghani, A.: Evaluating moving average techniques in short-term travel time prediction using an AVI data set, Transportation Research Board 89th Annual Meeting, Washington DC (2010)
3. Van der Voort, M., Dougherty, M., Watson, S.: Combining Kohonen maps with ARIMA time series models to forecast traffic flow, Transport. Res. C, pp. 307-318 (1996)
4. Lee, S., Fambro, D.: Application of subset autoregressive integrated moving average model for short-term freeway traffic volume forecasting, Transp. Res. Rec., pp. 179-188 (1999)
5. Williams, B.: Multivariate vehicular traffic flow prediction: evaluation of ARIMAX modeling, Transp. Res. Rec., vol 1776, pp. 194-200 (2001)
6. Karlaftis, M.G., Vlahogianni, E.I.: Statistical methods versus neural networks in transportation research: differences, similarities and some insights, Transport. Res. C Emerg. Technol., vol 19, issue 3, pp. 387-399 (2010)
7. Okutani, I., Stephanedes, Y.J.: Dynamic prediction of traffic volume through Kalman filtering theory, Transport. Res. B Methodol., vol 18, issue 1, pp. 1-11 (1984)
8. Liu, H., van Zuylen, H.J., van Lint, H., et al.: Predicting urban arterial travel time with state-space neural networks and Kalman filters, Transport. Res. Record, 1968, pp. 99-108 (2006)
9. Rosenblad, A., Faraway, J.J.: Extending the linear model with R: generalized linear, mixed effects and nonparametric regression models, Chapman and Hall, vol 24, pp. 369-370, Springer-Verlag (2009)
10. Zhang, Y., Liu, Y.: Traffic forecasting using least squares support vector machines, Transportmetrica, vol 5, issue 3, pp. 193-213 (2009)
11. Zhang, Y., Haghani, A.: A gradient boosting method to improve travel time prediction, Transport. Res. C Emerg. Technol., vol 58, pp. 308-324, Elsevier (2015)
12-13. Zou, Y., Hua, X., Zhang, Y., et al.: A hybrid method for short-term freeway travel time prediction based on wavelet neural network and Markov chain, Can. J. Civ. Eng., vol 45, issue 2, pp. 77-86 (2017)


14. Li, S., Liu, L.J., Zhai, M.: Prediction for short-term traffic flow based on modified PSO optimized BP neural network, Syst. Eng. Theory Practice, vol 9, pp. 2045-2049 (2012)
15. Li, Y.-H., Liu, L.-M., Wang, Y.-Q.: Short-term traffic flow prediction based on combination of predictive models, J. Transport. Syst. Eng. Inf. Tech., vol 13, issue 2, pp. 34-41 (2013)
16. Zhao, Z., Chen, W., Wu, X., Chen, P.C.Y., Liu, J.: LSTM network: a deep learning approach for short-term traffic forecast, IET Intelligent Transport Systems, vol 11, issue 2, pp. 68-75 (2017)
17. Liu, Y., Zheng, H., Feng, X.: Short-term traffic flow prediction with Conv-LSTM, WCSP-9, pp. 1-6, IEEE, China (2017)
18. Moussavi-Khalkhali, A., Jamshidi, M.: Constructing a deep regression model utilizing cascaded sparse autoencoders and stochastic gradient descent, ICMLA, pp. 559-564 (2016)
19. Koesdwiady, A., Soua, R., Karray, F.: Improving traffic flow prediction with weather information in connected cars: a deep learning approach, IEEE TVT, vol 65, issue 12, pp. 9508-9517 (2016)
20. Joko, L.B., Victor, H., Ahmad, S., Saprina, M.: Generalized regression neural network for predicting traffic flow, ICTS 2016, pp. 199-202, IEEE (2016)

A Comparative Study on Assessment of Carotid Artery Using Various Techniques S. Mounica, B. Thamotharan, and S. Ramakrishnan

1 Introduction
Ultrasound imaging is used to estimate atherosclerotic involvement of the carotid artery. For detection of atherosclerosis at an early stage, IMT is the most widely used parameter; it is defined as the distance between the lumen-intima interface (LII) and the media-adventitia interface (MAI). A major role in the initiation and progression of atherosclerosis is played by increased cholesterol levels carried by low-density lipoproteins, whereas high-density lipoproteins are considered protective. A quick and easy method for analysing inflammatory status is the Neutrophil-to-Lymphocyte Ratio (NLR), evaluated as the ratio of the absolute neutrophil count to the absolute lymphocyte count [6]. Carotid artery occlusion was diagnosed when embolic material was identified in the artery wall by longitudinal and transverse ultrasound, without a colour flow signal or Doppler spectrum. Using magnetic resonance imaging, the associations between atherosclerotic plaque features and the differences between symptomatic and asymptomatic carotid atherosclerotic plaques can be studied (Fig. 1).

S. Mounica · B. Thamotharan () · S. Ramakrishnan School of Computing, SASTRA Deemed-to-Be-University, Thanjavur, India e-mail: [email protected] © Springer Nature Switzerland AG 2020 S. Smys et al. (eds.), New Trends in Computational Vision and Bio-inspired Computing, https://doi.org/10.1007/978-3-030-41862-5_64

Fig. 1 Overall workflow of a system

IMAGE ACQUISITION → PRE-PROCESSING → SEGMENTATION → PERFORMANCE ANALYSIS

2 Image Acquisition
2.1 Carotid Endarterectomy
Plaques were removed by endarterectomy from asymptomatic and symptomatic patients and, to show their different histological vulnerability, divided into three regions: A, upstream of the maximal stenosis; B, the station of maximal stenosis; C, downstream of the maximal stenosis [1]. Endarterectomy was performed so that the stenotic lesion was removed in an uninterrupted manner. Plaque composition was analysed by immunohistochemistry and plaque cytokine content by multiplex technology. Fibromodulin was significantly greater in plaques from patients with diabetes, and a high fibromodulin level was associated with a greater extent of cerebrovascular events on examination [11], although no similar association was observed for lumican. Fibromodulin levels were also associated with plaque lipids and many proinflammatory cytokines (Fig. 2).

2.2 Doppler Ultrasound
The detection of microembolic signals (MES) on transcranial Doppler during exposure of the arteries in CEA was compared with preoperative contrast-enhanced ultrasound findings of the cervical carotid arteries, and the predictive efficiency was assessed against the grayscale median (GSM) [2]. The extent of carotid atherosclerotic plaque formation is associated with the carotid intima-media thickness (IMT). Colour duplex ultrasound allows visualization of blood flow velocities and of the proximal wall of the ICA; this quick, noninvasive method, which does not require contrast administration, offers a dynamic view of the vessels [17]. Peak systolic velocity (PSV) is the best Doppler parameter for computing the degree of carotid artery stenosis.
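As an illustration of how PSV maps to a stenosis grade, the sketch below uses commonly cited consensus cut-offs (PSV < 125 cm/s: <50%; 125-230 cm/s: 50-69%; >230 cm/s: ≥70%). These thresholds are an assumption for illustration, not taken from the paper above, and real grading also considers end-diastolic velocity and PSV ratios.

```python
def grade_stenosis(psv_cm_s):
    """Illustrative ICA stenosis grade from peak systolic velocity,
    using commonly cited consensus thresholds (assumed here, not from
    the cited study); clinical grading uses additional parameters."""
    if psv_cm_s < 125:
        return "<50%"
    if psv_cm_s <= 230:
        return "50-69%"
    return ">=70% (severe)"
```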


Fig. 2 Carotid endarterectomy [14]
Fig. 3 Standard scanner using spectral Doppler ultrasound

The Doppler measurement of the middle third of the common carotid artery was performed in a longitudinal view, with the sample volume placed in the centre of the vessel (Fig. 3).


S. Mounica et al.

2.3 B-Mode Ultrasound
The cadmium level in blood was analyzed by inductively coupled plasma mass spectrometry. To detect plaque, a "window" comprising 3 cm of the distal common carotid artery, the bifurcation, and 1 cm of the internal and external carotid arteries was scanned, with the field of interest in the right common carotid artery [20]. The lumen and the plaque in the carotid artery are identified by their structure and by their image contrast relative to the surrounding tissues. For speckle-noise removal, the input image is processed with morphological operators and then with an anisotropic diffusion filter that preserves the ultrasound values corresponding to the artery. The information obtained is used to define two initial curves, one corresponding to the lumen and the other to the plaque boundary. This segmentation is carried out with the Chan-Vese level set model.
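The anisotropic diffusion step described above can be sketched with the classic Perona-Malik scheme, which smooths speckle inside homogeneous regions while an edge-stopping conduction term suppresses diffusion across strong boundaries such as the artery wall. The parameter values and the use of wrap-around neighbours via `np.roll` are illustrative assumptions:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Perona-Malik anisotropic diffusion for speckle reduction.
    kappa controls edge sensitivity; gamma is the integration step."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        # (np.roll wraps at the borders; fine for a sketch)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential edge-stopping conduction coefficients
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a homogeneous region corrupted by speckle-like noise, the filter reduces the intensity variance while large gradients (vessel edges) diffuse much more slowly because their conduction coefficients are near zero.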

2.4 Computed Tomography
CT provides information on the vascular tree, the degree of stenosis, and the presence of calcified plaque; its main limitation is the characterization of soft plaque. Histology remains the gold standard, although it is available only when tissue is excised. The potential of ultrasound as a non-invasive imaging technique for the carotid artery has been validated against X-ray computed tomography (CT) [4]. Pre-interventional cerebral CT/MRI is used for the assessment of white matter changes (WMC). A highly specific and sensitive sign of arterial dissection on CT angiography is a widened eccentric lumen bounded by crescent-shaped mural thickening and thin annular enhancement (Fig. 4).

Fig. 4 Computed tomography


Fig. 5 Standard scanner using spectral Doppler ultrasound

2.5 Reflection-Mode All-Optical Laser Ultrasound (LUS)
This imaging method yields high-resolution, non-contact, non-ionizing images of the carotid artery wall and calcification, as shown in Fig. 5. All-optical LUS is used for robust, high-frequency, operator-independent data acquisition and reconstruction. Compared with confocal LUS, the inner layer of the artery wall, the vessel geometry, and the calcification are resolved with higher resolution and fewer artifacts [5].

2.6 3D Ultrasound Imaging
Freehand 3D ultrasound imaging is based on optical motion-tracking technology. The 3D space coordinates are determined by L-frame calibration: eight digital cameras detect the positions of the L-frame and, capturing simultaneously, provide a 3D view of the volume. An exhaustive assessment of stenosis requires determining the plaque characteristics and the vessel boundary, together with imaging of the flow entering the stenosis. As shown in Fig. 6, 3D imaging of carotid stenosis can be performed in two ways: (1) 3D reconstruction of the internal carotid artery plaque structure from B-mode US of the vessel wall combined with parenchymal (CT/MRI) imaging, and (2) 3D reconstruction of the residual internal lumen, visualized by power Doppler or other imaging methods [16].
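At the core of freehand 3D reconstruction is mapping each pixel of a tracked B-mode frame into 3D space by chaining the probe-calibration transform with the tracked probe pose. A minimal sketch, assuming both transforms are available as 4x4 homogeneous matrices (the names and pixel scales are illustrative, not from this chapter):

```python
import numpy as np

def pixel_to_world(u, v, scale_xy, image_to_probe, probe_to_world):
    """Map a B-mode pixel (u, v) into 3D tracker coordinates.
    scale_xy: (sx, sy) mm-per-pixel scaling of the image plane;
    image_to_probe: 4x4 calibration transform (from L-frame calibration);
    probe_to_world: 4x4 tracked pose of the probe for this frame."""
    sx, sy = scale_xy
    # pixel expressed in mm on the image plane (z = 0), homogeneous form
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])
    p_world = probe_to_world @ image_to_probe @ p_image
    return p_world[:3]
```

Applying this to every pixel of every tracked frame scatters the 2D data into the 3D volume, which is then resampled onto a regular voxel grid for visualization.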


Fig. 6 3D ultrasound imaging

2.7 MRI
In a state-of-the-art approach, Diffusion Tensor Imaging (DTI) of carotid plaques was performed ex vivo using a high-magnetic-field MRI scanner. MRI provides 3D data sets with isotropic or nearly isotropic voxels that are highly reproducible, independent of insonation angle, and less dependent on operator experience. Using multi-parameter MRI consisting of time-of-flight (TOF), T1-weighted, and T2-weighted sequences, plaque features such as recent hemorrhage, lipid-rich necrotic core, intimal calcification, and fibrous tissue can be determined. The accurate relationship among thick and a thin (