Congress on Intelligent Systems: Proceedings of CIS 2020, Volume 2 (Advances in Intelligent Systems and Computing, 1335) 9813369833, 9789813369832



English Pages [785]


Table of contents:
Preface
Contents
About the Editors
Model-Based Data Collection Systems on Fog Platforms
1 Introduction
2 CPS Platforms
3 AmIS as a Set of FS Applications
4 Typical Tasks of Data Collection in Software-Intensive Systems
5 Proposed Approach to Data Collection Systems Development
6 Models Based on Perception Transformation
7 CAS Generalized Model
8 Data Collection System as an Element of a System of Control
9 Work with Knowledge and Big Data Problem
10 Control in Model Terms
11 Model Operations
12 Model Approach as a Form of Virtualization
13 Potential Benefits
14 Practical Use of the Proposed Approach
15 Conclusion
References
Notification System for Text Detection on Document Images Using Graphical User Interface (GUI)
1 Introduction
2 Methodology
2.1 Fuzzy C-Means and Deghost Method Hybrid Combination
2.2 Mobile Notification System
3 Result and Discussion
4 Conclusion
References
Estimation of Road Damage for Earthquake Evacuation Guidance System Linked with SNS
1 Introduction
1.1 Current Situation of the Earthquake Disaster
1.2 Measures Against Earthquake Damage in Japan
1.3 Means of Obtaining Information When the Earthquake Has Occurred
1.4 Related Study
1.5 Research Purpose
2 Overview of the Proposed System
2.1 Internal Specification
3 Estimation of Damage to Passage
3.1 Support Vector Machine (SVM)
3.2 Transfer Learning by VGG 16
4 Experiment
4.1 Evaluation Method
5 Result
6 Conclusion
References
Analysis of the Messages from Social Network for Emergency Cases Detection
1 Introduction
2 Related Works
3 Approach Description
3.1 Message Filtering
3.2 Message Aggregation
3.3 Visualization of the Received Data
4 Usage Scenario
4.1 Earthquake Research
4.2 Resource Research
5 Conclusion
References
Intelligent Simulation of Competitive Behavior in a Business System
1 Introduction
2 Model of Competitive Behavior in the Business System
3 Model of Competitive Behavior in the Business System
4 Finding Equilibrium Points
5 Conclusions
References
Minimizing the Subset of Features on BDHS Dataset to Improve Prediction on Pregnancy Termination
1 Introduction
2 Literature Review
3 Feature Selection and Classification Task
3.1 Correlation-Based Feature Selection
3.2 Information Gain Based Feature Selection
3.3 Symmetrical Uncertainty
3.4 Gain Ratio
3.5 ‘Relief’ Feature Selection
3.6 Classifiers
4 Proposed Methodology
5 Result and Observation
6 Conclusion
References
PUNER-Parsi ULMFiT for Named-Entity Recognition in Persian Texts
1 Introduction
2 Related Work
3 Learning Models
3.1 Machine Learning Model
3.2 BiLSTM-Deep Learning Model
3.3 Transfer Learning Model
4 Methodology
4.1 Voting Classifier Using Machine Learning Approach
4.2 BiLSTM Model Using Deep Learning Approach
4.3 PUNER Using Transfer Learning Approach
5 Experimental Results
5.1 Dataset
5.2 Results
6 Conclusion
References
A Coverage and Connectivity of WSN in 3D Surface Using Sailfish Optimizer
1 Introduction
2 Sailfish Optimizer (SFO)
3 Mathematical Coverage Model
4 Experimental Results
5 Conclusion
References
A Classification Model for Software Bug Prediction Based on Ensemble Deep Learning Approach Boosted with SMOTE Technique
1 Introduction
2 Review of Related Works
2.1 ML Based Techniques
2.2 Feature Selection-Based Approaches for SFP
2.3 Motivation of the Study
3 Investigated Software Projects
4 The Proposed Methodology
4.1 Preprocessing Techniques
4.2 Proposed Classification Paradigm
4.3 Evaluation Measures
5 Evaluation Results and Discussion
5.1 Feature Selection (FS) Results
5.2 Evaluation of MLP Classifier
5.3 Evaluation Results Using Bagging Ensemble Method
5.4 Comparing with State-of-the-Art Methods
6 Conclusion and Future Works
References
Modeling the Relationship Between Distance and Received Signal Strength Indicator of the Wi-Fi Over the Sea to Extract Data in Situ from a Marine Monitoring Buoy
1 Introduction
2 Materials and Methods
3 Results
4 Conclusions
References
Data Classification Model for Fog-Enabled Mobile IoT Systems
1 Introduction
2 Background
3 Data Classification Model
3.1 General Structure of the Data Classification Model
3.2 The Structure of the Data Classification Model in the Fog Layer
3.3 Algorithm for Data Classification for the Fog Node
4 Data Classification Model Evaluation
5 Conclusion
References
Multi-Objective Teaching–Learning-Based Optimization for Vehicle Fuel Saving Consumption
1 Introduction
2 Related Work
2.1 Teaching Learning-Based Optimizer
2.2 Vehicle Fuel Consumption Optimization
3 MTLBO for the Vehicle Fuel Consumption
3.1 Pareto-Optimal Solution
3.2 MTLBO for Vehicle Fuel Saving Consumption
4 Experimental Results
5 Conclusion
References
A Tripod-Type Walking Assistance for the Stroke Patient
1 Introduction
2 Normal Gait and Abnormal Gait
3 Force Plate Analysis
4 Tripod Balancing System
4.1 Design of Tripod Balancing Support
4.2 Static Structural Analysis of the Tripod
4.3 Testing of Tripod with Patient
5 Conclusion
References
Data Routing with Load Balancing Using Ant Colony Optimization Algorithm
1 Introduction
2 Ant Colony Optimization and Its Features
3 Proposed Work
4 Simulation Results
5 Conclusion
References
Speech Emotion Recognition Using Machine Learning Techniques
1 Introduction
2 Methods
2.1 Speech Emotional Database
2.2 Feature Extraction from Speech
2.3 Machine Learning Algorithms
3 Results and Discussions
4 Conclusion and Future Work
References
Feature-Based AD Assessment Using ML
1 Introduction
2 Related Work
3 Methodology
3.1 Dataset Preparation
3.2 Feature Extraction
4 Results
5 Conclusion
References
Show-Based Logical Profound Learning Demonstrates Utilizing ECM Fuzzy Deduction Rules in DDoS Assaults for WLAN 802.11
1 Introduction
2 WIDS Design—An Attacker’s See
3 DoS Attacks in WLAN 802.11
3.1 Application Layer DoS Attacks
3.2 Transport Layer DoS Attacks
3.3 Network Layer DoS Attacks
3.4 Media Access Control (MAC) Attacks
3.5 Physical Layer DoS Attacks
4 Related Work
5 Proposed System Utilizing Rope, ECM, and DENFIS
5.1 LASSO Based Feature Reduction
5.2 Explainable Deep Learning Model
6 Experimental Design
7 Results and Discussion
8 Conclusion
References
Atmospheric Temperature Prediction Using Ensemble Deep Learning Technique
1 Introduction
2 Related Work
3 Methodology
3.1 Parameters for Model Designing
3.2 Long Short-Term Memory (LSTM)
3.3 Proposed Method (LSTMx)
3.4 Data and Study Area
3.5 Experimental Setup
4 Results and Analysis
4.1 Comparison of Predicted Values with Observed
4.2 Comparison Using Evaluation Metrics
4.3 Comparison of Daily Maximum Temperature
5 Conclusion
References
Reliability Evaluation of Distribution System Based on Interval Type-2 Fuzzy System
1 Introduction
2 Preliminaries on Fuzzy Sets
2.1 Type 1 Fuzzy Sets
2.2 Interval Type 2 Fuzzy Sets
2.3 Fuzzy Arithmetic Operations
3 Membership Function Approximation
3.1 Monte Carlo Simulation
3.2 Sampling Approach
4 Fuzzy-Based Reliability Evaluation
4.1 Fuzzy Reliability Indices
4.2 Reliability Evaluation
4.3 Fuzzy Importance Index
5 Results and Discussions
6 Conclusion
References
Design of Automatic Answer Checker
1 Introduction
2 Literature Review
2.1 Text Preprocessing
2.2 Feature Extraction
2.3 Vector Similarity
2.4 Relational Database Management System (RDBMS)
3 Methodology
3.1 Front-End Module
3.2 Back-End Module
4 Results and Analysis
5 Conclusions
References
Community Detection Using Fire Propagation and Boundary Vertices
1 Introduction
1.1 Representation of Social Networking World
1.2 Communities in Social Network
2 Related Work
2.1 Label Propagation Algorithm
2.2 Quantum Inspired Evolutionary Algorithm
3 Proposed Method
3.1 Communities Generated by Fire Transmission
3.2 Detection of Boundary Vertex and Assign Them to Community
4 Datasets Representation
5 Results and Evaluations
5.1 Performance Analysis Using Modularity Score
5.2 Computation Complexity
6 Conclusions and Future Work
References
Handwritten Devanagari Character Recognition Using CNN with Transfer Learning
1 Introduction
2 Working of the Model
2.1 Dataset Used
2.2 Dataset Preprocessing
2.3 Aspects and Reasons for Using CNNs
3 Experiment
4 Results
5 Conclusions
References
Input Parameter Optimization with Simulated Annealing Algorithm for Predictive HELEN-I Ion Source
1 Introduction
2 Predictive HELEN-I Model
3 Simulated Annealing for HELEN-I
4 Results and Discussion
5 Conclusion
References
LRSS-GAN: Long Residual Paths and Short Skip Connections Generative Adversarial Networks for Domain Adaptation and Image Inpainting
1 Introduction
2 Literature Survey
3 Proposed Architecture
3.1 Long Residual Paths
3.2 Alternated Short Skip Connection
3.3 Complete Architecture of the Proposed GAN
4 Experiments and Results
4.1 Unsupervised Domain Adaptation by Pixel Space Transformation
4.2 Image Inpainting
5 Conclusion
References
Tweets Reporting Abuse Classification Task: TRACT
1 Introduction
2 Related Work
3 TRACT Dataset
3.1 Data Collection
3.2 Data Annotation
3.3 Preprocessing
4 Experimental Setup
4.1 Data
4.2 Hyper Parameters
4.3 Comparative Methods
5 Results and Discussion
6 Conclusion and Future Scope
References
Enhance the Prediction of Air Pollutants Using K-Means++ Advanced Algorithm with Parallel Computing
1 Introduction
2 Related Works
3 Methodology
3.1 Algorithm Description
4 Results and Inferences
5 Conclusion
References
Towards Grammatical Evolution-Based Automated Design of Differential Evolution Algorithm
1 Introduction
2 Background Study
3 Grammatical Evolution-Based DE Design
4 Results and Analysis
5 Conclusion
References
Efficient Fuzzy Similarity-Based Text Classification with SVM and Feature Reduction
1 Introduction
2 Related Work
3 Algorithmic Details of CMMT and FSCMM-FC Systems
4 Experimental Results
4.1 Datasets Used
4.2 Feature Reduction
4.3 Classification Accuracy
4.4 Resultant Support Vectors
5 Performance Analysis
5.1 Performance Analysis of Feature Reduction
5.2 Performance Analysis of Classification Accuracy
6 Conclusion and Future Extensions
References
Plasma Density Prediction for Helicon Negative Hydrogen Plasma Source Using Decision Tree and Random Forest Algorithm
1 Introduction
2 Experimental Scheme and Methods
2.1 Data Set
2.2 The Decision Tree Algorithm
2.3 The Random Forests Algorithm
2.4 Performance Metric
3 Results and Discussion
4 Conclusion
References
Automatic Recognition of ISL Dynamic Signs with Facial Cues
1 Introduction
2 Related Works
3 Proposed Method
3.1 Data Acquisition
3.2 Key Frames Extraction
3.3 Segmentation
3.4 Feature Extraction
3.5 Classification
4 Experiment and Results
5 Comparison with Other Methods
6 Conclusion
References
A Blockchain-Based Multi-layer Infrastructure for Securing Healthcare Data on Cloud
1 Introduction
2 Related Works
3 Blockchain Technology
3.1 Blockchain: A Chain of Blocks
3.2 Blockchain: Architecture Layers
4 Proposed Blockchain-Based Healthcare Infrastructure
5 Potential Blockchain Use Cases in Healthcare Domain
5.1 Longitudinal Health Records
5.2 Interoperability
5.3 Online Patient Access
5.4 Insurance Claims
5.5 Supply Chain Management
6 Conclusion
References
Analysis of Lightweight Cryptography Algorithms for IoT Communication
1 Introduction
2 Related Work
3 Lightweight Cryptography
3.1 Lightweight Stream Cipher
3.2 Lightweight Block Cipher
3.3 Lightweight Symmetric Algorithms
3.4 Lightweight Asymmetric Algorithms
4 Conclusion
References
Disease Prediction from Speech Using Natural Language Processing and Deep Learning Method
1 Introduction
2 Proposed Method
3 Implementation Details
3.1 Competitive Algorithms
4 Dataset Description
4.1 Experimental Results and Discussion
5 Conclusion and Future Work
References
Investigation on Error-Correcting Channel Codes for 5G New Radio
1 Introduction
2 Background of the Channel Code
2.1 Block Code
2.2 Convolution Code
2.3 Trellis Code
2.4 Turbo Code
2.5 LDPC Code
2.6 Polar Code
3 Performance Comparisons Between All the Forward Error Correction Codes
3.1 Complexity
3.2 Throughput
3.3 Area and Energy Efficiency
3.4 Reliability
3.5 Flexibility
3.6 Latency
4 Conclusion
References
Classification of Human Postural Transition and Activity Recognition Using Smartphone Sensor Data
1 Introduction
2 Literature Survey
3 Methodology
3.1 HAPT Dataset
3.2 Signal Preprocessing
3.3 Feature Generation
3.4 Feature Processing
3.5 Testing and Training
3.6 Performance Metrics
3.7 Algorithm for the Proposed Methodology Using HAPT Dataset
4 Experimental Results
4.1 Human Activity and Postural Transition (HAPT) Recognition Using Logistic Regression (LR) Classifier
4.2 Human Activity and Postural Transition (HAPT) Recognition Using Voting Classifier (VC)
4.3 Performance Analysis
5 Conclusion and Future Scope
References
Color Image Watermarking Technique Using Principal Component in RDWT Domain
1 Introduction
2 Proposed Scheme
2.1 Embedding Scheme
2.2 Extraction Scheme
3 Experimental Results
3.1 Imperceptibility Measurement
3.2 Robustness Measurement
4 Conclusion and Future Work
References
Query Auto-Completion Using Graphs
1 Introduction
2 Literature Review
3 Proposed Approach
3.1 Proposed Architecture
3.2 Algorithm of the Proposed Approach
4 Experimental Setup
4.1 Research Questions
4.2 Dataset
4.3 Evaluation Metric and Baseline
5 Results and Discussions
5.1 Addressing the Research Question RQ1: Can the Semantic Similarity Feature Score Improve the Performance of QAC System When Data is Represented as Graph?
5.2 Addressing the Research Question RQ2: Will ROCAUC Score Have an Effect on the Probability of Prediction in Query Auto-Completion?
6 Conclusion
References
Comparative Analysis of Load Flows and Voltage-Dependent Load Modeling Methods of Distribution Networks
1 Introduction
2 Load Flow Analysis and Load Modeling
2.1 Distribution System Load Flow Techniques
2.2 Voltage-Dependent Load Modeling
2.3 Voltage Stability Analysis
3 Performance Analysis of Radial Distribution Networks
3.1 IEEE 15-Bus Distribution Network
3.2 IEEE 33-Bus Distribution Network
3.3 IEEE 69-Bus Radial Distribution Network
3.4 IEEE 85-Bus Radial Distribution Network
4 Optimization Techniques for Distribution Generation Allocation
4.1 Generalized Procedure of an Optimized Techniques
5 Conclusion
References
A Framework for Disaster Monitoring Using Fog Computing
1 Introduction
2 Literature Review
2.1 Safety Check and Community Help in Facebook
3 Architecture/Framework
3.1 Sensing Layer
3.2 Crowdsourcing Layer
3.3 DMFBC Fog Layer
3.4 Cloud Computing Layer
4 Result Analysis
5 Conclusion
References
Attitude Control in Unmanned Aerial Vehicles Using Reinforcement Learning—A Survey
1 Introduction
2 Background Work
2.1 Quadrotors
2.2 Fixed Wing UAVs
3 Attitude Control in Quadrotors
3.1 Isolated Attitude Control Techniques
3.2 Attitude Control Along with Navigation Control
4 Attitude Control in Fixed Wing UAVs
4.1 Attitude Control Using Proximal Policy Optimization (PPO)
4.2 Feedback Linearization and Adaptive Control
5 Attitude Control in Hybrid System UAVs
5.1 Tiltrotors Position Tracking Controller Design
6 Attitude Control in Quadcopters Swarms
6.1 Synchronize Attitude Control in Multiple Quadcopters
7 Conclusions
References
Self-Supervised Learning Approaches for Traffic Engineering in Software-Defined Networks
1 Introduction
2 Background and Related Work
2.1 Traffic Engineering
2.2 Traffic Classification
3 Efficient Classification and Scheduling Approach
4 Results and Analysis
4.1 Dataset
4.2 Analysis
5 Conclusion
References
Passive Motion Tracking for Assisting Augmented Scenarios
1 Introduction
2 MPU6050 and MEMS Architecture
3 I2C Protocol
4 Interfacing Logic and Design
4.1 Interfacing
4.2 Data Processing and Evaluation
5 Implementation in Virtual 3-D Space
6 Results
6.1 Observation Table
6.2 Graphs
6.3 Blender
7 Conclusion
References
Black Hole—White Hole Algorithm for Dynamic Optimization of Chemically Reacting Systems
1 Introduction
2 Algorithm
2.1 Black Hole Algorithm
2.2 Proposed BH + WH Algorithm
3 Description of Test Problems
4 Results
5 Discussion
6 Conclusion
References
Automated Cooperative Robot for Screwing Application
1 Introduction
1.1 Cooperative Robot
1.2 Assembly Method
2 System Description
2.1 Hardware Presentation: The System’s Architecture
2.2 Design of Job
2.3 Software Description—System Control
3 Implementation Process
3.1 Interfacing Siemens TIA Portal to S7-1200 Controller
3.2 PLC Program in SIEMENS TIA Portal
3.3 Pendant Program for GP-12 and MH5LS Robot
3.4 Trajectory of the Cooperative Robots
4 Results and Discussion
5 Conclusion
References
Flexible Bolus Insulin Intelligent Recommender System for Diabetes Mellitus Using Mutated Kalman Filtering Techniques
1 Introduction
2 Related Work and Background
2.1 Attributes Information
3 Bolus Insulin Intelligent Recommender System
3.1 Mutated Kalman Filter Modeling
3.2 Processing and Observed Models
3.3 Iterative Estimation of Q and R
3.4 Algorithm for Bolus Insulin Intelligent Recommender System
3.5 Suggestion for Non-continuous Glucose Monitoring (CGM) Users
4 Result
5 Conclusion
References
Deep Learning Technique for Predicting Optimal `Organ at Risk' Dose Distribution for Brain Tumor Patients
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Dataset
3.2 Image Preprocessing
3.3 U-network
3.4 Loss Functions
3.5 Optimizer
4 Implementation
5 Results
5.1 Model Trained on Mean Square Error
5.2 Model Trained on Dice Coefficient Similarity
6 Conclusion and Future Work
References
A Fractional Model to Study the Diffusion of Cytosolic Calcium
1 Introduction
2 Mathematical Preliminaries
3 Mathematical Formulation of the Model
4 Numerical Approximation of the Fractal–Fractional Equation of the Model
5 Analysis of Stability
6 Result and Discussion
7 Conclusion
References
Signal Processing Techniques for Coherence Analysis Between ECG and EEG Signals with a Case Study
1 Introduction
2 Coherence Analysis Techniques
2.1 Mathematical Techniques
2.2 Experimental Techniques
3 Signal Processing-Based Coherence Analysis—Case Study
4 Results
4.1 MSC Analysis
4.2 PC Analysis
5 Conclusions
References
A Review on Dimensionality Reduction in Fuzzy- and SVM-Based Text Classification Strategies
1 Introduction
2 An Overview of Existing Text Classification Strategies
3 Comparative Studies on DR and Classification Results of Existing Strategies
3.1 Study I: A Review Based upon Various DR-Based Key Aspects
3.2 Study II: A Review on Datasets Used and Classification Results
4 Analytical Results with Studies I and II
4.1 DR-Based Findings from Study I
4.2 Classifier Usage-Based Findings from Study II
5 Conclusion and Future Scope
References
Adaptive Fuzzy Algorithm to Control the Pump Inlet Pressure
1 Introduction
2 Proposed Method
2.1 Design of Fuzzy Logic Controller
2.2 Fuzzy Set—Input Characteristics
2.3 Fuzzy Set Output Characteristics
2.4 Rule Editor
3 Fuzzy System—Implementation
4 Flow Chart for Proposed Model
4.1 Valve
4.2 Tank
4.3 Pressure
5 Experimental Results and Analysis
6 Conclusion
References
An Automated Citrus Disease Detection System Using Hybrid Feature Descriptor
1 Introduction
2 Stages of Machine Learning Model for Disease Detection
3 The Proposed Model
4 Experimental Setup
5 Results and Discussions
6 Conclusion
References
Clustering High-Dimensional Datasets Using Quantum Social Spider Optimization with DWT
1 Introduction
2 Proposed QSSO Algorithm for Clustering
2.1 Quantum SSO Algorithm
2.2 Objective Function for Clustering
2.3 Feature Reduction with Discrete Wavelet Transform
2.4 Algorithms Implementation
3 Simulation
3.1 Datasets
3.2 Cluster Validation Techniques
4 Results and Discussions
5 Conclusion
References
Intuitive Control of Three Omni-Wheel-Based Mobile Platforms Using Leap Motion
1 Introduction
2 Kinematic Model
3 V-REP Simulation
4 Physical Prototype
5 Integration with Bluetooth Mobile Phone Application
6 Integration with Leap Motion Device
7 User Survey
8 Conclusion
References
Effective Teaching of Homogenous Transformations and Robot Simulation Using Web Technologies
1 Introduction
2 Web Technologies
2.1 WebGL
2.2 Transformation Using Three.Js
2.3 Importing CAD Models of Objects as STL Files and Their Transformation
3 Development of WebHTM
4 Development of WebVRM
4.1 Denavit-Hartenberg (DH) Parameters
4.2 WebVRM
5 Inverse Kinematics
5.1 Inverse Kinematics in WebVRM
6 Conclusion
References
Abnormal Event Detection in Public Places by Deep Learning Methods
1 Introduction
2 Related Work
3 Proposed Method
3.1 Convolutional LSTM Autoencoders
3.2 Convolutional Autoencoders (CA)
3.3 One-Class SVM
4 Results and Discussion
5 Conclusions
6 Future Work
References
Multipurpose Advanced Assistance Smart Device for Patient Care with Intuitive Intricate Control
1 Introduction
2 Proposed System
2.1 Flex Sensor
2.2 Proposed Model
3 Working Model of the System
3.1 Working of Printed Sensor
3.2 Transmitter Module
3.3 Working of Receiver Module
4 Results and Discussion
5 Conclusion
References
Assessing the Role of Age, Population Density, Temperature and Humidity in the Outbreak of COVID-19 Pandemic in Ethiopia
1 Introduction
2 Data and Methods
2.1 Time Frame
2.2 Epidemiological Data
2.3 Weather Data
2.4 Population Data
2.5 Data Analysis: Spearman’s Rank Correlation, Pearson Correlation and Regression Analysis
3 Result and Discussion
3.1 Correlation Between COVID-19 Pandemic Cases and Temperature
3.2 Correlation Between COVID-19 Pandemic Cases and Humidity
3.3 Correlation Between COVID-19 Pandemic Cases and (Temperature/humidity) Ratio
3.4 Correlation Between COVID-19 Pandemic Cases and Population Density
3.5 Correlation Between COVID-19 Pandemic Cases and Age
4 Conclusion
References
Soft Computing Tool for Prediction of Safe Bearing Capacity of Soil
1 Introduction
1.1 Soft Computing Tool Genetic Programming
2 Study Area and Data
3 Model Formulation and Assessment
4 Results and Discussions
5 Conclusions
References
Smart Saline Monitoring System for Automatic Control Flow Detection and Alertness Using IoT Application
1 Introduction
1.1 Related Works
2 Proposed System
3 Hardware and Software
3.1 Node-MCU
3.2 IR Sensor
3.3 Buzzer
3.4 Indicators Set
3.5 Electronic Valve
3.6 Arduino IDE
3.7 BLYNK-IoT Platform
4 Project Setup
5 Results
6 Conclusion
References
Comparative Study on AutoML Approach for Diabetic Retinopathy Diagnosis
1 Introduction
2 Background and Related Work
3 Methodology
3.1 Random Wired Architecture
3.2 NaSNet Architecture
3.3 Data and Pre-processing
3.4 Training
4 Results
5 Conclusion
References
7-DOF Bionic Arm: Mapping Between data gloves and EMG arm band and Control by EMG arm band
1 Introduction
2 Design and Fabrication
2.1 Design of Wrist and Elbow
2.2 Final Assembly
3 Controlling the Bionic Arm with data gloves and Flex Sensors
4 Controlling the Bionic Arm with EMG arm band
4.1 Mapping Data of EMG arm band to that of data gloves and Two Flex Sensors
4.2 Control of Bionic Arm Using EMG arm band
4.3 Risk and Performance Analysis
5 Conclusion
References
Hopping Spider Monkey Optimization
1 Introduction
2 Spider Monkey Optimization Algorithm
2.1 Initialization Step
2.2 Local Leader (LLdr) Step
2.3 Global Leader (GLdr) Step
2.4 Learning Step of Global Leader
2.5 Learning Step of Local Leader
2.6 Decision Step for Local Leader
2.7 Decision Step for Global Leader
3 Hopping Spider Monkey Optimization (HSMO) Algorithm
4 Test Problems Under Consideration
4.1 Experimental Results
5 Conclusion
References
Author Index

Advances in Intelligent Systems and Computing 1335

Harish Sharma · Mukesh Saraswat · Anupam Yadav · Joong Hoon Kim · Jagdish Chand Bansal   Editors

Congress on Intelligent Systems Proceedings of CIS 2020, Volume 2

Advances in Intelligent Systems and Computing Volume 1335

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, perception and vision, DNA and immune based systems, self-organizing and adaptive systems, e-learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Indexed by DBLP, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST). All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/11156

Harish Sharma · Mukesh Saraswat · Anupam Yadav · Joong Hoon Kim · Jagdish Chand Bansal Editors

Congress on Intelligent Systems Proceedings of CIS 2020, Volume 2

Editors Harish Sharma Department of Computer Science and Engineering Rajasthan Technical University Kota, Rajasthan, India

Mukesh Saraswat Department of Computer Science and Engineering Jaypee Institute of Information Technology Noida, Uttar Pradesh, India

Anupam Yadav National Institute of Technology Jalandhar, Punjab, India

Joong Hoon Kim Korea University Seoul, Korea (Republic of)

Jagdish Chand Bansal South Asian University New Delhi, Delhi, India

ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-981-33-6983-2  ISBN 978-981-33-6984-9 (eBook)
https://doi.org/10.1007/978-981-33-6984-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The Congress on Intelligent Systems (CIS 2020) is a maiden attempt to bring researchers, academicians, industry, and government personnel together to share and discuss various aspects of intelligent systems. It was organized virtually during September 05–06, 2020. The congress is a brainchild of the Soft Computing Research Society, a non-profit society. The theme of the congress was intelligent systems, machine vision, robotics, and computational intelligence. The conference featured eminent keynote speakers from academia and industry from all over the world, along with presentations of the accepted peer-reviewed articles. This volume is a curated collection of the articles presented during the conference. The book focuses on current and recent developments in intelligent systems for image processing, guidance systems, prediction mechanisms, optimization, and ensemble deep learning approaches. It also collects good articles on themes such as data classification models, multi-objective optimizers, speech emotion recognition systems, and signal processing techniques. In conclusion, the edited book comprises papers on diverse aspects of intelligent systems, including bionic arms, diabetic retinopathy diagnosis, robot simulations, and fuzzy and SVM classifiers.

New Delhi, India
November 2020

Harish Sharma Mukesh Saraswat Anupam Yadav Joong Hoon Kim Jagdish Chand Bansal


Contents

Model-Based Data Collection Systems on Fog Platforms . . . 1
N. A. Zhukova, A. I. Vodyaho, S. A. Abbas, and E. L. Evnevich

Notification System for Text Detection on Document Images Using Graphical User Interface (GUI) . . . 15
Wan Azani Mustafa, Mohd Hairy Aziz, Syed Zulkarnain Syed Idrus, Mohd Aminudin Jamlos, Wan Khairunizam, and Mohamad Nur Khairul Hafizi Rohani

Estimation of Road Damage for Earthquake Evacuation Guidance System Linked with SNS . . . 25
Yujiro Mihara, Rin Hirakawa, Hideaki Kawano, Kenichi Nakashi, and Yoshihisa Nakatoh

Analysis of the Messages from Social Network for Emergency Cases Detection . . . 35
Olga Tsarenko, Yana Bekeneva, and Evgenia Novikova

Intelligent Simulation of Competitive Behavior in a Business System . . . 49
Dmytro Chumachenko, Sergiy Yakovlev, Ievgen Meniailov, Kseniia Bazilevych, and Halyna Padalko

Minimizing the Subset of Features on BDHS Dataset to Improve Prediction on Pregnancy Termination . . . 61
Faisal Ahmed, Shahana Shultana, Afrida Yasmin, and Junnatul Ferdouse Prome

PUNER-Parsi ULMFiT for Named-Entity Recognition in Persian Texts . . . 75
F. Balouchzahi and H. L. Shashirekha

A Coverage and Connectivity of WSN in 3D Surface Using Sailfish Optimizer . . . 89
Thi-Kien Dao, Shi-Jie Jiang, Xiao-Rong Ji, Truong-Giang Ngo, Trong-The Nguyen, and Huu-Trung Tran

A Classification Model for Software Bug Prediction Based on Ensemble Deep Learning Approach Boosted with SMOTE Technique . . . 99
Thaer Thaher and Faisal Khamayseh

Modeling the Relationship Between Distance and Received Signal Strength Indicator of the Wi-Fi Over the Sea to Extract Data in Situ from a Marine Monitoring Buoy . . . 115
Miguel Angel Polo Castañeda, Constanza Ricaurte Villota, and Danay Vanessa Pardo Bermúdez

Data Classification Model for Fog-Enabled Mobile IoT Systems . . . 125
Aung Myo Thaw, Nataly Zhukova, Tin Tun Aung, and Vladimir Chernokulsky

Multi-Objective Teaching–Learning-Based Optimization for Vehicle Fuel Saving Consumption . . . 139
Trong-The Nguyen, Hong-Jiang Wang, Rong Hu, Truong-Giang Ngo, Thi-Xuan-Huong Nguyen, and Thi-Kien Dao

A Tripod-Type Walking Assistance for the Stroke Patient . . . 151
P. Kishore, Anjan Kumar Dash, Akhil Pragallapati, Dheerkesh Mugunthan, Aniirudh Ramesh, and K. Dilip Kumar

Data Routing with Load Balancing Using Ant Colony Optimization Algorithm . . . 161
S. Manjula, P. Manikandan, G. Philip Mathew, and V. Prabanchan

Speech Emotion Recognition Using Machine Learning Techniques . . . 169
Sreeja Sasidharan Rajeswari, G. Gopakumar, and Manjusha Nair

Feature-Based AD Assessment Using ML . . . 179
Siddheshwari Dutt Mishra and Maitreyee Dutta

Show-Based Logical Profound Learning Demonstrates Utilizing ECM Fuzzy Deduction Rules in DDoS Assaults for WLAN 802.11 . . . 189
D. Sudaroli Vijayakumar and Sannasi Ganapathy

Atmospheric Temperature Prediction Using Ensemble Deep Learning Technique . . . 209
Ashapurna Marndi and G. K. Patra

Reliability Evaluation of Distribution System Based on Interval Type-2 Fuzzy System . . . 223
Galiveeti Hemakumar Reddy, Akanksha Kedia, Shruti Ramakrishna Gunaga, Raju More, Sadhan Gope, Arup Kumar Goswami, and Nalin B. Dev Choudhury

Design of Automatic Answer Checker . . . 237
Ekta Tyagi, Deeksha, and Lokesh Chouhan

Community Detection Using Fire Propagation and Boundary Vertices . . . 253
Sanjay Kumar and Rahul Hanot

Handwritten Devanagari Character Recognition Using CNN with Transfer Learning . . . 269
Gaurav Singh Bhati and Akhil Ranjan Garg

Input Parameter Optimization with Simulated Annealing Algorithm for Predictive HELEN-I Ion Source . . . 281
Vipin Shukla, Vivek Pandya, Mainak Bandyopadhyay, and Arun Pandey

LRSS-GAN: Long Residual Paths and Short Skip Connections Generative Adversarial Networks for Domain Adaptation and Image Inpainting . . . 293
Shushant Kumar and K. Chandrasekaran

Tweets Reporting Abuse Classification Task: TRACT . . . 305
Saichethan Miriyala Reddy, Kanishk Tyagi, Abhay Anand Tripathi, Ambika Pawar, and Ketan Kotecha

Enhance the Prediction of Air Pollutants Using K-Means++ Advanced Algorithm with Parallel Computing . . . 315
Chetan Shetty, M. Rosemary Binoy, T. S. Swetha Sree, G. Ujwala, V. H. Geetha, S. Seema, and B. J. Sowmya

Towards Grammatical Evolution-Based Automated Design of Differential Evolution Algorithm . . . 329
M. T. Indu and C. Shunmuga Velayutham

Efficient Fuzzy Similarity-Based Text Classification with SVM and Feature Reduction . . . 341
Shalini Puri

Plasma Density Prediction for Helicon Negative Hydrogen Plasma Source Using Decision Tree and Random Forest Algorithm . . . 357
Vipin Shukla, Vivek Pandya, Mainak Bandyopadhyay, and Arun Pandey

Automatic Recognition of ISL Dynamic Signs with Facial Cues . . . 369
C. J. Sruthi, Karan Soni, and A. Lijiya

A Blockchain-Based Multi-layer Infrastructure for Securing Healthcare Data on Cloud . . . 383
Roshan Jameel, Harleen Kaur, and M. Afshar Alam

Analysis of Lightweight Cryptography Algorithms for IoT Communication . . . 397
Navdeep Lata and Raman Kumar

Disease Prediction from Speech Using Natural Language Processing and Deep Learning Method . . . 407
Rahul Kumar, Sushant Pradhan, Tejaswi Rebaka, and Jay Prakash

Investigation on Error-Correcting Channel Codes for 5G New Radio . . . 415
Sonu Lal, Roopam Gupta, and Rakesh Kumar Arya

Classification of Human Postural Transition and Activity Recognition Using Smartphone Sensor Data . . . 431
Priyanka Kolluri, Pranaya Chilamkuri, Choppakatla NagaDeepa, and V. Padmaja

Color Image Watermarking Technique Using Principal Component in RDWT Domain . . . 443
Roop Singh, Alaknanda Ashok, and Mukesh Saraswat

Query Auto-Completion Using Graphs . . . 453
Vidya S. Dandagi and Nandini Sidnal

Comparative Analysis of Load Flows and Voltage-Dependent Load Modeling Methods of Distribution Networks . . . 467
U. Kamal Kumar and Varaprasad Janamala

A Framework for Disaster Monitoring Using Fog Computing . . . 485
T. Raja Sree

Attitude Control in Unmanned Aerial Vehicles Using Reinforcement Learning—A Survey . . . 495
Varun Agarwal and Rajiv Ranjan Tewari

Self-Supervised Learning Approaches for Traffic Engineering in Software-Defined Networks . . . 511
Deva Priya Isravel, Salaja Silas, and Elijah Blessing Rajsingh

Passive Motion Tracking for Assisting Augmented Scenarios . . . 523
Pranay Pratap Singh and Hitesh Omprakash Sharma

Black Hole—White Hole Algorithm for Dynamic Optimization of Chemically Reacting Systems . . . 535
Prasad Ovhal and Jayaraman K. Valadi

Automated Cooperative Robot for Screwing Application . . . 547
K. Vigneshwaran, V. R. Jothi Sivam, and M. A. Ganesh

Flexible Bolus Insulin Intelligent Recommender System for Diabetes Mellitus Using Mutated Kalman Filtering Techniques . . . 565
P. Nagaraj, V. Muneeswaran, R. Sabik Ali, T. Sangeeth Kumar, A. L. Someshwara, and J. Pranav

Deep Learning Technique for Predicting Optimal ‘Organ at Risk’ Dose Distribution for Brain Tumor Patients . . . 575
Ashish Kumar, P. C. Lekshmy, Niyas Puzhakkal, and K. A. Abdul Nazeer

A Fractional Model to Study the Diffusion of Cytosolic Calcium . . . 585
Kritika, Ritu Agarwal, and Sunil Dutt Purohit

Signal Processing Techniques for Coherence Analysis Between ECG and EEG Signals with a Case Study . . . 599
Rajesh Polepogu and Naveen Kumar Vaegae

A Review on Dimensionality Reduction in Fuzzy- and SVM-Based Text Classification Strategies . . . 613
Shalini Puri

Adaptive Fuzzy Algorithm to Control the Pump Inlet Pressure . . . 633
P. Sivakumar, R. S. Sandhya Devi, A. Angamuthu, B. Vinoth Kumar, S. K. AnuShyni, and M. JeenBritto

An Automated Citrus Disease Detection System Using Hybrid Feature Descriptor . . . 647
Bobbinpreet Kaur, Tripti Sharma, Bhawna Goyal, and Ayush Dogra

Clustering High-Dimensional Datasets Using Quantum Social Spider Optimization with DWT . . . 659
Jetti B. Narayana, Satyasai Jagannath Nanda, and Urvashi Prakash Shukla

Intuitive Control of Three Omni-Wheel-Based Mobile Platforms Using Leap Motion . . . 673
Devasena Pasupuleti, Dimple Dannana, Raghuveer Maddi, Uday Manne, and Rajeevlochana G. Chittawadigi

Effective Teaching of Homogenous Transformations and Robot Simulation Using Web Technologies . . . 687
Yashaswi S. Kuruganti, Apparaju S. D. Ganesh, D. Ivan Daniels, and Rajeevlochana G. Chittawadigi

Abnormal Event Detection in Public Places by Deep Learning Methods . . . 701
Mattaparti Satya Bhargavi, J. V. Bibal Benifa, and Rishav Jaiswal

Multipurpose Advanced Assistance Smart Device for Patient Care with Intuitive Intricate Control . . . 713
P. Ravi Sankar, A. Venkata Ratnam, K. Jaya Lakshmi, Akshada Muneshwar, and K. Prakash

Assessing the Role of Age, Population Density, Temperature and Humidity in the Outbreak of COVID-19 Pandemic in Ethiopia . . . 725
Amit Pandey, Rajesh Kumar, Deepak Sinwar, Tesfaye Tadele, and Linesh Raja

Soft Computing Tool for Prediction of Safe Bearing Capacity of Soil . . . 735
Narhari D. Chaudhari, Neha N. Chaudhari, and Gaurav K. Bhamare

Smart Saline Monitoring System for Automatic Control Flow Detection and Alertness Using IoT Application . . . 745
D. Ramesh Reddy, Srishti Prakash, Andukuri Dinakar, Sravan Kumar, and Prakash Kodali

Comparative Study on AutoML Approach for Diabetic Retinopathy Diagnosis . . . 759
V. K. Harikrishnan, Harshal Deore, Pavan Raju, and Akshat Agarwal

7-DOF Bionic Arm: Mapping Between data gloves and EMG arm band and Control by EMG arm band . . . 773
D. B. Suriya, K. Venkat, Aparajit Balaji, and Anjan Kumar Dash

Hopping Spider Monkey Optimization . . . 783
Meghna Singh, Nirmala Sharma, and Harish Sharma

Author Index . . . 799

About the Editors

Harish Sharma is Associate Professor at Rajasthan Technical University, Kota, in the Department of Computer Science & Engineering. He has worked at Vardhaman Mahaveer Open University, Kota, and Government Engineering College, Jhalawar. He received his B.Tech. and M.Tech. degrees in Computer Engineering from Government Engineering College, Kota, and Rajasthan Technical University, Kota, in 2003 and 2009, respectively. He obtained his Ph.D. from ABV-Indian Institute of Information Technology and Management, Gwalior, India. He is the secretary and one of the founder members of the Soft Computing Research Society of India. He is a lifetime member of the Cryptology Research Society of India, ISI, Kolkata. He is Associate Editor of the “International Journal of Swarm Intelligence (IJSI)” published by Inderscience. He has also edited special issues of many reputed journals like “Memetic Computing”, “Journal of Experimental and Theoretical Artificial Intelligence”, and “Evolutionary Intelligence”. His primary area of interest is nature-inspired optimization techniques. He has contributed to more than 65 papers published in various international journals and conferences.

Dr. Mukesh Saraswat is Associate Professor at Jaypee Institute of Information Technology, Noida, India. Dr. Saraswat obtained his Ph.D. in Computer Science and Engineering from ABV-IIITM Gwalior, India. He has more than 18 years of teaching and research experience. He has guided 2 Ph.D. students and more than 50 M.Tech. and B.Tech. dissertations, and is presently guiding 5 Ph.D. students. He has published more than 40 journal and conference papers in the areas of image processing, pattern recognition, data mining, and soft computing. He was part of the successfully completed DRDE-funded project on image analysis and is currently running two projects, funded by SERB-DST (New Delhi) on histopathological image analysis and by the Collaborative Research Scheme (CRS) under TEQIP III (RTU-ATU) on Smile. He has been an active member of many organizing committees of various conferences and workshops. He was also Guest Editor of the Journal of Swarm Intelligence. He is an active member of the IEEE, ACM, and CSI professional bodies. His research areas include image processing, pattern recognition, mining, and soft computing.


Dr. Anupam Yadav is Assistant Professor, Department of Mathematics, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, India. His research areas include numerical optimization, soft computing, and artificial intelligence; he has more than ten years of research experience in the areas of soft computing and optimization. Dr. Yadav obtained his Ph.D. in soft computing from the Indian Institute of Technology Roorkee and worked as Research Professor at Korea University. He has published more than twenty-five research articles in journals of international repute and more than fifteen research articles in conference proceedings. Dr. Yadav has authored a textbook entitled “An Introduction to Neural Network Methods for Differential Equations”. He has edited three books published in the AISC, Springer series. Dr. Yadav was General Chair, Convener, and a member of the steering committee of several international conferences. He is Associate Editor of the Journal of Experimental and Theoretical Artificial Intelligence. Dr. Yadav is a member of various research societies.

Prof. Joong Hoon Kim, Dean of the Engineering College of Korea University, obtained his Ph.D. degree from the University of Texas at Austin in 1992 with the thesis title “Optimal replacement/rehabilitation model for water distribution systems”. Prof. Kim’s major areas of interest include the optimal design and management of water distribution systems, the application of optimization techniques to various engineering problems, and the development and application of evolutionary algorithms. His paper which introduced the Harmony Search algorithm has been cited more than 5000 times according to Google Scholar. He has been on the faculty of the School of Civil, Environmental and Architectural Engineering at Korea University since 1993. He has hosted international conferences including APHW 2013, ICHSA 2014 & 2015, and HIC 2016 and has given keynote speeches at many international conferences including GCIS 2013, SocProS 2014 & 2015, SWGIC 2017, and RTORS 2017. He has been a member of the National Academy of Engineering of Korea since 2017.

Dr. Jagdish Chand Bansal is Associate Professor at South Asian University, New Delhi, and Visiting Faculty in Maths and Computer Science, Liverpool Hope University, UK. Dr. Bansal obtained his Ph.D. in Mathematics from IIT Roorkee. Before joining SAU New Delhi, he worked as Assistant Professor at ABV-Indian Institute of Information Technology and Management Gwalior and BITS Pilani. He is Series Editor of the book series Algorithms for Intelligent Systems (AIS) published by Springer. He is Editor-in-Chief of the International Journal of Swarm Intelligence (IJSI) published by Inderscience. He is also Associate Editor of IEEE ACCESS published by IEEE. He is a steering committee member and General Chair of the annual conference series SocProS. He is the general secretary of the Soft Computing Research Society (SCRS). His primary area of interest is swarm intelligence and nature-inspired optimization techniques. Recently, he proposed a fission–fusion social structure-based optimization algorithm, Spider Monkey Optimization (SMO), which is being applied to various problems from the engineering domain. He has published more than 70 research papers in various international journals/conferences. He has supervised Ph.D. theses from ABV-IIITM Gwalior and SAU New Delhi. He has also received Gold Medals at the UG and PG levels.

Model-Based Data Collection Systems on Fog Platforms

N. A. Zhukova, A. I. Vodyaho, S. A. Abbas, and E. L. Evnevich

N. A. Zhukova · E. L. Evnevich
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPCRAS), St. Petersburg, Russian Federation

A. I. Vodyaho · S. A. Abbas (B)
Saint Petersburg Electrotechnical University “LETI”, St. Petersburg, Russian Federation

Abstract The current state of technology development is characterized by the increased complexity and enhanced intelligence of the anthropogenic systems being created, by the permanent expansion of the application scope of information technologies, and by the emergence of new paradigms for the development of software-intensive systems, such as cyber-physical systems, the Internet of things, and cloud and fog platforms. Modern software-intensive systems often have a dynamic structure and implement complex behavior. Data collection in such systems is a non-trivial task. The paper proposes a model approach to building data collection systems in multilevel cyber-physical systems realized on fog computing platforms. The models are proposed to be built in terms of knowledge.

Keywords Data collection systems · Knowledge model · Fog platforms

1 Introduction

Impressive progress in microelectronics, telecommunications, and software engineering offers novel opportunities to the developers of software-intensive systems (SwIS) [1], in particular anthropogenic ones. It allows attaining higher levels of complexity in the systems under development, resulting in an increase in the number of elements and subsystems, in the number of hierarchy levels, in a more complex relationship structure and behavior of the system and its elements, and hence in a higher level of system intelligence. Cognitive systems capable of implementing behavioral patterns of reasoning are becoming widespread in practical applications. The following specifics of the modern stage of SwIS development could be outlined:

1

2

N. A. Zhukova et al.

• SwIS application scope expansion, in particular by virtue of using systems of low cost category; • Growth of the presence of information component in not purely information systems; • Emergence of new paradigms, approaches and platforms oriented at intelligent SwIS development. New SwIS concepts are often based on the already existing principles and technologies and overlap and complement each other to a considerable extent. For example, cyber-physical systems (CPS) include together with software components a wide range of elements of a very different nature such as technological equipment, transport systems, natural phenomena and biological systems together with the people who are responsible for the extraction, accumulation and use of knowledge. In this case, technologies integration is being focused on [2]. Another characteristic example is the paradigm of ambient intelligence (AmI), according to which electronic environment could be sensitive and responsive to the presence of people [3]. The main idea of AmI is to let people use intuitively the information and intelligence hidden in a network that connects numerous devices of various purposes in human everyday activities. It should be noted that the more those devices become compact, cheap and interconnected, the more their technological structure becomes invisible to the user. Only an intuitive interface is available to him. This paradigm and its elements are already actively used in many CPS, especially in the hospitals, public transport, industrial facilities, etc. During system creation, the requirements of minimal costs play significant role. AmI paradigm should be considered as an integration one first of all. For its practical implementation, a number of services must be created. AmI approach integrates several well-known ones: contex- aware systems (CAS), pervasive (ubiquitous) computing, profiling systems, agile computing systems and cognitive computing systems. According to the developers of the paradigm, AmI Systems (AmIS) should be created on the basis of interconnected built-in CPS, the totality of the latter should function as CAS, the systems under development should be adaptive, implement proactive (anticipatory) behavior and be cognitive, i.e., be capable of implementing elements of intelligent behavior and possess learning ability. The listed above properties being considered as non-functional requirements make quite obvious that AmIS could be defined as context-oriented, adaptive (cognitive) CPS [3]. On closer examination, it turns out that the above concepts actually describe and consider the class of SwIS from various points of view, and various subclasses of SwIS could be considered as profiles. The idea of AmIS is quite attractive, but a question arises how to implement the above functionality which requires rather large computing resources under strict restrictions imposed on the cost of real systems development on the basis of the concept. Hierarchical information processing systems seems to be an efficient solution to the problem.

Model-Based Data Collection Systems on Fog Platforms

3

2 CPS Platforms Modern stage of information technologies development is characterized by the emergence of new technologies and platforms that are created on the basis of those technologies, in particular, Internet of things (IoT), industrial Internet of things (IIoT), Internet of everything (IoE) [4]. Cloud and fog computing demonstrate the growth of their use: CPS of various purposes are mainly being developed on those platforms. The most popular is the IoT platform [4, 5]. It should be noted that IoT is a multidimensional concept. IoT could be considered both as a platform and as a multilevel application. In this paper IoT is represented as the platform the application systems are based on. Fog Systems (FS) [6] could serve some reference platform [7] being appropriate for AmIS systems to be implemented on. Modern FS are multilevel systems with dynamic structure and adaptive behavior, composed of elements of different physical nature. In addition to vertical connections of all levels there are numerous horizontal connections [6, 7]. The typical FS structure is shown in Fig. 1. The upper level—the cloud layer—is built on the basis of public, private or hybrid clouds [8] and is usually completely virtual and automatically scalable resource presented to the users as a wide but often fixed set of services. Individual applications are built as ecosystems on the basis of one service field. The service access time is determined by the capacity of the communication channels. The ability of the user to control the cloud layer is rather limited. The fog level includes a lot of interconnected controllers the developer has total access to. At this level, virtualization mechanisms such as cloudlets [6] could be used to a limited extent, but more often the user works with physical resources. Two contradictory trends exist in this level at the moment: On the one hand, lower costs and higher performance provide powerful controller and telecommunications equipment at the user’s disposal, and on the other hand, FS application growth is mainly takes place due to the use of low-price systems. In this context, there is a kind of agreement to subdivide this level into two separate levels: fog layer and mind layer. Fog layer with its mechanisms of resource virtualization is completely available to the user and mind layer deals with physical resources. At the lowest (sensor) level, the sensors, actuators and communication tools are located, and the built-in controllers and communication tools used at this level are becoming more powerful and make it possible to realize efficient horizontal intralevel interaction.

Fig. 1 Typical FS structure (layers, top to bottom: cloud layer — public, private and hybrid clouds; fog layer; mind layer; sensor layer)

Fig. 2 AmIS as a set of FS applications (separate CPS → AmIS → systems of systems consisting of AmIS)

Thus, the task of developing applications in a fog environment could be considered, in a certain sense, as an optimization task of deploying modules on elements belonging to the three levels listed above.

3 AmIS as a Set of FS Applications FS are platforms on which applications — or the information component of a certain CPS — are implemented. Depending on the complexity of applications, at least three layers can be distinguished: individual applications, domain-oriented AmIS, and systems consisting of many systems, including AmIS (Fig. 2).

4 Typical Tasks of Data Collection in Software-Intensive Systems Efficient data collection procedures prove to be especially needed for maintaining context orientation, pervasive (ubiquitous) computing and cognitive features in the dynamic-structure, behavior-adaptive environment of the CPS under observation. Figure 3 represents an aggregated (large-scale) classification of data collection tasks. The first case refers to CAS, the second one concerns an adaptive system and, in the future, a system capable of learning (a cognitive one). The third case deals with a built-in (usually distributed) control system, i.e., a CPS. In this case, the executive subsystem is a consumer of the collected information, and data could be collected either about system elements (status, attribute values) or about ongoing business processes (BP). Thus, there are three main paradigms of data collection system (DCS) creation in a fog environment: DCS as a system for collecting and displaying data in the desired form, that is, a system for perception transformation; DCS as a subsystem of the control system; DCS as a CAS.


Fig. 3 Aggregated classification of data collection tasks (data collection in the interests of finite users; data collection on the external environment; data collection in the interests of system control — either on some object of control or on the control system itself)

5 Proposed Approach to Data Collection Systems Development Existing approaches to the creation of DCSs are mostly focused on static structures of observed systems (OSs), and as the size of OSs increases, their efficiency decreases. The proposed model approach is intended for use in multilevel distributed CPS with a dynamic structure and a large number of subsystems. The key idea of the proposed approach is to use a model of the OS created in terms of knowledge, which describes the structure of the OS and the BPs running in it. Development of CPSs realizing the proposed approach is based on the following principles (a minimal sketch illustrating principles 6 and 7 is given after this list):

1. The multilevel OS model describing its structure and its BPs is created in terms of knowledge (ontologies, knowledge graphs).
2. The multilevel OS model is created, maintained and updated in the automated mode.
3. Automated creation and maintenance of the current state of the structural model is implemented by means of structural model synthesis algorithms; for creation and maintenance of the current state of BP models, it is proposed to use data mining algorithms.
4. The multilevel OS model is based on the information contained in the log files.
5. All user requests are routed to the model through a subsystem of perception generation based on a data merge model.
6. The data collection process is controlled by policies represented as sets of rules.
7. Data collection procedures are scripts that are generated in the automated mode on the basis of policies and models using logic inference.
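To make principles 6 and 7 concrete, the following minimal sketch shows one possible reading of them: policies as condition–action rules over the OS model, from which an ordered collection script is derived. This is an illustration only — names such as `Policy` and `collection_plan` are hypothetical and not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical OS model state: node name -> attribute values taken from the model.
ModelState = Dict[str, Dict[str, float]]

@dataclass
class Policy:
    """A data collection policy expressed as a condition-action rule."""
    name: str
    condition: Callable[[Dict[str, float]], bool]  # applies to one node's attributes
    action: str                                    # collection step to schedule

def collection_plan(model: ModelState, policies: List[Policy]) -> List[str]:
    """Derive a data collection script (ordered list of steps) from the policies."""
    steps = []
    for node, attrs in model.items():
        for p in policies:
            if p.condition(attrs):
                steps.append(f"{p.action} @ {node}")
    return steps

if __name__ == "__main__":
    model = {"sensor-01": {"staleness_s": 120.0}, "sensor-02": {"staleness_s": 5.0}}
    policies = [Policy("refresh-stale", lambda a: a["staleness_s"] > 60.0, "request_logs")]
    print(collection_plan(model, policies))  # ['request_logs @ sensor-01']
```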


6 Models Based on Perception Transformation Systems based on the transformation of perceptions are actually traditional information control systems (ICS), collecting data on the observed system and its ongoing business processes (BPs) and presenting those data to the interested parties in accordance with their roles. The ICS can be represented as Mims = ⟨RQ, PRS, D, TR⟩, where RQ is a set of requests for performing information collection actions, PRS is a set of representations of query results, D is a set of data collection points, and TR is a set of transformations. Two tasks are solved simultaneously: formation of the data collection policy and formation of perceptions. The data collection policy determines the data collection procedure; for SwIS with dynamic structure, this has to be done at run time. Perception generation is a set of procedures (services) for presenting query responses in a user-friendly form. The model of perception formation (MPF) could be formally defined as Mpf = ⟨TRM, DSL⟩, where TRM is a set of models created on the basis of the Mpf model, and DSL is a set of languages of communication with monitoring systems (MS) intended for the different categories of stakeholders. The implementation of the MPF is associated with transformations of queries in the languages of the stakeholders into queries in the language of the model and vice versa: DSLi —RQ→ M and M —RS→ DSLi.

The well-known Joint Directors of Laboratories (JDL) data fusion model [9], the structure of which is shown in Fig. 4, could be used, for example, as a formal model for describing observed system (OS) performance. The classic JDL model actually defines "what is being done" but not "how it is being done," i.e., it is a functional model and was never considered a process model or a model of technical architecture. The classic JDL model is quite general and is not directly related to any subject domain. The proposed MJDL (modified JDL) model is a domain-specific one, and it could also be considered a special case of classic JDL. Another distinctive feature of the MJDL in comparison with classic JDL is that MJDL is not a data fusion model but a data transformation one, used to generate the needed form of information presentation. The MJDL model has six levels, like the classic one. One can define four groups of stakeholders: service engineers (responsible for the technical state of the OS infrastructure); operators (who monitor the state of individual OS subsystems and, if necessary, issue control actions); managers (responsible for the technical state of large subsystems or the entire OS); and business analysts and top managers, who are primarily interested in the general state of the OS. In principle, those user groups work at different levels of the model.

Level 0. Individual logs are processed at this level. The tasks solved are the following: formation of requests for the issue of logs; processing of "raw" logs; processing, estimation and prediction of values of individual parameters. Typical problems solved at this level: elimination of noise (random logs), loss of individual logs, inability to retrieve a required log, etc.
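Returning to the Mims/MPF formalism above, the tuples and the DSL-to-model query transformation can be rendered as a small data structure. The sketch below is a hypothetical illustration, not the authors' implementation; all names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PerceptionModel:
    """Hypothetical rendering of Mims = <RQ, PRS, D, TR>."""
    requests: List[str] = field(default_factory=list)            # RQ
    presentations: Dict[str, str] = field(default_factory=dict)  # PRS
    collection_points: List[str] = field(default_factory=list)   # D
    transforms: Dict[str, Callable[[str], str]] = field(default_factory=dict)  # TR

    def answer(self, stakeholder: str, query: str) -> str:
        """DSL_i -> model query (RQ); model response (RS) -> DSL_i."""
        to_model = self.transforms[f"{stakeholder}->model"]
        to_user = self.transforms[f"model->{stakeholder}"]
        rq = to_model(query)
        rs = f"result({rq})"  # stand-in for actually querying the OS model
        return to_user(rs)

m = PerceptionModel(transforms={
    "operator->model": lambda q: q.upper(),
    "model->operator": lambda r: f"[operator view] {r}",
})
print(m.answer("operator", "state of subsystem 3"))
```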


Fig. 4 MJDL model (levels and flows: 0 — logs clearing, requests formation, serving service engineers (requests for logs; logs, control); 1 — subsystems characteristics assessment, serving operators (logs request, sensors control; cleared logs); 2 — assessment of OS general state, serving managers (request of subsystems state; information on the state); 3 — formation and implementation of OS monitoring policies (request on OS general state; integral information on OS state; policies); 4 — quality assessment of OS performance (request on performance quality); 5 — HMI, serving analysts and top managers)

Level 1. Evaluation of characteristics of individual OS subsystems. The main task solved at this level is estimation and prediction of the values of individual parameters and of the state of individual entities (objects) included in the OS. Functions related to generating information about individual objects are implemented on the basis of information about object elements. Such information could concern the technical condition of individual nodes of a complex technical system, the location of the object, or the speed and direction of object movement.

Level 2. Assessment of the general state of the OS. At this level, the state of the OS is estimated and predicted. Functions generating information about the situation in a certain context in terms of entities (subsystems), the relationships between them, and events are implemented.

Level 3. Reaction definition. This level is present in systems of monitoring and control. The main task is to estimate and to predict the future states of the OS and its parts in terms of utility/cost and to form alternative variants of control signal generation. At this level, an assessment of the situation is carried out, including assessment of situation dynamics, assumptions on possible actions of external objects, threats, and the system's own vulnerabilities.

Level 4. Efficiency assessment. The main challenge addressed at this level is to assess and predict the performance characteristics of both the OS and the DCS itself and to compare them with the desired performance values. At this level, monitoring functions of the DCS itself are implemented, in particular, in order to enhance its time characteristics.

Level 5. Human–machine interaction. At this level, functions of human–machine interaction (HMI) procedures are implemented. In addition, functions of this level are responsible for visualization and shared decision-making and for knowledge control mechanisms; the latter determine who requests information, who has access to certain information, for what purpose the information is to be used, and in what form and to what extent it should be presented to the person concerned.

Each of the levels could be defined as Li = ⟨Mi, Tij⟩, where Mi is a set of models related to the ith level and Tij is a set of model transformation procedures. In its turn, Tij = ⟨Tv, Th⟩, where Tv are vertical transformation procedures and Th are horizontal ones. Vertical transformation procedures are transitions between levels, and horizontal transformations are transitions within levels; the transitions usually look like: Raw Data → Information → Knowledge.

7 CAS Generalized Model In its most general form, the CAS model could be represented as Mcas = ⟨RQ, RS, D, PROC, L, Mc, Mr, Δ⟩, where RQ is a set of requests to perform information collection, RS is a set of OS reactions, D is a set of data collection points, PROC is a set of processing procedures, L is event information coming in the form of logs, Mc is the current context model, Mr is a set of context reference models, and Δ is a procedure for determining the degree of similarity of contexts. In its turn, the data collection procedure could be defined as PROC = ⟨PROCp, PROCc⟩, where PROCp is a set of processing environment management procedures, and PROCc is a set of procedures for processing information about the current context (for example, the request processing order). There are several variants (a small sketch follows the list):

(1) Adaptation of the algorithm (processing procedures) to the context: PROCpef = f(PROCp, Mc), where PROCpef is an executable procedure;
(2) Customization of the implementation environment to the context: PROCcef = f(PROCc, Mc);
(3) Response to context changes: when the context changes, a signal is generated in the form of a log that starts PROC [10].
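A hedged sketch of variant (1) — adapting the procedure to the context via the similarity procedure Δ — might look as follows. All names and the similarity measure are hypothetical choices for the illustration, not the paper's definitions.

```python
from typing import Dict

Context = Dict[str, float]  # hypothetical context model: feature -> value

def delta(current: Context, reference: Context) -> float:
    """Degree of similarity between two contexts (inverse of L1 distance)."""
    keys = set(current) | set(reference)
    dist = sum(abs(current.get(k, 0.0) - reference.get(k, 0.0)) for k in keys)
    return 1.0 / (1.0 + dist)

def select_procedure(mc: Context, mr: Dict[str, Context]) -> str:
    """Pick the processing procedure whose reference context best matches Mc."""
    return max(mr, key=lambda name: delta(mc, mr[name]))

mc = {"load": 0.9, "mobility": 0.1}                       # current context Mc
mr = {"proc_busy": {"load": 1.0, "mobility": 0.0},        # reference models Mr
      "proc_mobile": {"load": 0.2, "mobility": 1.0}}
print(select_procedure(mc, mr))  # -> proc_busy
```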

8 Data Collection System as an Element of a System of Control In its most general form, the data collection model in a separate subsystem could be represented as Mcs = ⟨RQ, RS, D, PROC, L⟩, where RQ is a set of requests to perform information collection actions, RS is the data set transferred to the execution subsystem, D is a set of data collection points, PROC is a set of data collection and processing procedures, and L is event information received in the form of logs. Formally, the behavior of the OS could be described as follows. Operating in discrete time, the OS state is represented by an element x of the phase space X = {x}, and its evolution in time is described by the sequence x1, …, xi, …, xn. The dynamics of a point of the phase space X is generally described by the formula xt+1 = g(xt), and if the initial values are set, all elements of the sequence can be calculated. Let the control space Y = {y} be specified. The controlled object (OS) is characterized by the equation of motion xt+1 = g(xt, yt), t ≥ 1. In addition, control selection rules need to be added; these are generally history-dependent and can be represented by yt+1 = f(xt, yt−1), t ≥ 1. Let us suppose that the choice of control actions is implemented by the control system; the system of rules {f} is referred to as a strategy of control (further, those terms are used as synonyms; the equations of motion of a controlled object may be known beforehand or may be unknown). The set of rules (strategy) is selected so that the motion of the object in the phase space has one or another property. The requirement that this property actually holds is called the control objective. There are several variants for linking the DCS and the OS (Fig. 5). In the simplest case (Fig. 5a), the DCS passively monitors the OS. In the second case, the DCS can issue requests for logs and, possibly, issue control actions (Fig. 5b). In the third case, the DCS is an adaptive system that can adjust its behavior (Fig. 5c); here, regulator R is responsible for behavior correction. The case when a multistage regulator is used is shown in Fig. 5d: some meta-regulator R2 corrects the behavior of the main regulator R1. In principle, the regulator could have more levels. It is evident that implementation of intelligent data processing functions is possible only if the almost unlimited computing capabilities of cloud structures are involved. Thus, the distribution of functionality between levels becomes quite obvious (a simulation sketch of the equations of motion follows the list below):

• The sensor level is responsible for collecting raw data and, if possible, for their pre-processing;
• The fog level controls the data collection process (in terms of policies); for solving more complex subtasks, cloud-level services are involved;


Fig. 5 Variants of communication between OS and DCS (a — the DCS passively observes the OS; b — the DCS issues requests X and receives data x, possibly issuing control actions Y; c — an adaptive DCS with regulator R; d — a two-stage scheme in which meta-regulator R2 corrects regulator R1)

• The cloud layer is responsible for storing big data, models, meta-models and provides processing services to the fog layer.
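The equations of motion and the control strategy from this section can be illustrated with a short simulation sketch. The concrete g, f and target below are arbitrary placeholders, not from the paper.

```python
def g(x: float, y: float) -> float:
    """Equation of motion of the controlled object: x_{t+1} = g(x_t, y_t)."""
    return 0.9 * x + y

def f(x: float, y_prev: float) -> float:
    """Control selection rule (strategy): next control from the current state."""
    target = 10.0                    # the control objective for this toy example
    return 0.5 * (target - x)        # simple proportional rule; y_prev unused here

x, y = 0.0, 0.0
for t in range(50):
    x = g(x, y)   # the object moves in phase space X
    y = f(x, y)   # the control system picks the next action from Y
print(round(x, 2))  # settles at a steady state near the objective
                    # (with the residual offset typical of proportional control)
```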

9 Work with Knowledge and Big Data Problem The low cost of sensors leads to a spontaneous increase in their number and in the amount of raw data. Although the available storage makes it possible to store very large amounts of data, there are still limits, and increasingly powerful computing resources are required to locate and process the data. One of the possible solutions to big data problems is to extract knowledge from data and to store only the knowledge (the pyramid of knowledge). The question arises as to what knowledge should be extracted and stored. There are many types of knowledge and approaches to their classification. Upper-level types of knowledge are the following: procedural knowledge, declarative knowledge, meta-knowledge, structural knowledge, inexact and uncertain knowledge, common sense knowledge, ontological knowledge, etc. At lower levels, in particular at the level of domains, other types of knowledge may appear, such as model knowledge and architectural knowledge; to represent them, the methods of representing top-level knowledge are usually used. To solve the problems associated with the creation of AmIS, significant practical interest belongs to model knowledge, which could be defined as a model developed in terms of knowledge.


One can use different models of SwIS. In particular, there are models used at the design stage and models used at run time; it is run-time models that are of interest in this study. When implementing a data collection process, an explicit or implicit model of the system on which the data is collected should be present. In this work, it is proposed to use an explicit model developed in terms of knowledge.

10 Control in Model Terms This is a quite common approach within information systems, but control in terms of model knowledge is still of limited use. In the case of CPS, a model approach is a natural solution. Moreover, since CPS use elements of different physical nature, there is a need to work with models presented in different terms, in particular, in terms of knowledge. For the positive effect described above to be obtained, the model must possess certain properties, the main of which are the following:

(1) Description of the structure and behavior of an observed system of arbitrary complexity in a hierarchical mode;
(2) Response to user requests at a reasonable cost of resources;
(3) Possibility of automatic construction and reconfiguration at reasonable resource costs.

11 Model Operations Modern IoT and IoE systems are distributed multilevel systems. At the upper level, the structure is more or less strict, whereas at the lower levels, the structure of relations is constantly changing. Each element has its own model, and each subsystem is described by its own model. The following main types of model operations can be defined: model building, model rebuilding, issuing the desired representation, and model merging.

12 Model Approach as a Form of Virtualization The model approach could be considered a form of virtualization. The most common form is resource virtualization, which is used everywhere, in particular in cloud structures (Fig. 6). Resource virtualization assumes that the end user works not with physical resources but with their model (Fig. 7). Besides, the model-driven architecture (MDA) approach is widely used at the design stage of model development, after which the model is translated into executable code.

Fig. 6 Resource virtualization (virtual resource — middleware, resource virtualization — physical resource)

Fig. 7 Model virtualization (model — middleware, model virtualization — physical resource)

It should be noted that the model approach and virtualization mechanisms prove to be closely related, since a virtual resource actually represents a model of a physical resource. However, the virtualization approach, like the MDA one, results in static models that most often remain unchanged during their use. The following main differences could be distinguished between the proposed approach and the MDA and resource virtualization ones:

• Unlike MDA, the proposed approach uses run-time models;
• The proposed model approach is intended not for resource or run-time modeling but for modeling some OS, i.e., an external entity;
• The OS model is dynamic, which enables its operative automated update, with a perspective of automated synthesis in the future;
• Within the proposed approach, the model is developed in terms of knowledge, which makes it possible to describe the structure and behavior of an OS consisting of entities of different physical nature.

13 Potential Benefits Under certain conditions, the use of models developed in terms of knowledge provides a number of useful properties:

(1) Ability of operative monitoring of the structural dynamics and behavior of an observed CPS consisting of elements of different physical nature, including people (socio-physical systems);
(2) Ability to reduce the response time to user requests about the state of the OS, since it is not necessary to formulate requests to the observed system itself (a kind of two-stage pipeline);
(3) Ability of fast access to data on past system states and of predicting its behavior within certain limits;
(4) In the case of implementing self-management mechanisms such as self-diagnostics, self-repair, self-healing, etc., the model enables knowledge accumulation about the system itself; that is, prerequisites are created for the realization of cognitive mechanisms.

14 Practical Use of the Proposed Approach The proposed approach was used for the development of a number of real systems. One of the examples is a cable digital television network management system developed at the request of cable television operators. Modern cable digital television network management systems have a number of specific features: (i) large and very large network size; (ii) a highly dynamic environment; (iii) heterogeneous network infrastructure; (iv) strong requirements on the total cost of ownership, reliability, reaction speed, etc. They can be classified as large multilevel distributed CPS. Earlier, in the case of network failures, the maintenance staff identified the location, time and causes of the faults and restored the health of the devices; the work could be done remotely or locally and could take hours. Use of the model approach allows reducing the time of solving this problem to minutes, which is a significant performance enhancement.

15 Conclusion The approach and models proposed in the paper could be efficiently used for the data collection process in various AmIS, including CPS created as elements on fog computing platforms. Application of the model approach to the development of DCSs in such systems makes it possible to reach a new complexity level, unattainable when using traditional approaches. It should be noted that the expected transition to cognitive systems in the near future will complicate the task of data collection; that task could only be solved using a model approach, which should find application in a wide range of subject domains. Acknowledgements «The paper was prepared in Saint-Petersburg Electrotechnical University (LETI) and is supported by the Agreement № 075-11-2019-053 dated 20.11.2019 (Ministry of Science and Higher Education of the Russian Federation, in accordance with the Decree of the Government of the Russian Federation of April 9, 2010 No. 218), project «Creation of a domestic high-tech production of vehicle security systems based on a control mechanism and intelligent sensors, including millimeter radars in the 76–77 GHz range».


References
1. Lattanze, A.J.: Architecting Software Intensive Systems: A Practitioner's Guide, p. 453. Taylor & Francis Group, Boca Raton, FL (2009)
2. Sanfelice, R.G.: Analysis and design of cyber-physical systems: a hybrid control systems approach. In: Rawat, D., Rodrigues, J., Stojmenovic, I. (eds.) Cyber-Physical Systems: From Theory to Practice. CRC Press (2016)
3. Streitz, N., Charitos, D., Kaptein, M., Böhlen, M.: Grand challenges for ambient intelligence and implications for design contexts and smart societies. J. Ambient Intell. Smart Environ. 11, 87–107 (2019)
4. Jun, H., Kun, H. (eds.): Managing the Internet of Things: Architectures, Theories and Applications, p. 226. The Institution of Engineering and Technology (2016)
5. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The Industrial Internet of Things (IIoT): an analysis framework. Comput. Ind. 101, 1–12 (2018)
6. Mahmood, Z.: Fog Computing: Concepts, Frameworks and Technologies, p. 291. Springer International Publishing AG, Cham, Switzerland (2018)
7. IEEE Standards Association: FOG—Fog Computing and Networking Architecture Framework. http://standards.ieee.org/develop/wg/FOG.html. Last accessed 20.04.2020
8. Hwang, K., Fox, G.C., Dongarra, J.J.: Distributed and Cloud Computing, p. 631. Elsevier Inc. (2012)
9. Blasch, E., Bosse, E., Lambert, D.: High-Level Information Fusion Management and System Design, p. 376. Artech House Publishers, Norwood, MA (2012)
10. Loke, S.: Context-Aware Pervasive Systems: Architectures for a New Breed of Applications, p. 220. Taylor & Francis Group, Boca Raton, FL (2007)

Notification System for Text Detection on Document Images Using Graphical User Interface (GUI)

Wan Azani Mustafa, Mohd Hairy Aziz, Syed Zulkarnain Syed Idrus, Mohd Aminudin Jamlos, Wan Khairunizam, and Mohamad Nur Khairul Hafizi Rohani

Abstract Degraded document images typically contain a large amount of information, and a binarization technique must be used to extract it. Various factors can lead document images to degradation, including inferior materials, environmental factors, and careless handling of the documents. Binarization is a conversion process that turns an image into a binary image consisting of black and white pixels. In this paper, as a starting step, the average of the red, green and blue components of each pixel is computed to obtain its grayscale value. The proposed framework consists of two main parts implemented in a MATLAB graphical user interface (GUI). In the first part of the GUI, a binarization technique combining fuzzy C-means and the Deghost method is applied to a document image chosen by the user. The binarization result is evaluated with image quality assessment (IQA) metrics (PSNR, accuracy and F-measure) to determine the feasibility of the methodology; the three IQA parameters are displayed in the GUI. The second part of the system offers a mechanism for sending the results by SMS to a mobile phone, so that the user receives the outcome of the image analysis as a notification: using a GSM module coordinated with an Arduino Uno, the result of the image analysis is sent to the phone.

Keywords Text · Image · Detection · Smart · System

W. A. Mustafa (B) · M. H. Aziz · M. A. Jamlos Faculty of Engineering Technology, Kampus Sg. Chuchuh Universiti Malaysia Perlis, 02100 Padang Besar, Perlis, Malaysia e-mail: [email protected]
S. Z. S. Idrus Center of Excellence Geopolymer and Green Technology, Universiti Malaysia Perlis, 01000 Kangar, Perlis, Malaysia
W. Khairunizam Advanced Intelligent Computing and Sustainability (AICoS) Research Group, School of Mechatronics Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis, Malaysia
M. N. K. H. Rohani School of Electrical System Engineering, Universiti Malaysia Perlis, Pauh Putra Main Campus, 02600 Arau, Perlis, Malaysia
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_2


1 Introduction Binarization is the basis for segmenting an image into two separate parts, the background and the foreground, using the idea of thresholding. The image binarization technique is used to retrieve information from document images, with an image pre-processing procedure applied to improve image quality [1–5]. However, extracting information from document images becomes challenging when dealing with noise, low image quality, degradation and blurring. A binarization algorithm applied to a degraded document after effective pre-processing yields a clean input image and performs considerably better than an algorithm working on a poorly pre-processed image; typically, an algorithm with practical pre-processing operations can use simpler binarization techniques. This task is necessary to obtain content from document images [6–8]. The amount of literature addressing document images through automated image processing has shown a growing trend over recent years [4, 9]. Modern algorithms and techniques are the basis of current image processing approaches [10, 11]. Most researchers concluded that the deterioration of document images is what makes binarization complex [12–17]. Otsu's method, first published in 1979, has a reputation as one of the most robust methods for image binarization. The method identifies a single threshold separating dark and light pixels into two groups: the darker pixels are set to black and the brighter pixels to white [18]. Since the Otsu approach uses only a single threshold value for the entire image, it is not appropriate for all images. A few years later, Niblack [19–21] proposed a method that calculates a threshold for each pixel of the input image: the threshold is set to μ + kσ, where μ is the mean and σ the standard deviation of the grayscale values in a square region around the current pixel, and k is a constant (given as between 2 and 5 in the description used here). This method created the class of "adaptive threshold" algorithms, which fit the threshold of each pixel to its local region; this adaptivity makes Niblack's method more versatile than Otsu's. However, after thresholding, some lighter pixels may be set to black, which introduces noise: if the local standard deviation is high, pixels of similar color may still be misclassified as black at random, so Niblack's adaptive threshold is sensitive to subtle noise in regions that are not actually dark. An improved binarization of degraded documents using a guided image filter was proposed by Kaur et al. in 2014 [22]. The proposed method provides a more efficient approach to the binarization of document images. It consists of three parts: the first part recovers a degraded document by smoothing and restoring it with the guided image filter; the second part improves the adaptive image contrast of the document. The bulk of recent binarization approaches use one or more common image processing techniques, and such basic operations are discussed here to better understand the more advanced binarization techniques implemented later on. Some binarization algorithms first perform the basics.


These basics transform a color image into a gray image [23–25]. The transformation involves one of three ways: (1) combining the red, green and blue components of each pixel to obtain its grayscale value; (2) reweighting the RGB components of each pixel to approximate the distribution of rods and cones in the human eye; and (3) reducing the RGB values of an image to one dimension using principal component analysis, a well-known matrix technique. The sketch below illustrates way (1) together with Niblack's adaptive threshold discussed above.
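The following minimal NumPy/SciPy sketch is our own illustration (not the paper's code). Here `k` defaults to −0.2, a value common in Niblack implementations; the window size and `k` are assumptions that should be tuned per dataset.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Way (1): average the R, G and B components of each pixel."""
    return rgb.astype(float).mean(axis=2)

def niblack(gray: np.ndarray, w: int = 15, k: float = -0.2) -> np.ndarray:
    """Per-pixel Niblack threshold mu + k*sigma over a w x w window."""
    mu = uniform_filter(gray, size=w)                                # local mean
    var = np.maximum(uniform_filter(gray ** 2, size=w) - mu ** 2, 0.0)
    binary = gray > mu + k * np.sqrt(var)                            # True = background
    return binary.astype(np.uint8) * 255
```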

2 Methodology

2.1 Fuzzy C-Means and Deghost Method Hybrid Combination The first step is the application of the fuzzy C-means (FCM) algorithm to the processed image; this step is required to separate the text from the background. The standard FCM objective function partitioning $\{x_k\}_{k=1}^{N}$ into $c$ clusters is given by [26]:

$$J = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^{p}\, \lVert x_k - v_i \rVert^2 \qquad (1)$$

where $\{v_i\}_{i=1}^{c}$ are the prototypes of the clusters, and the array $[u_{ik}] = U$ represents a partition matrix, $U \in \mathcal{U}$, namely (Eq. 2) [26]:

$$\mathcal{U} = \left\{ u_{ik} \in [0, 1] \,\middle|\, \sum_{i=1}^{c} u_{ik} = 1 \ \forall k \ \text{and} \ 0 < \sum_{k=1}^{N} u_{ik} < N \ \forall i \right\} \qquad (2)$$

Parameter p is a weighting exponent on each fuzzy membership and determines the amount of fuzziness of the resulting classification. The FCM objective function is minimized when high membership values are assigned to voxels whose intensities are close to the centroid of their class, and low membership values when the voxel data are far from the centroid. Two clusters were used in the proposed methodology. The next step is to apply the Deghost method to the processed image: in automated imaging, the Deghost approach eliminates "ghost" objects (Fig. 1). The key algorithm steps are (a compact sketch of the FCM step follows the list):

1. In the smoothed image, the gradient magnitude G is calculated using the Sobel edge operator.
2. A threshold Tp is chosen; objects whose mean gradient falls below Tp are treated as ghost objects. There is no automated method for defining Tp, so it is chosen by trial and error.
3. The average gradient of the edge pixels is determined for all four-connected printed components.
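As an illustration of the FCM iteration minimizing Eq. (1) under the constraints of Eq. (2), here is a compact sketch for the two-cluster (text/background) case. This is our own example, not the paper's implementation; the cluster order depends on initialization.

```python
import numpy as np

def fcm(x: np.ndarray, c: int = 2, p: float = 2.0, iters: int = 50) -> np.ndarray:
    """Fuzzy C-means on 1-D pixel intensities x, minimizing Eq. (1) s.t. Eq. (2)."""
    rng = np.random.default_rng(0)
    v = rng.choice(x, size=c, replace=False)              # cluster prototypes v_i
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-12       # distances ||x_k - v_i||
        u = d ** (-2.0 / (p - 1.0))
        u /= u.sum(axis=0, keepdims=True)                 # memberships sum to 1 per k
        v = (u ** p @ x) / (u ** p).sum(axis=1)           # prototype update
    return u.argmax(axis=0)                               # hard labels per pixel

pixels = np.array([12., 15., 20., 200., 210., 230.])
print(fcm(pixels))  # separates dark (text) from bright (background) intensities
```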


Fig. 1 Cross section through an image showing how "ghost" artifacts can occur

2.2 Mobile Notification System In the mobile notification system, several components are required to run the entire program. A GSM SIM900A module is driven by the GUI to send SMS to the mobile phone, and a user-friendly interface is required to simplify the whole binarization process. The aim of the mobile notification system is to send the IQA analysis results to mobile phones via SMS; the proposed binarization process is the key segmentation algorithm. The GUI system architecture is shown in Fig. 2. GSM can be used to transmit and receive both calls and text messages; SIM900A is the GSM module used in this paper. It is attached to the Arduino Uno board to allow text messages containing the IQA information to be sent to mobile phones by the device. The connection between the GSM module and the Arduino Uno board is shown in Fig. 3 (Fig. 4). A sketch of the SMS dispatch sequence is given below.
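The SMS dispatch itself relies on the GSM module's standard AT command set (AT+CMGF for text mode, AT+CMGS to send). The following host-side Python sketch using pyserial is a hedged illustration of that sequence — it is not the authors' Arduino firmware, and the serial port name and phone number are placeholders.

```python
import time
import serial  # pyserial

def send_sms(port: str, number: str, text: str) -> None:
    """Send one SMS through a SIM900A module using standard AT commands."""
    with serial.Serial(port, 9600, timeout=2) as gsm:
        gsm.write(b"AT\r")                           # check that the module responds
        time.sleep(0.5)
        gsm.write(b"AT+CMGF=1\r")                    # switch to text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())  # destination number
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a")           # message body; Ctrl+Z terminates
        time.sleep(3)

# Hypothetical usage: forward the IQA results to the user's phone.
# send_sms("/dev/ttyUSB0", "+60123456789", "Accuracy=96.06% F=74.96% PSNR=16.03dB")
```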

Fig. 2 Mobile notification system GUI design


Fig. 3 Arduino Uno and GSM SIM900A module circuit connection

Fig. 4 Arduino Uno GSM and SIM900A module hardware

3 Result and Discussion For this analysis, the H-DIBCO 2013 dataset provides the images used in this study; the backgrounds of the dataset images are non-uniform. Figure 5 shows the output of a document image after binarization. Table 1 presents the IQA findings for accuracy, F-measure and PSNR. On average, the three IQA parameter values after binarization are 96.06% for accuracy, 74.96% for F-measure and 16.03 dB for PSNR. All parameters of the binarization method have reached an optimal point.

Fig. 5 Image before (top) and after applying the Deghost method (bottom); columns show the original images and the proposed method's output

Table 1 IQA performance against H-DIBCO 2013 dataset for the proposed method

Image    Accuracy (%)   F-measure (%)   PSNR (dB)
1        98.26          74.02           17.60
2        98.02          84.82           17.04
3        95.72          56.95           13.68
4        99.50          94.51           22.98
5        99.39          90.67           22.15
6        98.09          84.07           17.18
7        97.89          34.81           16.75
8        99.13          92.54           20.63
9        95.87          38.44           13.84
10       98.92          93.74           19.68
11       97.88          83.86           16.73
12       80.09          50.47           7.01
13       93.59          83.29           11.93
14       96.24          81.79           14.25
15       92.80          79.60           11.43
16       95.60          75.73           13.56
Average  96.06          74.96           16.03


Fig. 6 GUI application process

To build a user-friendly framework, a MATLAB-based GUI is used as the interface for the document image binarization process. Features of the graphical interface include the document image binarization function, the image quality analysis indicators, and a feature for sending SMS to mobile phones. The framework can be operated with single clicks on the buttons, so the whole segmentation process can be said to be streamlined by the GUI. As shown in Fig. 6, the first step in using the GUI is to load a document image; the loaded image is then processed with the proposed binarization for background and text separation. The user can then save the processed image or directly load a ground-truth image for the IQA tests. After the IQA data are obtained, a window appears prompting the user to enter a mobile phone number to which the SMS is to be sent. Figure 7 displays the completed GUI application. The SMS mobile notification is used as the delivery medium for the IQA results, as it is convenient and widely available: receiving an SMS requires no Internet connection, only a SIM card to send and receive messages. On pressing the SMS button in the GUI, the results are forwarded to the designated telephone number as a mobile notification. The SMS contains the three IQA parameters — accuracy, PSNR and F-measure — as shown in Fig. 8, and helps the receiver assess the feasibility of the technique applied to the document.


Fig. 7 Completed GUI application

Fig. 8 IQA analysis results via SMS

4 Conclusion In recent years, the digitization process has been steadily improved and enhanced to ensure the preservation of historical document images. The vast amount of digital data produced needs to be automatically recognized, processed and enhanced, and a binarization approach is required to obtain the information in a document image. In this paper, a binarization technique was constructed using the combination of fuzzy C-means and the Deghost method. IQA metrics such as accuracy, F-measure and PSNR were computed to evaluate the output. The binarization and IQA tests are streamlined and shown in a user-friendly GUI. In order to send text alerts to the user's cell phone, the GUI is also integrated with a GSM SIM900A module and an Arduino Uno. The user can therefore assess the efficiency of the binarization method.


References 1. Pardhi, S., Kharat, G.U.: An improved binarization method for degraded document. Int. J. Res. Advent Technol. 1–5 (2017) 2. Lu, D., Huang, X., Liu, C., Lin, X., Zhang, H., Yan, J.: Binarization of degraded document image based on contrast enhancement. In: Chinese Control Conference, CCC, pp. 4894–4899 (2016). https://doi.org/10.1109/ChiCC.2016.7554113 3. Mysore, S., Gupta, M.K., Belhe, S.: Complex and degraded color document image binarization. In: 3rd International Conference on Signal Processing and Integrated Networks, SPIN 2016, pp. 157–162 (2016). https://doi.org/10.1109/SPIN.2016.7566680 4. Mustafa, W.A., Kader, M.M.M.A.: Binarization of document images: a comprehensive review. J. Phys. Conf. Ser. 1019, 1–9 (2018). https://doi.org/10.1088/1742-6596/1019/1/012023 5. Mustafa, W.A., Abdul Kader, M.M.M.: Document image database (2009–2012): a systematic review. J. Phys. Conf. Ser. (2018). https://doi.org/10.1088/1742-6596/1019/1/012024 6. Mustafa, W.A., Yazid, H., Jaafar, M.: An improved Sauvola approach on document images binarization. J. Telecommun. Electron. Comput. Eng. 10, 43–50 (2018) 7. Ayyalasomayajula, K.R., Brun, A.: Document binarization combining with graph cuts and deep neural networks. In: 36th Swedish Symposium on Image Analysis, pp. 1–6 (2017). https://doi. org/10.1007/978-3-319-59126-1_32 8. Lokhande, S.S., Dawande, N.A.: A survey on document image binarization techniques. In: Proceedings—1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015, pp. 742–746 (2015). https://doi.org/10.1109/ICCUBEA.201 5.148 9. Mustafa, W.A., Khairunizam, W., Ibrahim, Z., Ab, S., Razlan, M.Z.: Improved Feng binarization based on max-mean technique on document image. In: IEEE International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA), pp. 1–6. IEEE (2018) 10. Mustafa, W.A., Yazid, H.: Illumination and contrast correction strategy using Bilateral filtering and binarization comparison. J. Telecommun. Electron. Comput. Eng. 8, 67–73 (2016) 11. Mustafa, W.A., Aziz, H., Khairunizam, W., Ibrahim, Z., Ab, S., Razlan, M.Z.: Review of different binarization approaches on degraded document images. In: IEEE International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA), pp. 1–8. IEEE (2018) 12. Zemouri, E.T., Chibani, Y., Brik, Y.: Restoration based Contourlet Transform for historical document image binarization. In: 2014 International Conference on Multimedia Computing and Systems—Proceedings, pp. 309–313 (2014). https://doi.org/10.1109/ICMCS.2014.6911321 13. Singh, B.M., Sharma, R., Ghosh, D., Mittal, A.: Adaptive binarization of severely degraded and non-uniformly illuminated documents. Int. J. Doc. Anal. Recogn. 393–412 (2014). https:// doi.org/10.1007/s10032-014-0219-6 14. Ranganatha, D., Holi, G.: Hybrid binarization technique for degraded document images. In: Souvenir of the 2015 IEEE International Advance Computing Conference, IACC 2015, pp. 893–898 (2015). https://doi.org/10.1109/IADCC.2015.7154834 15. Kaviya Selvi, K., Sabeenian, R.S.: Restoration of degraded documents using image binarization technique. ARPN J. Eng. Appl. Sci. 10, 2813–2817 (2015) 16. Jyoti, D., Raj, B., Sharma, A., Kapoor, K.: Document image binarization technique for degraded document images by using morphological operators. Int. J. Adv. Res. Ideas Innov. Technol. 2, 1–7 (2016) 17. 
Jia, F., Shi, C., He, K., Wang, C., Xiao, B.: Degraded document image binarization using structural symmetry of strokes. Pattern Recogn. 74, 225–240 (2018). https://doi.org/10.1016/ j.patcog.2017.09.032 18. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 62–66 (1979) 19. Niblack, W.: An Introduction to Digital Image Processing. Prentice-Hall, Englewood Cliffs (1986)


20. Motl, J.: Niblack local thresholding. https://www.mathworks.com/matlabcentral/fileexchange/ 40849 21. Khurshid, K., Siddiqi, I., Faure, C., Vincent, N.: Comparison of Niblack inspired binarization methods for ancient documents. In: Proceedings of IS&T-SPIE Electronic Imaging Symposium, vol. 7247, pp. 1–9 (2009). https://doi.org/10.1117/12.805827 22. Kaur, E.J., Mahajan, R.: Improved degraded document image binarization using guided image filter. Int. J. Sci. Res. Edu. 4, 242–249 (2014) 23. Yousefi, M.R., Soheili, M.R., Breuel, T.M., Kabir, E., Stricker, D.: Binarization-free OCR for historical documents using LSTM networks. In: Proceedings of International Conference on Document Analysis and Recognition (ICDAR), November 2015, pp. 1121–1125 (2015). https://doi.org/10.1109/ICDAR.2015.7333935 24. Ahmadi, E., Azimifar, Z., Shams, M., Famouri, M., Shafiee, M.J.: Document image binarization using a discriminative structural classifier. Pattern Recogn. Lett. 63, 36–42 (2015). https://doi. org/10.1016/j.patrec.2015.06.008 25. Hedjam, R., Nafchi, H.Z., Kalacska, M., Cheriet, M., Member, S.: Influence of color-to-gray conversion on the performance of document image binarization: toward a novel optimization problem. 7149, 1–15 (2015). https://doi.org/10.1109/TIP.2015.2442923 26. Mesquita, R.G., Silva, R.M.A., Mello, C.A.B., Miranda, P.B.C.: Parameter tuning for document image binarization using a racing algorithm. Exp. Syst. Appl. 42, 2593–2603 (2015). https:// doi.org/10.1016/j.eswa.2014.10.039

Estimation of Road Damage for Earthquake Evacuation Guidance System Linked with SNS Yujiro Mihara, Rin Hirakawa, Hideaki Kawano, Kenichi Nakashi, and Yoshihisa Nakatoh

Abstract The damage caused by earthquakes is very serious in Japan. One of the most common causes of earthquake casualties is delayed escape due to obstructed passages, so rapid evacuation is very important during an earthquake; however, pathways may be blocked by obstacles. In this study, we propose an evacuation guidance system that evacuates users from a disaster area by using information from SNS and other sources. The proposed system enables users to complete their evacuation by the shortest and safest route. To realize the system, however, it is necessary to estimate the damage to the corridors. In this study, in order to reduce the computational cost, transfer learning was performed using VGG16. As a result, classification was achieved with an accuracy of 78%. This method can be used to automatically identify the damage in a corridor and help determine the evacuation route. Keywords Earthquake · Evacuation guidance system · Support vector machine

1 Introduction 1.1 Current Situation of the Earthquake Disaster Japan suffers from a variety of natural disasters every year. The number of earthquakes that occur in Japan is particularly high, and the human suffering caused by them is enormous. Taking past major earthquakes as examples, the Hanshin-Awaji earthquake and the Great East Japan Earthquake caused severe damage, claiming tens of thousands of lives [1]. One of the reasons for these deaths is delayed escape from secondary disasters such as fires and tsunamis. In addition, according to a survey conducted by the Cabinet Office after the Great East
Y. Mihara · R. Hirakawa · H. Kawano · K. Nakashi · Y. Nakatoh (B) Kyushu Institute of Technology, 1-1 Sensuicho, Tobata Ward, Kyushu City, Fukuoka Prefecture, Japan e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_3


Japan Earthquake and Tsunami, many respondents said that damage to the roads themselves and debris on the roads were obstacles [2]. These obstacles hindered evacuation, and the subsequent tsunami left many people dead or missing. Therefore, measures are needed to prevent delays in evacuation.

1.2 Measures Against Earthquake Damage in Japan National and local governments are working on various arrangements and activities to reduce the number of disaster victims. For example, Japan focuses on earthquake prediction and on strengthening observation and surveying systems, and various laws have been established to reinforce and improve disaster prevention facilities. Local governments send out information about disaster prevention and crisis management via websites and social networks such as Twitter, including evacuation information such as evacuation advisories at the time of a disaster [3]. In addition to hazard maps, they also disclose information on shelters, on soils, and on areas exposed to hazards predicted in the event of an earthquake. Some local governments are using Twitter to train information sharing among users at the time of evacuation [4]. The national and local governments are thus taking various measures against disaster damage, and as part of these measures, there have been many attempts to use social networking sites for evacuation.

1.3 Means of Obtaining Information When the Earthquake Has Occurred The media and disaster prevention broadcasts can be cited as means of obtaining information in the event of an earthquake. In recent years, however, social network services (SNS) such as Twitter, Facebook and mixi have attracted attention as such means. They enable users to obtain or exchange information quickly, and they are also excellent for confirming the safety of others. After the Great East Japan Earthquake, many people used SNS to obtain damage information [5]. Furthermore, GPS information can be attached to SNS posts, so pinpoint information can be transmitted. Research has also been conducted showing the effectiveness of using SNS in disaster evacuation [6]. The results of this research show that social networking sites can be a great way to check on the safety of family and friends in times of disaster, as well as an excellent tool for exchanging information, such as sharing information about the situation in a corridor.


1.4 Related Study An example of an evacuation guidance system for disasters is given in [7]. The system uses mobile phones and smartphone terminals to guide people along the best route to a predetermined shelter. The system also has a function to report road information together with GPS information: the user can evaluate the condition of a passage on three levels—"Passable," "Obstacle," and "Impassable"—and post the information, and the best evacuation route is determined based on those submissions. In the present study, considering possible individual differences in such evaluations, we regard it as necessary to evaluate the passageway situation systematically. We therefore propose a system that automatically judges the passageway situation and provides evacuation guidance, and we also study a method for estimating the damage to the roadway, which is a necessary processing part of the system.

1.5 Research Purpose From the above background, in order to minimize the number of earthquake victims, it is necessary to eliminate delays in evacuation. For that, it is important to choose the shortest route to the shelter. However, in the event of a disaster, there will be various kinds of road damage, so an evacuation guidance system must take those obstacles into consideration, and identifying a safe passage requires information on the roads. We paid attention to the real-time nature of information provided via SNS and propose a system, linked with SNS, that provides the optimum evacuation route to the user. In addition, a systematic method to grasp the damage situation of the passage is studied.

2 Overview of the Proposed System In this chapter, the outline of the proposed system in this research and the internal algorithm are described.


2.1 Internal Specification The purpose of this system is to guide the user from an outdoor location to a predetermined evacuation site. The evacuation process is shown in Fig. 1. First, when an earthquake occurs, the system obtains information on the earthquake and on the necessity of evacuation from the Meteorological Agency and the local government. If there is no need to evacuate despite the earthquake, a warning message is displayed. If evacuation is necessary, the shortest route to an evacuation site is shown to the user. The system then searches for posts on social networking sites. If there is information about damage to a pathway on the evacuation route, the system avoids that pathway and suggests the safest and shortest route to the evacuation site. However, it may not always be possible to complete the evacuation by a safe route alone: for example, there may be two shortest routes to the evacuation site, both affected by the earthquake. In this case, the safest of them can be used.

Fig. 1 Route decision flow considering passage damage


However, in the case of a high emergency, such as during a tsunami, the choice of a detour route may delay the escape. Therefore, using the safe evacuable distance as a threshold, if the route is within the evacuable range, the safe detour is selected; if it exceeds the evacuable distance, the damage to the corridor is estimated based on the posted images (this method is discussed in the next section), and a relatively safe corridor is determined and selected as the evacuation route. These actions are shown in Fig. 2. Here, the evacuable distance is the distance that can be covered from the start of evacuation: based on the arrival time of the tsunami and the general walking speed during evacuation, the distance coverable before the tsunami arrives is calculated.

Fig. 2 Route decision flow considering evacuation distance


Once an evacuation route is chosen, it is determined whether evacuation is possible within this distance. If it is not, the damaged corridor should be chosen as the evacuation route, even if it is somewhat dangerous. The calculation is based on the following definition [8]; these contents have been discussed in a previous paper [9]:

Evacuable distance: L = V × (T1 − T2 − T3)   (1)

where V is the walking speed, T1 the tsunami arrival time, T2 the evacuation start time, and T3 the time elapsed from the start of evacuation to the current location.
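For illustration, Eq. (1) amounts to the following one-line computation; the numbers are hypothetical, not from the paper.

```python
def evacuable_distance(v_m_per_min: float, t1: float, t2: float, t3: float) -> float:
    """Eq. (1): L = V * (T1 - T2 - T3); times in minutes from the earthquake."""
    return v_m_per_min * (t1 - t2 - t3)

# Hypothetical scenario: 60 m/min walking speed, tsunami in 30 min,
# evacuation started at minute 5, already walking for 10 minutes.
print(evacuable_distance(60.0, 30.0, 5.0, 10.0))  # 900.0 m still coverable
```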

3 Estimation of Damage to Passage In this chapter, we examine a method to estimate the damage caused by the disaster based on corridor images acquired from SNS; the estimates are used to determine whether a passage is appropriate as an escape route. First, the machine learning methods and related technologies are described, and then the details of the experiments are given.

3.1 Support Vector Machine (SVM) The support vector machine (SVM) is a supervised learning method for two-class pattern identification. When there are more than two classes, multi-class classification is made possible by using the "one-against-all" or the "one-against-one" method. "One-against-all" identifies each class against all the others, so as many SVM models are created as there are classes. "One-against-one," on the other hand, identifies classes pairwise, so a model must be built for every combination of two classes [10]. In this study, images were classified by "one-against-one," and the linear kernel was used.
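As a concrete illustration, the following minimal scikit-learn sketch (an assumption on our part — the paper does not name its toolchain) trains a linear SVM in the one-against-one configuration:

```python
from sklearn import svm

X = [[0, 0], [1, 1], [2, 2]]   # toy feature vectors
y = [0, 1, 2]                  # three classes -> 3*(3-1)/2 pairwise SVMs
clf = svm.SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
print(clf.predict([[1.1, 0.9]]))  # -> [1]
```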

3.2 Transfer Learning by VGG16 VGG16 is a CNN model consisting of 16 layers trained on the large-scale image dataset "ImageNet." The parameters used for training are shown in Table 1 [11]. In this study, we used a pretrained VGG16 model to extract features from the training images: the outputs of the convolution and pooling layers of VGG16 were used as feature values input to the SVM classifier.

Table 1 Hyperparameters of VGG16

Parameter       Value
Batch size      256
Momentum        0.9
Weight decay    5e−04
Learning rate   1e−05
Epoch           74
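The feature-extraction step of Sect. 3.2 can be sketched as follows. This is a hedged illustration using Keras and scikit-learn (the paper does not specify its implementation); `train_paths` and `train_labels` are hypothetical variables.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.svm import SVC

# Pretrained VGG16 without the classification head; the pooled convolutional
# features become the input vectors of the SVM classifier.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(path: str) -> np.ndarray:
    """512-dimensional VGG16 feature vector for one image file."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]

# X = np.stack([features(p) for p in train_paths])   # labeled damage images
# clf = SVC(kernel="linear").fit(X, train_labels)
```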

4 Experiment The purpose of the proposed system is to estimate the damage to the passageway based on SNS posts. The images used for learning were therefore collected from damage reports (pictures) posted on Twitter at the time of actual disasters. The damage categories were defined on the basis of those images, and images labeled with those categories were used for learning; images containing multiple kinds of damage were excluded. These images were used as training and validation images. Apart from the training images, we prepared 10 images per category as test images and evaluated the classification on them. The number of training images is shown in Table 2. For learning, K-fold cross-validation (K = 5) was performed: training was repeated while rotating the validation split, and the average value was calculated.
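The K-fold evaluation described above can be sketched as follows; this is a minimal illustration assuming scikit-learn, with `evaluate`, `X` and `y` as hypothetical names.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_recall_fscore_support
from sklearn.svm import SVC

def evaluate(X: np.ndarray, y: np.ndarray):
    """K-fold (K = 5) cross-validated predictions, then per-class metrics."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=cv)
    # precision, recall, F-measure and support for every damage category
    return precision_recall_fscore_support(y, pred, average=None)
```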

4.1 Evaluation Method As a measure of the recognition system, the evaluation was carried out using precision, recall, and F-measure, defined as follows (Formulas 2–4):

Precision = N1 / N2   (2)

Recall = N1 / N3   (3)

F-measure = 2 × (Recall × Precision) / (Recall + Precision)   (4)

where N1 is the number of images correctly identified, N2 the number of images identified as belonging to the class, and N3 the number of images actually in the class.

Table 2 Categories and number of images

Categories                          Number of images
Collapse of a house                 104
Road crack                          46
Collapse of a utility pole          43
Road depression                     38
The inclination of a utility pole   56
Shredding of the road               41
Collapse of the concrete block      40
No damage                           40

5 Result The classification results for each category are shown in Fig. 3. The vertical axis indicates the correct label of the test image, and the horizontal axis indicates the predicted class. As shown in Fig. 3 and Table 3, some categories can be classified with high accuracy, and the overall classification accuracy averages around 78%. If attention is paid to the misclassified items, for example, items of "Road depression" are identified as "Road crack," and items of the utility pole categories are confused with other damage categories. Although these are classified incorrectly, the confusion is with roughly similar categories, from which it can be inferred that the relevant features are extracted accurately in the process of learning the passage images. The problem here, however, is the "No damage" category. With the proposed system, a passageway image with no damage may well be posted for damage estimation; in that case, a detour route may be suggested despite the passage actually being safe. In the implementation of this system, it will therefore be important to recognize "No damage" passages with high accuracy. Since the amount of training data is currently small, methods of data augmentation need to be examined. In addition, methods for appropriately processing "No damage" images should be considered, such as adding classes to improve versatility.


Fig. 3 Classification results for each class. The confusion matrix below reproduces the figure: rows are the true classes and columns the predicted classes, in the order C1–C8 (C1 Collapse of a house, C2 Road crack, C3 Collapse of a utility pole, C4 Road depression, C5 The inclination of a utility pole, C6 Shredding of the road, C7 Collapse of the concrete block, C8 No damage).

True \ Predicted   C1  C2  C3  C4  C5  C6  C7  C8
C1                 10   0   0   0   0   0   0   0
C2                  0   7   0   0   0   0   0   3
C3                  1   0   8   0   0   0   1   0
C4                  0   7   0   3   0   0   0   0
C5                  0   1   0   0   8   0   1   0
C6                  0   0   0   0   0  10   0   0
C7                  0   0   0   0   0   1   8   1
C8                  0   0   0   0   1   1   1   7

Table 3 Evaluation result

Categories                           Precision   Recall   F-measure
Collapse of a house                  1.00        0.91     0.95
Road crack                           0.60        0.60     0.60
Collapse of a utility pole           0.70        1.00     0.82
Road depression                      0.80        0.67     0.73
The inclination of a utility pole    0.60        0.86     0.71
Shredding of the road                1.00        0.83     0.91
Collapse of the concrete block       0.90        0.82     0.86
No damage                            0.60        0.60     0.60
Accuracy                                                  0.78


and estimated using an SVM with transfer learning based on VGG16. As a result, it became possible to identify the damage with 78% overall accuracy. In the future, it will be necessary to improve accuracy by expanding the training data and to improve versatility by adding classes.


Analysis of the Messages from Social Network for Emergency Cases Detection Olga Tsarenko, Yana Bekeneva, and Evgenia Novikova

Abstract Analysis of publicly available data from social networks may benefit many practical applications, and analysis of text messages to reveal emergent cases is an urgent and important task. The wide spread of social media may be exploited to detect emergencies and manage rescue operations. In this paper, an approach to the analysis of public messages from a social network is proposed for understanding how an emergency scenario develops and how the rescue operation is managed. The approach allows filtering data according to settings chosen by the researcher and visualizing the result of analysis in two ways: placing the number of messages on a map and creating a pie diagram. The designed application was tested on data from VAST 2019 Mini-Challenge 3 describing the social network response to an earthquake. The experiments showed that the proposed approach allows detecting the most dangerous areas in the city and highlighting them visually. Keywords Social network · Message filtering · Message analysis · Emergency detection · Analysis result visualization

O. Tsarenko (B) · Y. Bekeneva · E. Novikova Saint-Petersburg Electrotechnical University “LETI”, Professora Popova Str. 5, St. Petersburg, Russia e-mail: [email protected] Y. Bekeneva e-mail: [email protected] E. Novikova e-mail: [email protected] E. Novikova Saint Petersburg Institute for Informatics and Automation, 14ya Liniya V.O, Saint-Petersburg, Russia © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_4


1 Introduction Interest in the study of social networks is constantly increasing. The data array of each individual user contains information about their interactions, about the content that this particular user creates, as well as information about interactions with the content of other users [1]. By analyzing such data, it is possible to understand general trends in public interests and concerns. Social media also provide effective tools for management during emergencies and disasters [2]. On the one hand, they can be used to distribute information to a broad audience; on the other hand, the information obtained from social networks allows rescue operations to be coordinated more efficiently. There are many tools for analyzing social networks of various types, but perhaps the most important common requirement is the ability to process a large amount of data. Such tools need to analyze data extracted from different types of social media that differ in data format, extract knowledge from data that in many cases is uncertain and unreliable, and present it in a way suitable for an analyst. As social media technologies constantly develop, some types of social media lose their popularity while others become extremely popular, so the need to develop novel approaches remains urgent. In this paper, we present an approach to analyzing social media messages in order to discover centers of emergencies and disasters and to track the development of an emergency situation as well as the rescue operation. It is based on text analysis and visualization of the extracted data, with a particular focus on the data preprocessing step. Specifically, our contribution is a social media data preprocessing step that allows analysis of a text message corpus based on given keywords rather than hashtags, which are often omitted by users in an emergency. The rest of the paper is organized as follows. In Sect. 2, related works are presented. Section 3 describes the proposed approach, including data preparation, data analysis, and visualization of the results. In Sect. 4, experiments are presented. Conclusions sum up the authors' contributions.

2 Related Works The advanced technologies and tools make it possible to analyze the data of social networks independently and get some information, but before implementing any analysis, it is necessary to extract and preprocess data, and this task is a complicated one as it requires considering different data format, structure, and type used in a particular social media source. Parsing [3] is a process of data collection and retrieval from websites that are available on the Internet through the hypertext transfer protocol or through web browsers, and special programs that extract certain parts of the information from the


data array are called parsers. Parsing is related to scanning: website scanning is a systematic browsing of the Internet, usually for indexing purposes, while parsing performs lexical analysis of the extracted information by comparing the texts found with certain templates, often referred to as masks. Parsing can be used in any industry where it is necessary to extract data from the Internet. It is widely used for marketing and product analytics purposes, such as analysis of pricing policy, monitoring product prices in online stores, and monitoring news or announcements. In social networks, the goals of data parsing are very diverse, from promoting an account to selecting potential customers. At the moment, the most popular parsing task in social network analysis is audience parsing, which is widely used for business and targeting. The best-known tools for automatic analysis of social impacts, for modeling network interactions and processes, for content and discussion analytics, for engagement assessment, and for finding customers are listed below. Yandex.Blogs [4] is a free Internet search service that allows searching the main social networks and an extensive database of online blogs. This free aggregator from Yandex is designed for quick search of information on blogs: posts, publications, comments, and videos. Work with the Yandex.Blogs system is organized as interaction with a regular search engine. The user is provided with tools for filtering sources and publication parameters: city, publication date and time, file type, publication language, and others. In addition, the Yandex query language can be used for advanced search. The processed sources include a large number of well-known sites and blogs in their own subject domains. Google Analytics (GA) [5] is a free service provided by Google to create detailed statistics on website visitors. Statistics are collected on the Google server, and the user is required to place JavaScript code on the web pages of his/her site. The free version does not guarantee the processing of more than 10 million page views per month. For larger sites, since September 2011, it has been proposed to use Google Analytics Premium (from 150 thousand US dollars per year), which can handle up to 1 billion visits. Leaderator.pro [6] is a customer search service for social networks. This service monitors different social networks around the clock and, based on machine learning algorithms, surfaces only relevant ads. The main areas are programming, design, advertising, student work and education, and photo/video. Audience [7] is a tool that is able to evaluate the activity of a profile audience in social networks. It provides the ability to validate a cluster of targeted users most suitable for a particular request, allows tracking the discussion of published content, and its built-in hints help find ways to positively affect audience engagement. In addition, Audience analyzes how effectively the budget is used and how an advertising campaign is being built on the social network. Brandwatch [8] is a data library company that sells five different products: Consumer Research, Audiences, Vizia, Qriously, and BuzzSumo. Brandwatch is powered by artificial intelligence and has access to over 80 million sources.
For example, Brandwatch Consumer Research [9] is software as a service that archives social media data to provide companies with information and tools to track


specific segments to analyze the online presence of their brands. It examines the reaction to a brand and the tone of reviews, allows interacting with the audience live, and analyzes the activities of competitors in the industry. However, parsing is useful not only for commercial purposes. It has been shown to be very effective in broadcasting information about emergencies and gaining insight into developing threat scenarios [2]. In [10], the authors investigate practical issues of applying social media to activities such as search and rescue, first aid treatment, and victim evacuation. Apart from quick dissemination of information, they highlight the following benefits of using social media: social media act as a tracking tool with the ability of the crowd to self-correct misinformation in an emergency, and in some cases they are the only way to communicate with people when other means of communication are out of service due to a natural disaster. In [11], the authors analyzed dynamic social networks formed in a major Chinese microblog service during the Yiliang earthquake. More recently, the research focus has moved to the analysis of text data in order to understand the spread of epidemics such as the recent coronavirus pandemic [12]. In our paper, we propose an approach to the analysis of social media text that supports search of information based on given keywords rather than hashtags, which in case of emergency may be missed by users, and visual presentation of the results by easily perceived visual models.

3 Approach Description The proposed methodology of social media text analysis consists of the following key steps: (1) identification of the message attributes that serve as a basis for further analysis; (2) definition of filtering and aggregation (grouping) operations based on the selected attributes; (3) definition of feature vectors for the selected subset of messages based on the selected attributes; and (4) analysis using the generated feature vectors. These steps are shown in Fig. 1 and discussed in detail in the subsections below.

Fig. 1 General scheme of proposed social media analysis for emergency case detection

Each message in any social network has attributes. These are indispensable properties by which most messages can be structured. The most commonly used message attributes are the following.

• Date and time. Each message has this characteristic, which can be used to track when a message is received or sent.
• Username. Not a single message is sent without this characteristic, because it carries information about who (from which account) wrote and sent the message.
• Location. Not all messages have this characteristic, but in many social networks there are settings that allow assigning a location tag automatically. This attribute gives the location of the user who sent the message, in most cases represented by a point on the map or GPS coordinates. Another widespread approach to determining the location of a user is the IP address of their device. This method does not provide high accuracy but determines the location tolerably well relative to the region or city, and it does not require interaction with the user; the main requirement is that the user grants the corresponding permissions to the application, usually via application settings.
• The text of the message. Obviously, the text is the most important and essential characteristic of the message.

According to the listed attributes, messages from any social network can be extracted and analyzed. The easiest way is to upload messages for a specific time period, since this takes much less time and is more efficient to process, because collecting all messages from a social network may require huge computational resources. We use the following atomic actions on the source data in order to obtain meaningful information: message filtering according to predefined rules, including temporal conditions and search for keywords.

3.1 Message Filtering We select only those messages that satisfy certain conditions associated with a specific natural phenomenon. If $D$ is the space of all possible messages, and the initial data contain $k$ records, then the filtering operation selects $m$ records according to some predefined rules:

$F_{\text{filter}}: D^k \to D^m, \quad m \le k$  (1)

It should be possible to select messages according to the following rules: the presence of duplicated letters (1); selection of messages within a given time interval (2); and the presence of keywords (3). The user can select one, none, or several of them; if the duplicated-letters rule and the keyword rule are selected at the same time, they are replaced by one general rule, which is the disjunction of the two.

The presence of duplicated letters. This rule was chosen because, when the earth is trembling, people may accidentally press the same letter several times, causing characteristic mistakes in their messages. The rule was added after an initial analysis of the dataset used for the experiments, since a number of messages contain such mistakes. The result set contains only those messages whose text contains three or more identical consecutive characters of the Latin alphabet:

$F_{\text{filter}}(d) = \tilde{d}: \; r \in \tilde{d}, \; t = \text{text}(r) \Rightarrow \exists n: t_n = t_{n+1} = t_{n+2}, \; t_n \in \text{Latin}$  (2)

where $d$ is the initial set of messages (input parameter of the filter function), $\tilde{d}$ is the filtered set of messages (the result of the filter function), $r$ is a specific message, $\text{text}(\cdot)$ is the text of the message, $n$ is the position from which the repetition of letters occurs, $t_n$ is the character at position $n$ of the text, and Latin is the set of Latin letters.

Selection of messages within a given time interval. We distinguish two operations here: selection of messages with a timestamp not less than a given time, and selection of messages with a timestamp not exceeding a given time. In the first case, messages generated later than the specified time fall into the result set:

$F_{\text{filter}}(d) = \tilde{d}: \; r \in \tilde{d}, \; \tau = \text{time}(r) \Rightarrow \tau \ge T$  (3)

where $\text{time}(\cdot)$ is the time the message was sent and $T$ is the lower time border specified by the user. In the second case, messages written earlier than the specified time fall into the result set:

$F_{\text{filter}}(d) = \tilde{d}: \; r \in \tilde{d}, \; \tau = \text{time}(r) \Rightarrow \tau \le T$  (4)

where $T$ is the upper time border specified by the user.

The presence of keywords. Only messages whose text contains words from a user-defined set are included in the result set:

$F_{\text{filter}}(d) = \tilde{d}: \; r \in \tilde{d}, \; t = \text{text}(r) \Rightarrow \exists n, s: w = (t_n \ldots t_{n+s}), \; w \in W, \; \exists t_{n-1} \Rightarrow t_{n-1} = \text{ASCII}(0\text{x}20), \; \exists t_{n+s+1} \Rightarrow t_{n+s+1} = \text{ASCII}(0\text{x}20)$  (5)

where $n$ is the position at which a keyword starts in the text, $s$ is the length of the keyword, $w$ is the keyword, $(t_n \ldots t_{n+s})$ is the sequence of characters from position $n$ to position $n+s$ (exclusive), $W$ is the set of user-defined keywords, and ASCII(0x20) is the space character.
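A minimal Python sketch of rules (2)–(5) is given below. It is not the authors' implementation; the message layout (dictionaries with time and text fields) is an assumption.

```python
import re
from datetime import datetime

TRIPLE_LATIN = re.compile(r"([A-Za-z])\1\1")  # rule (2): three identical consecutive Latin letters

def has_repeated_letters(text):
    return TRIPLE_LATIN.search(text) is not None

def in_time_window(msg, t_min=None, t_max=None):
    # rules (3)-(4): lower and/or upper bound on the message timestamp
    t = msg["time"]
    return (t_min is None or t >= t_min) and (t_max is None or t <= t_max)

def contains_keyword(text, keywords):
    # rule (5): a keyword must occur as a whole word
    words = set(re.findall(r"\w+", text.lower()))
    return any(k.lower() in words for k in keywords)

def filter_messages(messages, keywords=None, use_repeats=False, t_min=None, t_max=None):
    out = []
    for m in messages:
        if not in_time_window(m, t_min, t_max):
            continue
        checks = []
        if use_repeats:
            checks.append(has_repeated_letters(m["text"]))
        if keywords:
            checks.append(contains_keyword(m["text"], keywords))
        if checks and not any(checks):  # selected content rules combined as a disjunction
            continue
        out.append(m)
    return out

msgs = [{"time": datetime(2020, 4, 6, 14, 0), "text": "everything is shaaaking here"},
        {"time": datetime(2020, 4, 6, 9, 0), "text": "a quiet morning"}]
# Keeps only the first message: it passes the time bound and the repeated-letters rule.
print(filter_messages(msgs, keywords=["shaking"], use_repeats=True,
                      t_min=datetime(2020, 4, 6, 13, 0)))
```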

3.2 Message Aggregation To analyze the messages with given keywords in order to locate the centers of emergency cases, it is necessary to group them by location. In other words, the following transformation is performed:

$F_{\text{sort}}: D^m \to \mathbb{N}_0^q$  (6)

where $\mathbb{N}_0$ is the set of natural numbers with zero, and $q$ is the number of locations studied.

$F_{\text{sort}}(d) = v \Rightarrow v_i = \#\{r : r \in d, \; \text{location\_index}(r) = i\}$  (7)

where $d$ is the set of messages (input parameter of the analysis function), $v$ is the vector of the number of messages in each region (the result of the analysis function), $\#\{\ldots\}$ is the power (size) of the specified set, $r$ is a specific message, $\text{location\_index}(\cdot)$ is the number of the area from which it was sent, and $i$ is the number of the area. Another interesting quantitative characteristic is the number of users involved in the discussion, for which an equivalence relation between messages is introduced:

$d_1, d_2 \in D: \; \text{author}(d_1) = \text{author}(d_2) \Leftrightarrow d_1 = d_2$  (8)

where $d_1, d_2$ are messages and $\text{author}(\cdot)$ is the author of the message. To implement further analysis, we suggest constructing feature vectors that represent the number of messages satisfying given conditions assigned to each location. The process of their calculation is determined as follows:

• Counting rule: whether to count messages in a certain location or unique users. Mathematically, this parameter defines the equivalence relation between messages.
• A set of functors, each of which is a filter. A functor receives a message at the input and returns 1 when this message passes the filter, otherwise 0:

$F: D \to \{0, 1\}$  (9)

The filter function described above is uniquely determined by the set of these functors:

$F_{\text{filter}}(D) = \{D_i : \forall F \in \tilde{F}: F(D_i) = 1\}$  (10)

where $\tilde{F}$ is the set of supplied functors, $D_i$ is a specific message, $D$ is the set of input messages, and $F$ is a specific functor.
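A short sketch of the aggregation step (6)–(10) follows: each message is passed through the set of functors, and counts are accumulated per location, optionally over unique authors via the equivalence relation (8). The field names location, author and text are assumptions about the data layout.

```python
from collections import defaultdict

def aggregate(messages, functors, by_unique_authors=False):
    counted = defaultdict(set)   # location -> set of counted keys
    for m in messages:
        if all(f(m) for f in functors):              # formula (10): every functor returns 1
            key = m["author"] if by_unique_authors else id(m)
            counted[m["location"]].add(key)
    # vector v of formula (7): per-location counts
    return {loc: len(keys) for loc, keys in counted.items()}

msgs = [
    {"location": "Downtown", "author": "a1", "text": "strong shaking"},
    {"location": "Downtown", "author": "a1", "text": "still shaking"},
    {"location": "Weston",   "author": "a2", "text": "all quiet"},
]
contains_shaking = lambda m: "shaking" in m["text"]
print(aggregate(msgs, [contains_shaking]))                          # {'Downtown': 2}
print(aggregate(msgs, [contains_shaking], by_unique_authors=True))  # {'Downtown': 1}
```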

3.3 Visualization of the Received Data The resulting dataset is represented by a set of numerical vectors corresponding to the number of messages that satisfy the given conditions, assigned to each location. This representation allows application of a wide variety of visualization techniques: standard charts such as line charts and histograms, and matrix-based visualization techniques. However, considering that one of the attributes is location, we suggest using map-based visualization. As it is necessary to monitor both the absolute and the relative number of messages for each analyzed location, we used maps with regions marked by numbers to visualize locations with numerical attributes, and pie charts, where each sector corresponds to a district and the size of a sector is proportional to the number of messages in the corresponding area. Currently, the implementation of the selected visual models is a static picture; the developed application allows an analyst to load a file with initial data and a file with predefined keywords, filter messages in accordance with the user's choice, and display the results in the selected way. Statistics can be obtained both by messages and by the number of users who sent them. The map is built into the program, but the names of the districts are taken from the data file. After starting, the program generates a png-file, whose location is selected in the dialog box that appears.


Elaboration of interactive visualization with different filtering mechanisms implementing the suggested operations on data is left for future research work.
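As an illustration of the pie-chart view and png output described above, a minimal matplotlib sketch follows; the district names and counts are illustrative, not taken from the challenge data.

```python
import matplotlib.pyplot as plt

# Illustrative per-district counts; real values come from the aggregation step.
counts = {"Weston": 120, "Old Town": 95, "Downtown": 140, "Southton": 80, "Northwest": 60}

fig, ax = plt.subplots()
ax.pie(list(counts.values()), labels=list(counts.keys()), autopct="%d%%")
ax.set_title("Messages with earthquake keywords per district")
fig.savefig("districts.png")  # the application similarly writes its output to a png file
```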

4 Usage Scenario To evaluate the applicability of the approach, we used data provided within the VAST Challenge 2019 Mini-Challenge 3 [13] competition. This competition presents a specific situation in an imaginary city called St. Himark in which an earthquake occurred. The city is subdivided into 19 districts, each of which has a unique lifestyle, ensuring that St. Himark has a place for all of its 246,839 people. The source data include the city map and a table with 41,000 messages written from April 6 until April 10. We consider different periods of time to determine the most dangerous situations related to the earthquake.

4.1 Earthquake Research On top of each district of St. Himark, a number is displayed that corresponds to the count of relevant messages screened using a list of keywords. To analyze this situation, a set of keywords related to the earthquake was used, for example, "vibration," "shaking," "trembling," etc., as well as their synonyms and similar short phrases. Figure 2 shows the number of messages sent from different districts for two different periods: Fig. 2a shows the number of messages containing keywords for April 6 from 13:00 till 18:00, and Fig. 2b shows the number of messages

Fig. 2 Number of messages containing the selected keywords for two different time periods: a April 6, 13:00–18:00; b April 8, 7:00–13:00


containing keywords for April 8 from 7:00 till 13:00. It is clearly seen that the number of messages increases for the central and northwest districts. Figure 3 shows the results of the third experiment; though the durations of the time intervals selected in the second experiment (Fig. 2a) and the third experiment (Fig. 3) differ by more than three times, the numbers of messages registered for each region are approximately equal, which means that Fig. 2b corresponds to a period of high message activity that may mark the time when the main shocks of the earthquake occurred. Thus, it is possible to conclude that Fig. 2a reflects a discussion among the inhabitants of the city caused by the very first shocks of the earthquake. It is also possible to determine the districts with the first earthquake shocks. We also analyzed messages for other periods of time; let us consider another interesting case. Figure 4 shows how the number of messages containing the selected keywords ("vibration," "shaking," "trembling," etc.) changed during two days. The duration of the time interval in the fourth experiment (Fig. 4a) is five times less than the duration in the fifth experiment (Fig. 4b), but the numbers of messages are approximately equal, which means that Fig. 4a corresponds to a period of higher messaging activity. Analyzing the obtained statistics and the durations of the selected time intervals, it is possible to conclude that the second experiment (Fig. 2b) corresponds to the earthquake, and the fourth experiment (Fig. 4a) corresponds to the aftershock, i.e., tremors after the main earthquake. It should be noted that in all cases there are two areas characterized by a larger number of messages than others, and these are the central and northwest districts (Weston, Old Town, Downtown, Southton, Northwest). The eastern part of the city is characterized by a small number of messages. Thus, it is possible to conclude that

Fig. 3 Number of messages from 17:00 April 8 till 13:00 April 9


Fig. 4 Number of messages containing the selected keywords for two different time periods: a from 17:00 April 8 till 13:00 April 9; b from 18:00 April 9 till 10:00 April 10

the west and central parts of the city experienced the peak of tremors, and attention should be paid to these areas, while the eastern part was almost not affected by the earthquake.

4.2 Resource Research The second part of the experiments was devoted to finding messages related to resources necessary during the emergency. A list of words related to resources was used to determine which areas of the city need food, water, and the help of doctors according to their messages. We assumed that resources and help are required after the main earthquake, which means we need to choose messages written after the main shocks. So, the time interval from 1:00 April 8 till 23:00 April 9 was chosen, since it captures the time of the main earthquake and the aftershock. Figure 5 shows the number of messages related to the list of resource-associated words during this interval. The number of people who wrote these messages from different areas of the city can be determined from the pie chart. According to this diagram, we can conclude that most of the people who wrote about resources and help are in the Weston, Southton, and Downtown districts. These are the same areas identified during the analysis of the earthquake location. In addition, looking at the map, these areas are neighbors, which means that the earthquake shocks damaged these areas of the city most of all, and they need the help of emergency services and the immediate response of the city authorities. The output files show that the program processes the data correctly: the results on the map and pie chart for the same filters agree, the number of users does not exceed the number of messages, and after conducting


Fig. 5 Number of messages from 1:00 April 8 till 23:00 April 9

several experiments, many conclusions can be drawn from the analysis of people's communication in social networks.

5 Conclusion The aim of this research was to develop an approach to social network data preparation and analysis and to create an application for detecting emergencies through message analysis on social networks in order to manage emergency response activities. The proposed approach includes several message filtering scenarios, and the filtering criteria are discussed. The authors' contribution is the idea that not only given keywords can be used to filter messages but also certain typing mistakes related to the conditions of an emergency can be analyzed. Manual analysis showed that the source data include a number of messages with more than two repeated symbols in a row. We assumed that this could be related to earth trembling, so such messages were included in the analysis for the experiments. In future research, other more significant kinds of mistakes could be found and explained. The designed application was implemented and tested on a real dataset, and the analysis of these data with the developed application allowed us to draw conclusions that can help in distributing emergency assistance among the areas of the studied city where the emergency occurred. In this paper, the approach was tested on a static dataset, and the source data for the application is a single file with all messages. We are planning to enhance our application by adapting it to real-time monitoring, elaborating criteria for setting thresholds


for notifying about the need for an emergency response, and adding interactive data visualization supporting flexible data filtering options. Another direction of research is parallelizing the developed algorithm, which will ensure efficient allocation of resources and increase productivity, because for such applications an important requirement is the ability to process a very large amount of data.

References 1. Lopatovska, I., Rink, K., Knight, I., Raines, K., Cosenza, K., Williams, H., Martinez, A.: Talk to me: exploring user interactions with the Amazon Alexa. J. Librarianship Inform. Sci. 51(4), 984–997 (2019) 2. Innovative uses of social media in emergency management. https://www.dhs.gov/sites/default/ files/publications/Social-Media-EM_0913-508_0.pdf. Last accessed 2020/06/29 3. Zellers, R., Yatskar, M., Thomson, S., Choi, Y.: Neural motifs: scene graph parsing with global context. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5831–5840 (2018) 4. Yandex.Blogs. https://yandex.ru/blogs. Last accessed 2020/06/29 5. Google Analytics. https://analytics.google.com/analytics. Last accessed 2020/06/29 6. Leaderator.pro. https://leaderator.pro/. Last accessed 2020/06/29 7. Audience. https://business.linkedin.com/marketing-solutions/native-advertising/linkedin-aud ience-network. Last accessed 2020/06/29 8. Brandwatch. https://www.brandwatch.com/. Last accessed 2020/06/29 9. Brandwatch, Consumer Research. https://www.brandwatch.com/products/consumer-res earch/. Last accessed 2020/06/29 10. Simon, T., Goldberg, A., Adini, B.: Socializing in emergencies—a review of the use of social media in emergency situations. Int. J. Inf. Manag. 35(5), 609–619 (2015) 11. Li, L., Zhang, Q., Tian, J., Wang, H.: Characterizing information propagation patterns in emergencies: a case study with Yiliang Earthquake. Int. J. Inf. Manag. 38(1), 34–41 (2018) 12. Lwin, M.O., Lu, J., Sheldenkar, A., Cayabyab, Y.M., Yee, A.Z.H., Smith, H.E.: Temporal and textual analysis of social media on collective discourses during the Zika virus pandemic. BMC Pub. Health 20, 1–9 (2020) 13. VAST Challenge 2019. https://vast-challenge.github.io/2019/. Last accessed 2020/06/29

Intelligent Simulation of Competitive Behavior in a Business System Dmytro Chumachenko , Sergiy Yakovlev , Ievgen Meniailov , Kseniia Bazilevych , and Halyna Padalko

Abstract Modern economics makes extensive use of mathematical methods, both for solving practical problems and for modeling socio-economic phenomena and processes. As part of this study, a model has been developed in which society is divided into three groups that follow different behavioral strategies in different situations: the strategies of "Hawk," "Dove," and "Law-abiding." The proposed model is implemented in the MATLAB/Simulink package. With the help of the model, the dynamics of the development of the population using each of the strategies and the rate of "reproduction" of the system players applying each of the strategies are calculated. The system is tested for stability. Keywords Model of competitive behavior in a business system · Equilibrium points · Stability characteristics · Economics modelling

1 Introduction Economic management is not only the redistribution of resources (financial, material, labor, etc.), but also the ability to predict the results of this redistribution [1–4]. There are many economic, financial, social, political and other indicators characterizing the state of the economic system [5]. The ability to predict losses, their magnitude and time of occurrence, as well as to weigh the losses against useful acquisitions, allows many management errors in the economic system to be avoided [6]. Modern economics makes extensive use of mathematical methods, both for solving practical problems and for modeling socio-economic phenomena and processes [7]. Mathematical models are the most important tool for research and forecasting. They are the basis of computer modeling and information processing [8],

D. Chumachenko (B) · S. Yakovlev · I. Meniailov · K. Bazilevych · H. Padalko National Aerospace University "Kharkiv Aviation Institute", Kharkiv, Ukraine I. Meniailov e-mail: [email protected] K. Bazilevych e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_5


provide deeper insights into the patterns of economic processes [9], and contribute to the formation of a way of thinking and analysis at a new, higher level [10]. Today, in the context of the globalization of the world economy and the emergence of a new type of society, the information society, mathematical models are becoming a powerful tool for predicting the evolution of civilization [11], which allows determining the optimal highways of economic development, primarily in terms of human life [12]. With the further development of society, it is becoming more and more important to develop ways to improve economic relations in terms of the optimal use of all natural, industrial, material and labor resources [13]. With the help of an economic model, it is possible to analyze and adequately respond to a variety of situations caused by continuously changing conditions in which a socio-economic object functions [14–16]. Formulation of the problem: the rules for the competitive behavior of players in the economic system are defined. It is necessary:

1. to formulate the rules for winning for the players of the original model;
2. to develop a software model of competitive behavior in a business system;
3. to determine the equilibrium points and system stability characteristics.

2 Model of Competitive Behavior in the Business System The developed model assumes that society is divided into three groups that follow different behavioral strategies in different situations: the strategies of "Hawk", "Dove" and "Law-abiding" [17]. Conflicts arise when players collide with each other. Suppose that there are three possible behaviors: demonstration, fighting, and running [18]. Assume that individuals respond to confrontation in a limited number of ways, and let each individual act according to one of the strategies specified in Table 1. An individual applying strategy $i \in \{1, 2, 3\}$ against an opponent applying strategy $j \in \{1, 2, 3\}$ receives a payoff $a_{ij}$. It is assumed that the payoffs $a_{ij}$ affect the behavior of the economic system. Suppose that only pure strategies are applied, i.e., each individual always belongs to the same type and applies the same strategy, and that offspring inherit the strategy of the parent [6].

Table 1 Tactics of players for given strategies

Number i   Strategy      Initial tactic   Tactics if the opponent enters the fray
1          Hawk          Fight            Fight
2          Dove          Demonstration    Run
3          Law-abiding   Fight            Run

Let $x_i$ be the fraction of the players of the economic system applying strategy $i$. Then

$\sum_{i=1}^{3} x_i = 1$  (1)

where $x_i \ge 0$. The payoff for players using strategy $i$ against all others is

$(Ax)_i = \sum_{j=1}^{3} a_{ij} x_j$  (2)

where $A$ is the payoff matrix. (Each side of the matrix corresponds to a player: the rows determine the strategies of the first player, and the columns those of the second. At the intersection of two strategies is the payoff the players receive.) The mean payoff over all players is

$\sum_{i=1}^{3} x_i (Ax)_i = x^{T} A x$  (3)

Therefore, the "benefit" (advantage) of applying strategy $i$ is equal to

$(Ax)_i - x^{T} A x$  (4)

The reproduction rate of the group applying strategy $i$ is considered to be proportional to the advantage of this strategy, which gives

$\dot{x}_i = x_i \left( (Ax)_i - x^{T} A x \right)$  (5)

Equation (5) makes sense only for those points of the space $\mathbb{R}^3$ that satisfy condition (1), that is, for the region of possible strategies. We can obtain the payoff matrix by setting "points" for the result of each collision, for example: victory = 6, defeat = 0, injury = −10, loss of time = −1. The specific values given here are not essential; their signs and the order of their absolute values are what matters. If a "hawk" meets a "dove" or a "law-abiding" player, it wins, so that $a_{12} = a_{13} = 6$. If two "hawks" meet, they fight until one of them is injured. Both "hawks" win with equal probability, and the expected payoff is $a_{11} = 0.5 \cdot (6 - 10) = -2$. If a "dove" meets a "hawk" or a "law-abiding" player, it loses, therefore $a_{21} = a_{23} = 0$, but two "doves" continue their demonstrations to each other until one of them surrenders, so $a_{22} = 0.5 \cdot (6 + 0) - 1 = 2$. Finally, "law-abiding" players lose to "hawks" ($a_{31} = 0$), win against "doves" ($a_{32} = 6$) and have a 50% chance of winning against their own kind ($a_{33} = 0.5 \cdot (6 + 0) = 3$). In this way



$A = \begin{bmatrix} -2 & 6 & 6 \\ 0 & 2 & 0 \\ 0 & 6 & 3 \end{bmatrix}$  (6)

It is also useful to note that the advantage of a strategy does not change if a constant is added to any column of the matrix $A$. Using such a transformation, the matrix $A$ can be simplified by making its diagonal elements equal to zero; the dynamic Eq. (5) does not change. Therefore, we can assume

$A = \begin{bmatrix} 0 & 4 & 3 \\ 2 & 0 & -3 \\ 2 & 4 & 0 \end{bmatrix}$  (7)
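A short numpy check of this column-shift argument is given below (an illustration, not part of the authors' MATLAB model): subtracting each column's diagonal entry from that column turns matrix (6) into matrix (7) and leaves the advantages (4), and hence the dynamics (5), unchanged.

```python
import numpy as np

A6 = np.array([[-2, 6, 6],
               [ 0, 2, 0],
               [ 0, 6, 3]], dtype=float)

# Subtract each column's diagonal entry from that column (broadcast over rows).
A7 = A6 - np.diag(A6)
print(A7)                       # [[0, 4, 3], [2, 0, -3], [2, 4, 0]], i.e. matrix (7)

x = np.array([0.2, 0.5, 0.3])   # an arbitrary point of the simplex (components sum to 1)
adv6 = A6 @ x - x @ A6 @ x      # advantages (4) under matrix (6)
adv7 = A7 @ x - x @ A7 @ x      # advantages (4) under matrix (7)
print(np.allclose(adv6, adv7))  # True: the dynamics (5) are unchanged
```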

3 Model of Competitive Behavior in the Business System The proposed model is implemented in the MATLAB/Simulink package [19]. This is an interactive tool for modeling, simulation and analysis of dynamic systems. It provides an opportunity to build graphic block diagrams, simulate dynamic systems, investigate the performance of systems and improve projects [20]. MATLAB/Simulink gives the user immediate access to a wide range of analysis and design tools, which makes Simulink a convenient tool for designing control and communication systems, digital processing and other modeling applications [21]. The software implementation of the competitive behavior model in the business system in the MATLAB/Simulink environment is shown in Figs. 1, 2 and 3. The software implementation of this model has four subsystems:

1. $x^{T} A x$ (Fig. 2);
2. $(Ax)_1$, $(Ax)_2$ and $(Ax)_3$ (Fig. 3).

Using the constructed model, the following results can be obtained:

1. a graph of the dynamics of the development of the population applying each of the strategies (Fig. 4);
2. a graph of the rate of "reproduction" of the players of the system applying each of the strategies (Fig. 5).
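For readers without access to Simulink, a minimal Python sketch of the same replicator dynamics (5) with payoff matrix (7) is given below; it is an alternative illustration, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0, 4, 3],
              [2, 0, -3],
              [2, 4, 0]], dtype=float)

def replicator(t, x):
    payoff = A @ x
    # Eq. (5): x_i' = x_i * ((Ax)_i - x^T A x)
    return x * (payoff - x @ payoff)

# Initial shares as in Figs. 4 and 5: x1 = 0.1, x2 = 0.8, x3 = 0.1.
sol = solve_ivp(replicator, (0, 100), [0.1, 0.8, 0.1])
print(sol.y[:, -1].round(3))   # tends to the attractor (0.6, 0.0, 0.4)
```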

4 Finding Equilibrium Points An equilibrium point [22] is a point in the coordinate space of the system that characterizes its equilibrium state at a given moment. It is one of the stationary points of the function describing the behavior of the system; thus, all partial derivatives of the function vanish at the equilibrium point [23]. In mathematical programming, it is the point where the Lagrange function reaches a maximum in terms of the initial variables (direct problem) and a minimum in the Lagrange multipliers [24]. The principle of equilibrium occupies an important place in economic analysis [25]. In an economic system, equilibrium is established as a result of the action of a particular socio-economic mechanism, i.e., a combination of prices and other economic standards, and the coordination of the interests of all subsystems [26]. Equilibrium, in particular, depends on accepted economic relations, including the principles of the distribution of goods and incomes [27]. The concept of equilibrium is closely related to the concept of system stability [28]. If external influence on the system leaves its equilibrium properties unchanged, we are dealing with a stable equilibrium; in the opposite case, with an unstable one [29]. An equilibrium is called locally stable if it is ultimately reached starting from a set of prices close enough to the equilibrium point, and globally stable if it is ultimately reached regardless of the starting point [30–33].

Fig. 1 Model of competitive behavior in the business system

Fig. 2 Subsystem $x^{T} A x$

Fig. 3 Subsystems $(Ax)_1$, $(Ax)_2$ and $(Ax)_3$

We show that the dynamic Eq. (5), where the matrix $A$ is given by formula (7), has the fixed point $(x_1, x_2, x_3) = (3/5, 0, 2/5)$. We apply the function of "Lyapunov type"

$V(x) = x_1^{3/5} x_3^{2/5}$  (8)

and prove that this fixed point is asymptotically stable, with stability region

$\Delta = \{(x_1, x_2, x_3) \mid x_1 + x_2 + x_3 = 1; \; x_1, x_2, x_3 > 0\}$  (9)


Fig. 4 Graph of the dynamics of population development, applying each of the strategies, with the initial values x 1 = 0.1, x 2 = 0.8, x 3 = 0.1. The final values are x 1 = 0.6, x 2 = 0, x 3 = 0.4

Fig. 5 Graph of the rate of “reproduction” of the players of the system, applying each of the strategies, with the initial values x 1 = 0.1, x 2 = 0.8, x 3 = 0.1

To check whether the point $x = (3/5, 0, 2/5)$ is a fixed point of system (5), we note that $x^{T} A x = 5 \cdot (3/5)(2/5) = 6/5$. For $i = 1$ and $i = 3$ we have $(Ax)_i = 6/5$ and, therefore, $\dot{x}_1 = \dot{x}_3 = 0$; for $i = 2$, $\dot{x}_2 = 0$, since $x_2 = 0$.
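This check can be reproduced numerically in a couple of lines (an illustrative sketch, not the authors' code):

```python
import numpy as np

A = np.array([[0, 4, 3], [2, 0, -3], [2, 4, 0]], dtype=float)
q = np.array([3/5, 0.0, 2/5])

# Growth rates (Ax)_i - x^T A x at Q: [0, -6/5, 0].
# Components 1 and 3 vanish, and x_2 = 0 keeps x_2' = x_2 * (-6/5) = 0.
print(A @ q - q @ A @ q)
```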


The point $(3/5, 0, 2/5)$ is asymptotically stable on $\Delta$, by a "Lyapunov type" argument. The level surfaces of the function $V(x_1, x_2, x_3)$ are invariant with respect to shifts along the $x_2$ axis and intersect the plane $x_2 = 0$ along hyperbolas. On $\Delta$, the derivative of $V$ along the trajectories of (5) is

$\dot{V}(x) = V(x)\left(\frac{3\dot{x}_1}{5x_1} + \frac{2\dot{x}_3}{5x_3}\right) = V(x)\left[\left(\frac{3}{5}, 0, \frac{2}{5}\right)Ax - x^{T}Ax\right] = V(x)\left[(1 - x_1 - x_3)\left(\frac{11}{5} - x_1 - x_3\right) + 5\left(x_1 - \frac{3}{5}\right)^{2}\right]$  (10)

Consequently, the derivative $\dot{V}(x)$ is positive on $\Delta$, and $V$ increases along the trajectories to its maximum at the point $Q = (3/5, 0, 2/5)$. Thus, all trajectories in $\Delta$, as $t$ increases,

approach the point $Q$. This means that $\Delta$ is a subset of the stability region and cannot contain fixed points. Therefore, all fixed points of system (5) must lie on the boundary $\partial\Delta$. On the edge $HB$ ($x_2 = 0$), the equations $\dot{x}_1 = 0$ and $\dot{x}_3 = 0$ turn into $x_1(3x_3 - 5x_1 x_3) = 0$ and $x_3(2x_1 - 5x_1 x_3) = 0$, respectively. Thus, besides $Q$, there are fixed points $H = (1, 0, 0)$ and $B = (0, 0, 1)$. Similarly, on $BD$ ($x_1 = 0$) and $HD$ ($x_3 = 0$) there are fixed points $D = (0, 1, 0)$ and $P = (2/3, 1/3, 0)$; other fixed points on $\partial\Delta$ do not exist. We can determine the behavior of the trajectories on the boundary by noting that:

(a) on $HB$, $\dot{x}_1 > 0$ for $x_1 < 3/5$ and $\dot{x}_1 < 0$ for $x_1 > 3/5$;
(b) on $BD$, $\dot{x}_3 > 0$;
(c) on $HD$, $\dot{x}_2 > 0$ for $x_2 < 1/3$ and $\dot{x}_2 < 0$ for $x_2 > 1/3$.

Suppose that in a population consisting of "hawks" and "doves" a mutant appears that plays "law-abiding." The new state of the population corresponds to a phase point in $\Delta$ close to $HD$. Since all trajectories in $\Delta$ tend to the point $Q$ as $t$ increases, we can conclude that the state of the population tends to $x = (3/5, 0, 2/5)$ and that the "doves" will die out. Equilibrium points can be of two types:

1. Attractor: if the initial parameters fall into a certain pool of values, the end result will be almost the same.
2. Repeller: if the initial parameters are slightly changed, the final result will differ significantly [10].

In our case, the equilibrium points are attractors (e.g., with initial values $x_1 = 0.6$, $x_2 = 0$, $x_3 = 0.4$, we get the final values $x_1 = 0.6$, $x_2 = 0$, $x_3 = 0.4$; with initial values $x_1 = 0.1$, $x_2 = 0.8$, $x_3 = 0.1$ we also get the final values $x_1 = 0.6$, $x_2 = 0$, $x_3 = 0.4$).


5 Conclusions In the framework of the research, the assigned tasks were completed. The rules for winning players of the source system are described. The model of competitive behavior in a business system is implemented programmatically in MATLAB/Simulink. Equilibrium points and system stability characteristics are found, which allows selection of suitable input parameters of the system. It is shown that the equilibrium points are attractors and have a large pool of initial values.

References 1. Dotsenko, N., Chumachenko, D., Chumachenko, I.: Modeling of the processes of stakeholder involvement in command management in a multi-project environment. In: Proceedings of 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2018, pp. 29–32 (2018) 2. Dotsenko, N., Chumachenko, D., Chumachenko, I.: Project-oriented management of adaptive teams’ formation resources in multi-project environment. CEUR Workshop Proc. 2353, 911– 923 (2019) 3. Dotsenko, N., Chumachenko, D., Chumachenko, I.: Management of critical competencies in a multi-project environment. CEUR Workshop Proc. 2387, 495–500 (2019) 4. Dotsenko, N., Chumachenko, D., Chumachenko, I.: Modeling of the process of critical competencies management in the multi-project environment. In: Proceedings of IEEE 2019 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2019, pp. 89–93 (2019) 5. Chernyaeva, V., Wang, D.H.: Interpretation of economic systems models in international overview: traditional, the command-administrative, capitalism and mixed-economy systems: unified economic areas. Int. J. Econ. Manag. Sci. 7(3), 519 (2018) 6. Wesley, E., Peterson, F.: The role of population in economic growth. SAGE Open 7(4), 1–15 (2017) 7. Chumachenko, D., Yakovlev, S.: On Intellegent agent-based simulation of network worms propagation. In: 2019 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), pp. 3.11–3.13 (2019) 8. Chumachenko, D.: On intelligent multiagent approach to viral hepatitis b epidemic processes simulation. In: Proceedings of the 2018 IEEE 2nd International Conference on Data Stream Mining and Processing, DSMP 2018, pp. 415–419 (2018) 9. Chumachenko, D., Chumachenko, T.: Intelligent agent-based simulation of hiv epidemic process. Adv. Intell. Syst. Comput. 1020, 175–188 (2020) 10. Chumachenko, D., et al.: On agent-based approach to influenza and acute respiratory virus infection simulation. In: Proceedings of 14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET 2018, pp. 192–195 (2018) 11. Ying, H., et al.: Predicting key events in the popularity evolution of online information. PLoS ONE 12(1), 1–21 (2017) 12. Ginters, E., Aizstrauts, A., Eroles, M., Buil, R., Wang, B.: Economic development assessment simulator based on Yantai use case. Procedia Comput. Sci. 77, 22–32 (2015) 13. Yi, L., Huifang, L., Lei, Z.: Modeling and forecasting returns jumps using realized variation measures. Econ. Model. 76, 63–80 (2019)


14. Polyvianna, Y., Chumachenko, D., Chumachenko T.: Computer aided system of time series analysis methods for forecasting the epidemics outbreaks. In: 2019 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), pp. 7.1–7.4 (2019) 15. Chumachenko, D., Chumachenko, K., Yakovlev, S.: Intelligent simulation of network worm propagation using the code red as an example. In: Telecommunications and Radio Engineering, vol. 78(5), pp. 443–464 (2019) 16. Mashtalir, V.P., Yakovlev, S.V.: Point-set methods of clusterization of standard information. Cybern. Syst. Anal. 37(3), 295–307 (2001) 17. Rahman, S., Li, S.: A hybrid framework for modelling and simulation for deshopping behaviour and how companies respond. In: Latest Trends in Energy, Environment and Development: Proceedings of the 3rd International Conference on Energy Systems, Environment, Entrepreneurship and Innovation, pp. 176–181 (2014) 18. Neumann, M., Secchi, D.: Exploring the new frontier: computational studies of organizational behavior. In: Agent-Based Simulation of Organizational Behavior, pp. 1–16 (2016) 19. Chumachenko, D., et al.: Intelligent expert system of knowledge examination of medical staff regarding infections associated with the provision of medical care. In: CEUR Workshop Proceedings, vol. 2386, pp. 321–330 (2019) 20. Meniailov, I., et al.: Using the K-means method for diagnosing cancer stage using the pandas library. In: CEUR Workshop Proceedings, vol. 2386, pp. 107–116 (2019) 21. Arnaldi, I.: Design of sigma-delta converters in MatLAB/Simulink, p. 243 (2019) 22. Gutierrez, R., Vidal, C.: Stability of equilibrium points for a hamiltonian systems with one degree of freedom in one degenerate case. Regul. Chaotic Dyn. 22(7), 880–892 23. Mashtalir, V.P., Shlyakhov, V.V., Yakovlev, S.V.: Group structures on quotient sets in classification problems. Cybern. Syst. Anal. 50(4), 507–518 (2014) 24. Bialynicki-Birula, I., Kalinski, M., Eberly, J.: Lagrange equilibrium points in celestial mechanics and nonspreading wave packets for strongly driven Rydberg electrons. Phys. Rev. Lett. 73(13), 1777–1780 (1994) 25. Beenstock, M., Felsenstein, D.: Spatial econometric analysis of spatial general equilibrium. Spat. Econ. Anal. 13(3), 356–368 (2017) 26. Mohammed, T.: Simulation of the impact of economic policies on poverty and inequality: GEM in micro-simulation for the Algerian economy. Int. Rev. Appl. Econ. 32(3), 308–330 (2018) 27. Mazorchuck, M., Dobriak, V., Chumachenko, D.: Web-Application development for tasks of prediction in medical domain. In: 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT), pp. 5–8 (2018) 28. Ilic, M., Jaddivada, R., Miao, X.: Modeling and analysis methods for assessing stability of microgrids. IFAC PapersOnLine 50(1), 5448–5455 (2017) 29. Chumachenko, D., et. al.: Development of an intelligent agent-based model of the epidemic process of syphilis. In: IEEE 2019 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2019, pp. 42–45 (2019) 30. Gerasin, S.N., Shlyakhov, V.V., Yakovlev, S.V.: Set coverings and tolerance relations. Cybern. Syst. Anal. 44(3), 333–340 (2008) 31. Otunuga, M.O.: Global stability of nonlinear stochastic SEI epidemic model with fluctuations in transmission rate of disease. Int. J. Stoch. Anal. 2017, 7 (2017) 32. 
Nechyporenko, et al.: Implementation and analysis of uncertainty of measurement results for lower walls of maxillary and frontal sinuses. In: Proceedings of 2020 IEEE 40th International Conference on Electronics and Nanotechnology, ELNANO 2020, pp. 460–463 (2020) 33. Gargin, V., et al.: Application of the computer vision system for evaluation of pathomorphological images. In: Proceedings of 2020 IEEE 40th International Conference on Electronics and Nanotechnology, ELNANO 2020, pp. 469–473 (2020)

Minimizing the Subset of Features on BDHS Dataset to Improve Prediction on Pregnancy Termination Faisal Ahmed, Shahana Shultana, Afrida Yasmin, and Junnatul Ferdouse Prome

Abstract Predicting pregnancy termination and controlling the child mortality rate have always been great challenges for third-world countries. This research aims to extract the best subset of features to predict pregnancy termination more accurately than previous research. To facilitate this purpose, we have carried out an extensive study on the Bangladesh Demographic and Health Survey (BDHS) 2014 to find the attributes contributing most to pregnancy termination in Bangladesh. Bivariate and multivariate analyses of these data show interesting details regarding the recent causes of pregnancy termination. To find the intended features, feature selection is performed first demographically with the visualization tools provided by Weka and second with the feature-ranking attribute evaluators provided by Weka, such as Correlation, Gain Ratio, OneR, Symmetrical Uncertainty, Information Gain, and Relief. After minimizing the subset of features, we apply three traditional machine learning classifiers (Naïve Bayes, Bayesian Network, Decision Stump) along with a hybrid method, which shows better performance in terms of the performance metrics. This research improved accuracy by 10.238% for Naïve Bayes, 8.2657% for Bayesian Network, 3.5853% for Decision Stump and 9.03% for the hybrid method. Keywords BDHS · Attribute evaluator · Correlation · Gain ratio · Information gain · OneR · Relief · Symmetrical uncertainty

1 Introduction The main intention of the Millennium Development Goals is diminishing child mortality rates by reducing the pregnancy termination rate of the country. A few conscious, simple actions can reduce the rate of child mortality. Forward movement in diminishing

F. Ahmed (B) · S. Shultana Department of CSE, Daffodil International University, Dhaka, Bangladesh J. F. Prome Department of CSE, Z. H. Sikder University of Science and Technology, Sikder, Bangladesh A. Yasmin Department of Statistics, Jahangirnagar University, Jahangirnagar, Bangladesh © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_6


child fatality needs focused attention on coverage [1–3]. Taking firmer steps to raise awareness of this leading intention of the states becomes much more effective, because deficiencies in coverage harm newborns, and monitoring the execution of each step of population health interventions and services is demanding; understanding coverage plays an obvious role. An initial task in data mining and prediction applications is feature selection. This research improved classifier results by using the best subset of features, selected in two ways: we analyzed demographic features, and we used the different feature-ranking attribute evaluators provided with Weka, to extract the features of interest for predicting pregnancy termination, with better results than previous research [1]. The remaining portion of the paper is arranged as follows: Sect. 2 focuses on the related work done earlier on minimizing the subset of attributes, Sect. 3 explains the feature selection and classification task, Sect. 4 presents the proposed methodology, results and observations are given in Sect. 5, and the conclusion is given in Sect. 6.

2 Literature Review Various techniques have recently been proposed for gathering attributes using supervised and unsupervised methods [4, 5]. A supervised method may perform remarkably in a specific domain but might fail to do so in another [4]. Unsupervised natural language processing (NLP), on the other hand, follows syntactic procedures for extracting attributes; here, however, we are concerned with supervised techniques only. Isolating outliers from the gathered features is quite hard. Guyon and Elisseeff [6] outlined the main procedures for choosing attributes, which involve multivariate attribute selection, attribute construction, attribute ranking, attribute validity evaluation and efficient search techniques. Prediction of a patient's condition based on some features is a very useful process in medical science, where various machine learning algorithms are used [7]. Liu and Yu [8] found an ingenious way of feature selection by surveying feature selection algorithms. Six feature selection approaches were analyzed by Hall and Holmes [9], who developed ranked lists of features and used them on many datasets. Procedures for choosing attributes using SVM were proposed by Jong et al. [10]. The importance of attribute selection was emphasized by Ilczuk et al. [11] for figuring out whether patients require cardiac pacemaker implantation. Maximum entropy, support vector machines and Naive Bayes are three such machine learning procedures introduced by Pang et al. [12] in order to categorize movie reviews into positive and negative classes. Guyon and Elisseeff [6] introduced the necessity of attribute selection on SNP datasets for either machine learning or biology. Ahmed and Nazrul [13] studied the ubiquity of contraception use between working and non-working women using feature analysis. Mostafa [14] investigated urban women's "Domestic Violence," "Unwanted Pregnancy" and "Pregnancy Termination" through feature analysis; both works involved BDHS dataset analysis. Though very limited

Minimizing the Subset of Features on BDHS Dataset to Improve …

63

research exists on BDHS dataset feature analysis and ranking but there is necessity of attribute selection for BDHS datasets. In the computational point of view, attribute selection tries to develop the efficiency of prediction of the predictors by ignoring curse of dimensionality for giving more cost-effective and quicker predictors in order to make data understanding [8] and data visualization much easier. Multiple machine learning models [15] are merged or integrated by Hybrid machine learning systems. Merging different machine learning techniques often shows improved performance compared to using single machine learning or decision—making model [16, 17] because each of the machine learning techniques work differently and makes use of different portion of the problem space, by using various set of attributes. Individual restrictions of basic models can be decreased using Hybrid models, and these models can make use of their various generalization procedures. Machine learning depends on information from various objects and with various characteristics [18].

3 Feature Selection and Classification Task

One reason why the accuracy of a model might decrease is that the model learns from unnecessary attributes in the data. Feature selection is therefore important: it is a method for reducing the dimensionality of a dataset. Based on a specific relevance evaluation criterion, it chooses a small subset of the relevant attributes from the initial ones, which results in reduced computational cost, improved model interpretability, and higher precision of the learning model. Attributes are selected here using demographic feature selection and the attribute evaluators provided by Weka. In demographic feature selection, attribute sets are reduced using a graphics-based framework with the visualization tool provided by Weka. Filter-based feature ranking techniques rank features independently, without involving any learning algorithm: each attribute is scored by a certain method, and attributes are then selected depending on their scores. Some frequently used filter-based attribute ranking methods are applied in this work, such as Gain Ratio, Relief, Information Gain, Symmetrical Uncertainty, and Chi-Square. Symmetrical Uncertainty, Information Gain, and Gain Ratio are analysis factors based on entropy, which is rooted in information theory.

3.1 Correlation-Based Feature Selection

CFS is a filter technique that ranks attribute subsets using a correlation-based heuristic evaluation function. Irrelevant features should be ignored because of their insignificant relation with the class, and redundant attributes should be removed as they are strongly correlated with one or more of the remaining attributes. The acceptance of an attribute depends on how well it predicts classes in regions of the instance space not already predicted by other features. The attribute subset evaluation function of CFS is:

M_S = k·r̄_cf / √(k + k(k − 1)·r̄_ff)    (1)

where M_S is the heuristic merit of a feature subset S containing k features, r̄_cf is the mean correlation between class and attribute, and r̄_ff is the average correlation between attributes. The numerator of Eq. 1 indicates how predictive of the class the feature set is, and the denominator indicates how much redundancy there is among the features. Equation 1 ranks the attribute subsets. CFS uses three heuristic search strategies: forward selection, best first, and backward elimination. Forward selection starts with no features and adds a single feature at a time until no addition improves the evaluation. Backward elimination starts with the full attribute set and removes one feature at a time as long as the evaluation does not deteriorate. Best first search (BFS) may begin with either all or no attributes: in the former case the search moves backward, removing one feature at a time; in the latter it moves forward, adding one feature at a time. A stopping criterion prevents BFS from exploring the complete search space: if five successive subsets show no improvement over the current best subset, the search stops. Figure 1 shows the steps of the CFS algorithm and how it is used with a machine learning algorithm. Using the method of Fayyad and Irani [19], a copy of the training data is first discretized and then passed to CFS. CFS calculates feature–feature and feature–class correlations, and the subset with the highest merit is used to reduce the dimensionality of both the initial training data and the testing data; the reduced datasets then go through training and testing.
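As an illustration of Eq. 1 and the forward-selection strategy, the following minimal Python sketch computes the merit of a feature subset; it assumes numeric features and uses plain Pearson correlations for the feature–feature and feature–class relations, which is a simplification of the discretized correlations CFS actually uses.

import numpy as np

def cfs_merit(X, y, subset):
    """M_S = k*r_cf / sqrt(k + k(k-1)*r_ff), Eq. 1, for feature indices in `subset`."""
    k = len(subset)
    if k == 0:
        return 0.0
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        r_ff = 1.0
    else:
        pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def forward_selection(X, y):
    """Add one feature at a time until no addition improves the merit."""
    remaining, chosen, best = set(range(X.shape[1])), [], 0.0
    while remaining:
        f, m = max(((f, cfs_merit(X, y, chosen + [f])) for f in remaining),
                   key=lambda t: t[1])
        if m <= best:
            break
        chosen.append(f)
        remaining.discard(f)
        best = m
    return chosen, best

The same merit function can drive backward elimination or best first search; only the order in which candidate subsets are generated changes.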

3.2 Information Gain Based Feature Selection

If the only data at hand is a feature and the corresponding class distribution, then information gain (IG) measures the amount of information, in bits, available for class prediction [20]. Entropy, a measure used most often in information theory, is the basis of the IG feature ranking procedure; it quantifies the amount of uncertainty in a system. The entropy of Y is:

H(Y) = −∑_{y∈Y} p(y) log₂ p(y)    (2)


Fig. 1 CFS workflow

where p(y) refers to the marginal probability density function of the random variable Y. If, in the training data set S, the values of Y are partitioned according to the values of another feature X, and the entropy of Y prior to partitioning is higher than the entropy of Y after partitioning by X, then a relation exists between features X and Y. The entropy of Y after observing X is:

H(Y|X) = −∑_{x∈X} p(x) ∑_{y∈Y} p(y|x) log₂ p(y|x)    (3)

where p(y|x) denotes the conditional probability of y given x. Taking entropy as an evaluation criterion for impurity in the training set S, we can define a measure of the additional information about Y provided by X, reflecting the extent to which the entropy of Y decreases. This criterion is known as IG:


IG = H(Y) − H(Y|X) = H(X) − H(X|Y)    (4)

As the equation above shows, IG is a symmetrical measure: the information gained about Y after observing X equals the information gained about X after observing Y. Although IG is a fine measure, it is not perfect for deciding the relevance of a feature: a noticeable bias arises when it is applied to features that can take many distinct values.

3.3 Symmetrical Uncertainty

A probabilistic model of a nominal-valued attribute Y can be built from the training data by estimating each of the probabilities of the values y ∈ Y. If this model is used to determine the value of Y for a sample, then the entropy of the model is the approximate number of bits required to produce the correct output; entropy is thus an estimate of the unpredictability or uncertainty in a system. The entropy of Y is:

H(Y) = −∑_{y∈Y} p(y) log₂ p(y)    (5)

Equation 6 shows the entropy of Y after observing X:

H(Y|X) = −∑_{x∈X} p(x) ∑_{y∈Y} p(y|x) log₂ p(y|x)    (6)

The information about Y provided by X, i.e., the extent to which the entropy of Y decreases, is called the information gain, also known as mutual information:

gain = H(Y) − H(Y|X) = H(X) − H(X|Y) = H(Y) + H(X) − H(X, Y)    (7)

The bias of information gain towards features with more values is counterbalanced by symmetrical uncertainty [21], which normalizes the value to the range [0, 1].
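The following small Python sketch implements the entropy, conditional entropy, and gain measures of Eqs. 5–7 for nominal-valued features; the normalization SU = 2·gain / (H(X) + H(Y)) is the standard form of symmetrical uncertainty and is stated here as an assumption, since the paper describes the normalization only informally.

import numpy as np

def entropy(y):
    """H(Y) = -sum_y p(y) log2 p(y), Eq. 5."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def conditional_entropy(y, x):
    """H(Y|X) = -sum_x p(x) sum_y p(y|x) log2 p(y|x), Eq. 6."""
    vals, counts = np.unique(x, return_counts=True)
    p_x = counts / counts.sum()
    return float(sum(p * entropy(y[x == v]) for v, p in zip(vals, p_x)))

def gain(y, x):
    """gain = H(Y) - H(Y|X), Eq. 7; symmetric in X and Y."""
    return entropy(y) - conditional_entropy(y, x)

def symmetrical_uncertainty(y, x):
    """SU = 2*gain / (H(X) + H(Y)), normalized to [0, 1] (standard form)."""
    return 2.0 * gain(y, x) / (entropy(x) + entropy(y))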

3.4 Gain Ratio

In decision tree learning, the information gain ratio is the ratio of the information gain to the intrinsic information. When selecting a feature, it is used to decrease the bias towards multi-valued features by taking the number and size of the branches into account. Information gain is also called mutual information. With the intrinsic value H(X|Y) and information gain H(X) − H(X|Y), the gain ratio is:

IGR = (H(X) − H(X|Y)) / H(X|Y)    (8)

3.5 'Relief' Feature Selection

Relief algorithms estimate attributes effectively: they provide a view of feature evaluation in both recognition and regression, and they can detect conditional dependencies between attributes. Apart from being used as a preprocessing method for choosing an attribute subset before a model is learned, they have also been used successfully in various settings such as inductive logic programming and regression tree learning. A Relief algorithm outputs, for each attribute, a weight between −1 and +1; the more positive the weight, the more predictive the attribute. A sample is chosen from the dataset, and its nearest-neighbour instances are found: one belonging to the same class and one belonging to the opposite class. If a change in an attribute value is accompanied by a change in class, the attribute is weighted up, based on the intuition that the attribute change may be responsible for the class change. If an attribute value changes while the class does not, the attribute is weighted down, based on the observation that the attribute change has no effect on the class. This weight update is performed either for a random sample of the data or for the entire dataset, and the mean of the updated weights gives the final weight in the range [−1, +1].
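A compact Python sketch of the weight update just described is given below, under the assumption of numeric features scaled to [0, 1]; the Manhattan-distance neighbour search and the sample count are illustrative choices, not the exact procedure used in Weka.

import numpy as np

def relief(X, y, n_samples=100, rng=None):
    """Relief weights in [-1, +1] for numeric features scaled to [0, 1].

    Assumes every class has at least two instances; nearest neighbours
    are found with Manhattan distance (an illustrative choice).
    """
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_samples):
        i = rng.integers(n)
        same = np.setdiff1d(np.where(y == y[i])[0], [i])
        diff = np.where(y != y[i])[0]
        hit = same[np.argmin(np.abs(X[same] - X[i]).sum(axis=1))]   # nearest same-class
        miss = diff[np.argmin(np.abs(X[diff] - X[i]).sum(axis=1))]  # nearest other-class
        # attribute changes across classes raise the weight;
        # attribute changes within the same class lower it
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_samples
    return w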

3.6 Classifiers

A classifier is a machine learning model that distinguishes objects according to certain attributes. The classifier is also known as the learning algorithm by which the model acquires knowledge from training data. The classifiers used in our research are summarized below.

Naïve Bayes Classifier. The naive Bayes classifier makes strong independence assumptions and rests on Bayes' theorem [1]. It is one of the most common text classification procedures, with many applications such as personal email sorting, language detection, sentiment detection, and email spam detection. According to Bayes' theorem,

P(A|B) = P(B|A)·P(A) / P(B)    (9)


Bayesian Network. Probabilistic graphical models, variously called Bayesian nets, Bayesian models, or belief networks, represent random variables and their conditional dependencies using a directed acyclic graph (DAG) [1]. For instance, the probabilistic relations between symptoms and diseases can be represented by a Bayesian network: given the symptoms, the network can determine the probabilities that various diseases are present.

Decision Stump. This machine learning model is a single-level decision tree [22], in which one internal node is attached directly to the terminal nodes. A decision stump makes a prediction based on only one input feature; such models are also known as 1-rules.

Hybrid Classifiers. A hybrid classifier is a combination of other classifiers and generally shows better performance than its components [1]. Hybrid classifiers combine the methods of other classifiers to obtain better prediction results. Every machine learning classifier has individual limitations; hybrid machine learning models can reduce these individual restrictions and exploit the various generalization techniques of their components.

4 Proposed Methodology

The BDHS 2014 dataset has very high dimensionality, with 4013 attributes and 17,842 instances, which is nearly impossible to evaluate with an open-source tool like Weka. However, not all the attributes are relevant to pregnancy termination. We therefore analyzed the BDHS 2014 dataset and first selected the attributes that carry information about pre-birth conditions and living details with respect to the class attribute. Demographic feature selection was then performed, using the Weka visualization tool, on those features showing an association with pregnancy termination. After the demographic analysis, we obtained the features of the dataset that show a significant relation with the class attribute, thereby reducing the dataset's dimensionality. In every case of the demographic analysis, we plotted the other variables on the X axis against the class attribute (ever had a pregnancy termination) on the Y axis. The class attribute is labeled with two values, 0 and 1: the red marks in the figures indicate pregnancy termination (value 1), and the blue marks indicate safe birth (value 0). Figure 2a plots "other such pregnancies" on the X axis against "ever had a pregnancy termination" on the Y axis; both attributes are labeled with the two values 0 and 1. The observation in this case is that pregnancy termination occurs


Fig. 2 a Other such pregnancies versus ever had a pregnancy termination, b all women factor educational versus ever had a pregnancy termination, c current pregnancy wanted versus ever had a pregnancy termination, d highest year of education versus ever had a pregnancy termination

for all values of the attribute "other such pregnancies" except 0.0. Figure 2b plots "all women factor educational" on the X axis against "ever had a pregnancy termination" on the Y axis; it shows that as women's education increases, the rate of pregnancy termination becomes lower than the rate of safe birth, so this attribute is also helpful for reproductive health. Figure 2c shows that as the value of "current pregnancy wanted" increases, the rate of pregnancy termination is likewise lower than the rate of safe birth. Figure 2d shows that, across the values of the attribute "highest year of education", the rate of pregnancy termination is lower than the rate of safe birth, especially for the values 1.0, 0.0, 6.0, 7.0, and 8.0. Once the dataset was reduced to a lower dimensionality, we applied the different feature-ranking attribute evaluators provided by Weka (i.e., Correlation, OneR, Relief) to rank the features and obtain an initially ranked feature set. In this way we selected the attributes most relevant to the attribute "Pregnancy Termination" and then used different classifiers (i.e., Naïve Bayes, Bayes Net, Decision Stump) to gain insight into the features by evaluating performance metrics, discarding the lowest-ranked feature at every iteration as long as performance did not degrade for any of the classifiers. When the performance dropped, the union of the features ranked by all the attribute evaluators


in the previous iteration was selected as the desired feature set. Hence the dataset is reduced to a lower dimensionality. Figure 3 shows the process of feature selection.

Fig. 3 Flow chart of feature selection
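As an illustrative sketch of the elimination loop in Fig. 3, the Python fragment below drops the lowest-ranked feature per iteration and stops as soon as any classifier degrades; `evaluate` is a hypothetical stand-in for training and scoring a Weka classifier on a feature subset.

def minimize_subset(ranked, evaluate, classifiers):
    """Iteratively discard the last-ranked feature until performance drops.

    `ranked`      : feature names ordered best-first by the attribute evaluators
    `evaluate`    : hypothetical callable -> score of a classifier on a feature set
    `classifiers` : e.g. ["NaiveBayes", "BayesNet", "DecisionStump"]
    """
    scores = {c: evaluate(c, ranked) for c in classifiers}
    while len(ranked) > 1:
        candidate = ranked[:-1]                      # discard the last-ranked feature
        new_scores = {c: evaluate(c, candidate) for c in classifiers}
        if any(new_scores[c] < scores[c] for c in classifiers):
            return ranked                            # performance dropped: keep previous set
        ranked, scores = candidate, new_scores
    return ranked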


5 Result and Observation

To extract the features of interest from the BDHS 2014 dataset, we applied our proposed methodology described in Sect. 4, which shows better performance relative to previous research not only in accuracy (Table 2) but also in other performance metrics such as TPR, FPR, Precision, Recall, and F-measure (Fig. 4; Table 1).

Fig. 4 Performance metrics comparison (without feature selection vs. with feature selection): a comparison using Naïve Bayes, b comparison using Bayesian Network, c comparison using Decision Stump, d comparison using Hybrid classifier

Table 1 Confusion matrix

                      Actual
Predicted             Negative    Positive
Negative              TN          FN
Positive              FP          TP

Table 2 Accuracy comparison among classifiers (without feature selection vs. with feature selection)

Classifiers       Accuracy of previous research (%)   Accuracy of our research (%)
Naïve Bayes       52.97                               63.21
Decision Stump    85.17                               93.44
Bayesian Net      66.69                               70.28
Hybrid            67.2                                76.23


Accuracy: overall correctness of the classifier,

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (10)

True Positive Rate: how often the classifier correctly predicts "yes",

TPR = TP / (TP + FN) = Recall    (11)

False Positive Rate: how often the classifier incorrectly predicts "yes",

FPR = FP / (TN + FP)    (12)

Precision: how often a predicted "yes" is actually "yes",

Precision = TP / (TP + FP)    (13)

F-measure:

F = 2 · (Precision · Recall) / (Precision + Recall)    (14)
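For concreteness, Eqs. 10–14 can be computed directly from the confusion-matrix counts of Table 1, as in this small Python sketch:

def metrics(tp, tn, fp, fn):
    """Eqs. 10-14 from raw confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                              # TPR, Eq. 11
    fpr = fp / (tn + fp)                                 # Eq. 12
    precision = tp / (tp + fp)                           # Eq. 13
    f1 = 2 * precision * recall / (precision + recall)   # Eq. 14
    return accuracy, recall, fpr, precision, f1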

Figure 4a compares the performance parameters of the Naive Bayes classifier before and after feature selection; the parameters improve after feature selection. Figure 4b–d make the same comparison for the Bayesian Network, Decision Stump, and Hybrid classifier models, respectively, and in each of the remaining three figures the performance parameters likewise show better results after feature selection than before.

6 Conclusion

In any developing or third-world country, abnormal pregnancy termination has always been an imminent threat; hence, any research that helps to find the heart of this predicament acts as a helping hand. In this research we performed multivariate and bivariate analyses on the BDHS 2014 dataset. We investigated the feature-ranking attribute evaluators provided by Weka (i.e., Correlation, Gain Ratio, Information Gain, OneR, Relief, Symmetrical Uncertainty) with predefined classifiers (i.e., Bayesian Net, Naive Bayes, Decision Stump) to extract the features of interest that contribute to predicting pregnancy termination. In the future, we plan to implement a mixture of built-in classifiers and evaluate it using performance metrics.


References

1. Ahmed, F., Shams, M.M.B., Shill, P.C., Rahman, M.: Classification on BDHS data analysis: hybrid approach for predicting pregnancy termination. In: 2nd International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1–6. IEEE, Cox's Bazar, Bangladesh (2019). https://doi.org/10.1109/ecace.2019.8679302
2. Lawn, J.E., Cousens, S., Zupan, J.: 4 million neonatal deaths: When? Where? Why? Lancet 365(9462), 891–900 (2005)
3. Kerber, K.J., de Graft-Johnson, J.E., Bhutta, Z.A., Okong, P., Starrs, A., Lawn, J.E.: Continuum of care for maternal, newborn, and child birth: from slogan to service delivery. Lancet 370, 1358–1369 (2007)
4. Boetticher, G., Menzies, T., Ostrand, T.: Promise repository of empirical software engineering data. West Virginia University, Department of Computer Science (2007). http://promisedata.org/repository
5. Cameron, A.C., Trivedi, P.K.: Regression Analysis of Count Data, 2nd edn. Econometric Society Monograph No. 53, Cambridge University Press, NY, USA (1998)
6. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003)
7. Ghosh, P., Hasan, M.Z., Jabiullah, M.I.: A comparative study of machine learning approaches on dataset to predicting cancer outcome. J. Bangladesh Electron. Soc. 18(1–2), 81–86 (2018)
8. Liu, H., Yu, L.: Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 17(4), 491–502 (2005). https://doi.org/10.1109/TKDE.2005.66
9. Hall, M.A., Holmes, G.: Benchmarking attribute selection techniques for discrete class data mining. IEEE Trans. Knowl. Data Eng. 15(6), 1437–1447 (2003). https://doi.org/10.1109/TKDE.2003.1245283
10. Jong, K., Marchiori, E., Sebag, M., van der Vaart, A.: Feature selection in proteomic pattern data with support vector machines. In: Proceedings of the 2004 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, pp. 41–48. IEEE, La Jolla, CA, USA (2004). https://doi.org/10.1109/cibcb.2004.1393930
11. Ilczuk, G., Mlynarski, R., Kargul, W., Wakulicz-Deja, A.: New feature selection methods for qualification of the patients for cardiac pacemaker implantation. In: Computers in Cardiology, vol. 200, pp. 423–426. IEEE, Durham, NC, USA (2007)
12. de Souza, J.T., Japkowicz, N., Matwin, S.: STochFS: a framework for combining feature selection outcomes through a stochastic process. In: 9th European Conference on Principles and Practice of Knowledge Discovery in Databases, Porto, Portugal, Oct 3–7, 2005, pp. 667–674
13. Islam, A.Z., Mondol, M.N.I., Islam, M.R.: Prevalence and determinants of contraceptive use among employed and unemployed women in Bangladesh. Int. J. MCH AIDS 5(2), 92–102 (2016). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5187648/
14. Kamal, S.M.M.: Domestic violence, unwanted pregnancy and pregnancy termination among urban women of Bangladesh. J. Fam. Reprod. Health 7(1), 11–22 (2013)
15. Cherkassky, V., Mulier, F.M.: Learning from Data: Concepts, Theory, and Methods, 2nd edn. Wiley–IEEE Press (2007)
16. Wozniak, M.: Hybrid Classifiers: Methods of Data, Knowledge, and Classifier Combination. Studies in Computational Intelligence, vol. 519. Springer (2014)
17. Brazdil, P., Giraud-Carrier, C., Soares, C., Vilalta, R.: Metalearning: Applications to Data Mining. Springer (2009)
18. Rajaraman, A., Leskovec, J., Ullman, J.D.: Mining of Massive Datasets, 1st edn. Cambridge University Press, NY, USA (2011)
19. Fayyad, U.M., Irani, K.B.: Multi-interval discretization of continuous valued attributes for classification learning. In: 13th International Joint Conference on Artificial Intelligence, vol. 2, pp. 1022–1027. Morgan Kaufmann, Chambéry, France (1993)
20. Roobaert, D., Karakoulas, G., Chawla, N.V.: Information gain, correlation and support vector machines. In: Guyon, I., Nikravesh, M., Gunn, S., Zadeh, L.A. (eds.) Feature Extraction. Studies in Fuzziness and Soft Computing, vol. 207. Springer, Berlin, Heidelberg (2006). https://doi.org/10.1007/978-3-540-35488-8_23
21. Rajaraman, A., Leskovec, J., Ullman, J.D.: Mining of Massive Datasets. Cambridge University Press (2011)
22. Iba, W.F., Langley, P.: Induction of one-level decision trees. In: Sleeman, D., Edwards, P. (eds.) Proceedings of the Ninth International Conference on Machine Learning, pp. 233–240. Morgan Kaufmann, San Mateo, CA (1992)

PUNER-Parsi ULMFiT for Named-Entity Recognition in Persian Texts

F. Balouchzahi and H. L. Shashirekha

Abstract Named Entity Recognition (NER) is an information extraction technique for the automatic recognition of named entities and their classification in natural language text. Applications of NER include extracting named entities from texts such as academic, news, and medical documents, content classification for news providers, improving search algorithms, etc. Most NER research has been explored for high-resource languages such as English, German, and Spanish. Much less NER work has been done for low-resource languages such as Persian, Indian, and Vietnamese, due to the lack of annotated corpora for these languages; among these, very few works have been reported for Persian NER so far. Hence, this paper presents PUNER, a Persian NER system using a Transfer Learning (TL) model that makes use of Universal Language Model Fine-tuning (ULMFiT) for NER in the Persian language. This is accomplished by training a language model on Persian wiki text and using that model to develop a system for identifying and extracting named entities from given Persian texts. The performance of the proposed model is compared with Deep Learning (DL) models using BiLSTM with five word embedding models, namely Fasttext, HPCA, Skipgram, Glove, and COBOW, and with a conventional Machine Learning (ML) model. All the models are evaluated on two Persian NER datasets, and the results illustrate that the TL model performs better than the ML and DL models. Keywords NLP · Named entity recognition · Persian language · Transfer learning · Machine learning · ULMFiT · Deep learning · Bidirectional LSTM

1 Introduction

A Named Entity (NE) is a noun representing the name of a person, place, organization, etc., typically in the newswire domain, and Named Entity Recognition (NER) is the process of recognizing NEs in a given text and classifying them into one of the predefined categories [1]. It is an important preprocessing technique for many applications


such as event detection from news, customer support for online shopping, knowledge graph construction, information retrieval, and question answering (QA) [2]. The information gained from the NER task is useful in its own right, and it also facilitates higher-level Natural Language Processing (NLP) tasks such as text summarization and machine translation [3]. Research in NER has mainly focused on high-resource languages such as English, German, and Spanish, which have a large number of digitally annotated resources [4]. However, due to the scarcity or inaccessibility of large volumes of annotated digital resources and the challenges associated with the language, NER for Persian has received much less attention [4]. Hence, this paper presents PUNER, a Persian NER system using a Transfer Learning (TL) model that makes use of Universal Language Model Fine-Tuning (ULMFiT) for NER in the Persian language. The rest of the paper is organized as follows: Sect. 2 gives an overview of the work carried out in the related area. TL, Machine Learning (ML), and Deep Learning (DL) models are explained in Sect. 3. The proposed methodology is discussed in Sect. 4, followed by experiments and results in Sect. 5. Section 6 gives the conclusion and throws light on future work.

2 Related Work

Researchers have developed many tools and techniques for identifying and classifying NEs in many languages and several domains, including the popular newswire domain. Amarappa et al. [1] proposed a supervised multinomial Naïve Bayes NER model for a Kannada corpus consisting of parts of the EMILLE (Enabling Minority Language Engineering) corpus together with articles collected from the internet and Kannada language books. The model, trained on a corpus of around 95,170 words, recognizes NEs with an average F-measure of 81% and a 10-fold cross-validation F-measure of 77.2%. An approach to Persian NER based on a DL architecture using BiLSTM-CRF was presented by Poostchi et al. [3], who also released ArmanPersoNERCorpus, an entity-annotated Persian dataset, and four different Persian Word Embedding (WE) models based on GloVe, CBOW, skip-gram, and HPCA. Their approach achieved an average F1 score of 77.45% using the skip-gram WE, the highest Persian NER F1 score reported in the literature. A variety of Long Short-Term Memory (LSTM) based models for sequence tagging were proposed by Huang et al. [5], who compared the performance of LSTM networks, bidirectional LSTM (BI-LSTM) networks, LSTM with a Conditional Random Field (CRF) layer (LSTM-CRF), and bidirectional LSTM with a CRF layer (BI-LSTM-CRF). Their models were tested on three NLP tagging tasks: Penn Treebank (PTB) POS tagging, CoNLL 2000 chunking, and CoNLL 2003 NE tagging. A web-based text classification application built by Indra et al. [6] classifies tweets into four predefined groups (health, music, sport, technology) using logistic regression. The classified tweets are presented according to the selected topics in


an efficient format, such as a graph or table, which also displays the accuracy of the text classification. Khormuji et al. [4] present NER in the Persian language using a local filters model and publicly available dictionaries to recognize Persian NEs. Their NER framework is composed of two stages: detection of NE candidates using dictionary lookups, followed by filtering of false positives. They report 88.95% precision, 79.65% recall, and 82.73% F1 score. Taghizadeh et al. [7] proposed a model for NSURL-2019 Task 7, which focuses on NER in Farsi. The objective of this task was to compare different approaches to finding phrases that specify NEs in Farsi texts, and to establish a standard test bed for future research on this task in Farsi. The best performance, an 85.4% F1 score, was obtained by the MorphoBERT system [8] based on phrase-level evaluation of seven classes of NEs including person, organization, location, date, time, money, and percent; it uses morphological features of Farsi words together with the BERT model and Bi-LSTM. Mahalakshmi et al. [9] proposed an NER system applying the Naïve Bayes algorithm, which takes Tamil text about temples as input. After preprocessing and parsing a dataset of 5000 documents collected from 'www.temples.dinamalar.com', the system identifies NEs in the temple domain with an accuracy of 79%. A text classification model based on ULMFiT, an effective, extremely simple, and efficient TL method, was proposed by Jeremy et al. [10]. They also proposed several novel fine-tuning techniques that prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. Their model was applied to three common text classification tasks, namely sentiment analysis, question classification, and topic classification, with all results reported in terms of error rates: sentiment analysis is evaluated on the binary movie review IMDb dataset and the binary and five-class versions of the Yelp review dataset; question classification is evaluated on the six-class version of the small TREC dataset, a dataset of open-domain, fact-based questions divided into broad semantic categories; and topic classification is evaluated on the large-scale AG news and DBpedia ontology datasets.

3 Learning Models

An NER system consists of two steps: recognizing the NEs and classifying them into one of a predefined set of labels or tags. The first step is typically a segmentation problem, where nouns or names are defined to be contiguous spans of tokens with no nesting; the second is a classification task that automatically assigns a suitable label/tag to a name from a predefined set [11]. NER can also be considered a sequence labeling problem, as every term/phrase in a sequence is assigned a label from a predefined set that includes 'other' as the label for non-NEs [12]. Researchers have explored several learning models for text classification as well as sequence labeling problems. Of late, TL has been gaining importance as


a learning model, showing better performance compared to conventional ML and DL models.

3.1 Machine Learning Model

Conventionally, in an ML NER system, sentences are extracted from raw text and tokenized into words, and every word is tagged with one of the predefined NER tags. This tagged data, in the form of word–tag pairs, is used to build the NER model, which is then used to predict the labels of input words/phrases. Large amounts of annotated data are required to build an ML model; once the model is trained, it is tested on the test set and its accuracy is computed. Instead of using a single ML model for text classification, it is advantageous to use an ensemble of learning models, just as the decision of a team may be preferred to that of an individual: the weakness of one learning model may be overcome by the strength of another. One such ensemble learning model is the voting classifier, which is made up of 'n' classifiers/learning models. All these classifiers accept the same input simultaneously and predict an appropriate tag for it; based on majority voting, the tag with the higher number of votes is assigned to the input. Figure 1 shows the structure of the voting classifier. The major drawbacks of the ML approach are the need for large annotated datasets and the tedious feature extraction process required to obtain good performance. Further, the strong dependence on domain knowledge for designing features makes the method difficult to generalize to new tasks.

Fig. 1 Structure of voting classifier


3.2 BiLSTM-Deep Learning Model

The major advantage of DL models over ML models is that they learn feature representations rather than hand-crafted features and perform classification in an end-to-end fashion. DL has paved the way for critical and revolutionary applications in almost every field. DL models for text classification involve the use of WEs for representing words and a learning model for classification. WEs have proven to be a powerful representation for characterizing the statistical properties of natural language [13]. WEs are pre-trained word representations that are a key component in many neural language understanding models; however, learning high-quality representations can be challenging. They model the complex characteristics of word use, such as syntax and semantics, and how these uses vary across linguistic contexts [14]. Since neural networks understand only numbers, WEs provide the text-to-numeric-vector conversion. Five popular WEs are described below.

Continuous Bag of Words (COBOW) and Skipgram. Word2vec is a technique for producing WEs for better word representation. It captures a large number of precise syntactic and semantic word relationships and represents a word as a vector so that semantically similar words are grouped together while dissimilar words lie far apart. Word2vec uses two architectures: COBOW and Skipgram. COBOW predicts the best-suited word, with the highest probability, for a given context, whereas Skipgram predicts the most appropriate context that can surround a given word. For example, given the context "Mangalore is a very […] city", the CBOW model would say that "beautiful" or "nice" is the most probable word, while words like "delightful" get less attention. In the case of Skipgram, given the word "delightful", the model would say there is a high probability that the context is "Mangalore is a very […] city", or some other relevant context.

Global Vectors (Glove) is a global log-bilinear regression model for learning word representations that outperforms other models on word analogy, word similarity, and NER tasks. It is an extension of the Word2vec method for efficiently learning word vectors [15]. GloVe constructs a word-context or word co-occurrence matrix using statistics across the entire text corpus, which results in better WEs.

HPCA is a simple spectral method comparable to PCA. First, the co-occurrence matrix is normalized row by row so that words are represented by proper discrete probability distributions. Then, the resulting matrix is transformed into a Hellinger space before applying PCA to reduce its dimensionality [3].

Fasttext is an extension of the Word2vec model that represents each word as an n-gram of characters and generates embeddings by summing the character n-gram representations, including for words that do not appear in the training corpus. The model is trained on Wikipedia texts, news corpora, and the Common Crawl, and is publicly available for research purposes.1

1 https://fasttext.cc/.
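As a hedged illustration of the two Word2vec architectures, the gensim library exposes both through a single flag (sg=0 for CBOW, sg=1 for Skipgram); the toy corpus and hyperparameters below are purely illustrative, not the paper's configuration.

from gensim.models import Word2Vec

# toy corpus; the actual Persian models were pre-trained on large corpora
corpus = [["mangalore", "is", "a", "very", "beautiful", "city"],
          ["mangalore", "is", "a", "very", "nice", "city"]]

cbow = Word2Vec(corpus, vector_size=50, window=2, sg=0, min_count=1)      # CBOW
skipgram = Word2Vec(corpus, vector_size=50, window=2, sg=1, min_count=1)  # Skipgram

print(cbow.wv["beautiful"].shape)        # a 50-dimensional dense vector
print(skipgram.wv.most_similar("city"))  # nearest words in the embedding space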


Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) are two popular learning models used for classification. LSTM, introduced by Hochreiter and Schmidhuber [16], is a special kind of Recurrent Neural Network (RNN) capable of learning long-term dependencies: an RNN maintains a memory based on historical information, which enables the model to predict the current output conditioned on long-distance features [5]. BiLSTM networks, trained using Back-Propagation Through Time (BPTT) [5], can efficiently use past features (via forward states) and future features (via backward states) for a specific time frame and are very effective for tagging sequential data, speech utterances, or handwritten documents. In an LSTM, data is fed in only one direction, from beginning to end, whereas in a BiLSTM data is fed in both directions, from beginning to end and from end to beginning. BiLSTM significantly improves the accuracy of the learning model.

3.3 Transfer Learning Model

TL is an ML method in which the knowledge gained in developing a model for one task is reused in developing a model for another. It involves the concepts of a domain and a task and uses pre-trained models that were built for one task as the base for the development of a new task. TL has proven to be one of the crucial innovations in the fields of DL and computer vision [17]: it stores the knowledge gained in solving the source task and applies it to the development of the target task [18]. Figure 2 represents the concept of TL. Low-resource languages like Persian are challenging due to the scarcity or unavailability of labeled data in practice; hence, it is difficult to train deep neural networks, as limited training data

Fig. 2 Concept of transfer learning (https://medium.com/the-official-integrate-ai-blog/transfer-learning-explained-7d275c1e34e2)


Fig. 3 A framework of ULMFiT (https://humboldt-wi.github.io/blog/research/information_systems_1819/group4_ulmfit/)

can lead to overfitting. TL can be used as a solution in such cases: the knowledge of an already trained model is transferred to develop a new, related model, and the resulting model is fine-tuned for the required task. A Language Model (LM) is a probability distribution over sequences of words, used as a base model for various NLP tasks such as text classification, text summarization, and text generation. Formally, "LM introduces a hypothesis space that should be useful for many other NLP tasks" [10]. For example, to build a text classifier, a learning model trained on a general LM task can be used as a base model, and the text classification task can then be fine-tuned; such a model is able to use the knowledge of the semantics of the language acquired from the LM. Universal Language Model Fine-tuning (ULMFiT) is an impressive TL method based on an LM that can be applied to many NLP tasks [18]. It consists of the following stages:

(i) training the LM on a general-domain corpus that captures high-level natural language features;
(ii) fine-tuning the pre-trained LM on target task data;
(iii) fine-tuning the classifier on target task data.

Figure 3 describes the framework of the ULMFiT model.

4 Methodology

The proposed PUNER model using TL and the conventional ML and DL approaches used for NER are described below.

4.1 Voting Classifier Using Machine Learning Approach

Three ML classifiers, namely Naïve Bayes (NB), Logistic Regression (LR), and SVM, are ensembled as a Voting Classifier (VC) to predict the tags/labels of NEs. The model is implemented with the sklearn library using 10-fold cross-validation. The test data is input to the VC, and based on majority voting the tag with the higher number of votes is assigned to the input. The results of the individual classifiers are noted as well.
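A plausible sklearn sketch of this ensemble is shown below; the stand-in data from make_classification replaces the real pipeline, which would encode each word of the NER corpus as a numeric feature vector with its tag as the label.

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# stand-in numeric features and labels (assumption, not the paper's data)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

vc = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC())],
    voting="hard")                               # majority vote over the three predictions

print(cross_val_score(vc, X, y, cv=10).mean())   # 10-fold cross-validation accuracy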


Fig. 4 A bidirectional LSTM network that illustrates a Persian NER system

4.2 BiLSTM Model Using Deep Learning Approach

The tagged text is given as input to the BiLSTM learning model for classification. Figure 4 illustrates a BiLSTM network for a sentence written in the Persian language. The sentence (in English: "Fazl to India went") is tagged as B-PER, O, B-LOC, O, where the B and I tags indicate the beginning and intermediate positions of NEs and the O tag indicates 'Other', i.e., a word other than a noun. (Note that Persian is written from right to left and its grammar differs from English.)
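A minimal TensorFlow-Keras sketch of such a BiLSTM tagger follows; the vocabulary size, tag count, and embedding dimension are illustrative assumptions, and in the actual experiments the embedding layer would be initialized from one of the five pre-trained WEs.

from tensorflow.keras import layers, models

VOCAB, TAGS, DIM = 20000, 13, 100   # assumed vocabulary, tag-set and embedding sizes

model = models.Sequential([
    layers.Input(shape=(None,)),                    # variable-length token-id sequences
    layers.Embedding(VOCAB, DIM, mask_zero=True),   # would be seeded with a pre-trained WE
    layers.Bidirectional(layers.LSTM(100, return_sequences=True)),
    layers.Dense(TAGS, activation="softmax"),       # per-token tag distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X_tokens, y_tags, epochs=10) with padded token-id and tag-id sequences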

4.3 PUNER Using Transfer Learning Approach

PUNER, a Persian NER system using a TL model that makes use of ULMFiT, is inspired by the architecture proposed by Howard and Ruder [10]. It uses a BiLSTM model that is first trained on a general LM task and then fine-tuned on the NER classification task. In the first stage, PUNER uses the knowledge captured by an LM trained on texts collected from Wikipedia; the LM is created using the text.models module from the fastai library, which implements the encoder for an AWD-LSTM [19]. Once the LM has finished learning, the gained knowledge is used to fine-tune the NER classification task. In the final step of model construction, the knowledge obtained from the LM and the NER training data are used to train the model for tagging NEs using an AWD-LSTM layer. Figure 5 illustrates the architecture of PUNER.
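A hedged fastai sketch of the three ULMFiT stages is given below. The csv file names are assumptions, and casting the tagging task as text classification here is a simplification of the actual PUNER pipeline.

from fastai.text.all import *
import pandas as pd

wiki_df = pd.read_csv("fa_wiki.csv")       # assumed: one Persian article per row
ner_df = pd.read_csv("ner_sentences.csv")  # assumed: sentence text plus a label column

# (i) train an AWD-LSTM language model on the Persian wiki text
dls_lm = TextDataLoaders.from_df(wiki_df, text_col="text", is_lm=True)
lm = language_model_learner(dls_lm, AWD_LSTM, pretrained=False, drop_mult=0.3)
lm.fit_one_cycle(1, 2e-2)
lm.save_encoder("fa_wiki_enc")             # keep the learned encoder weights

# (ii)+(iii) reuse the encoder and fine-tune on the target task data
dls_tgt = TextDataLoaders.from_df(ner_df, text_col="text", label_col="label")
clf = text_classifier_learner(dls_tgt, AWD_LSTM, drop_mult=0.5)
clf.load_encoder("fa_wiki_enc")
clf.fit_one_cycle(1, 2e-2)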


Fig. 5 Architecture of PUNER

5 Experimental Results

Evaluating a model's performance is a most important task, requiring a suitable dataset and suitable validation measures.

5.1 Dataset

The dataset plays a major role in the evaluation of any model's performance. In this work, a collection of unannotated Persian text is used to train the LM for TL, and two annotated Persian NER datasets, namely ArmanPersoNERCorpus [3] and Persian-NER2, are used to validate the performance of the learning models. The NE tags in the tagged datasets are in IOB format and CONLL3 representation. IOB is the segment representation model in which the tag I is assigned to an intermediate NE token, O to a non-NE token, and B to the first token of consecutive NE tokens of the same class. The unannotated Persian text data was collected from XML dump files comprising the 17,000 latest Persian articles at https://dumps.wikimedia.org. The collected files were extracted using the WikiExtractor4 module and converted to csv files such that each row contains one article. ArmanPersoNERCorpus contains six NE classes: person, organization (such as banks, teams, ministries, and publishers), location (such as cities, countries, seas, and mountains), facility (such as schools, universities, hospitals, and cinemas), product (such as books, newspapers, movies, cars, theories, agreements, and religions), and event (such as wars, earthquakes, national holidays, and conferences); "other" covers the remaining non-NE tokens.

2 https://github.com/Text-Mining/Persian-NER.
3 The "CONLL" file type represents a corpus with one word per line, each word containing 10 tab-separated columns with information about the word (surface, lemma, POS, NER).
4 https://github.com/attardi/wikiextractor.


Fig. 6 Annotated datasets for Persian NER

Table 1 Statistics related to datasets used for Persian NER

Dataset          Type          No. Sentences   No. Tokens
Wiki texts       Unannotated   482,826         18,919,947
ArmanPersoNER    Annotated     7682            250,015
PersianNER       Annotated     15,363          1,000,000

PersianNER, available at https://app.text-mining.ir, consists of five classes: person, organization, location, date (such as days, months, and years), and event; "other" covers the remaining non-NE tokens. (PersianNER is a very large dataset; the first 1,000,000 tokens were selected for this work.) The class-wise distribution of tokens in ArmanPersoNERCorpus and the PersianNER corpus is shown in Fig. 6, and statistics for all three datasets are given in Table 1.

5.2 Results

An NER model may deal with important documents, such as medical or legal texts, where precise identification of NEs determines the success of the model. Therefore, the main metrics for evaluating the models are the F1 score (F1), Recall (R), and Precision (P). The results obtained in terms of these scores for all three models are given below. Table 2 shows the results for each ML classifier, namely NB, LR, SVM, and the VC (an ensemble of NB, LR, and SVM), on both tagged datasets, ArmanPersoNERCorpus and PersianNER. The results illustrate that the SVM classifier performs better than the VC on the ArmanPersoNERCorpus dataset, whereas the VC gives better results on PersianNER. The DL approach is implemented using a TensorFlow-Keras BiLSTM model with five different WEs, namely Fasttext, Skipgram, HPCA, Glove, and COBOW; each session is run for 10 epochs. Table 3 shows the results of the BiLSTM model


Table 2 Results of ML model

Methods                      ArmanPersoNER                  Persian-NER
                             Precision   Recall   F1        Precision   Recall   F1
Naïve Bayes                  29.64       49.28    31.53     44.72       70.29    51.70
Logistic Regression          72.34       39.68    51.25     74.85       45.78    56.81
SVM                          70.11       51.78    57.67     27.75       68.73    34.21
Ensemble (NB, LR and SVM)    73.01       44.53    55.32     74.78       49.49    59.56

Bolded indicates best performance for each dataset

Table 3 Results of BiLSTM model using different word embeddings

Word Embeddings   ArmanPersoNER                  Persian-NER
                  Precision   Recall   F1        Precision   Recall   F1
Fasttext          86.47       91.80    89.06     70.85       72.26    71.55
Skipgram          91.47       93.05    92.26     67.09       71.56    69.25
HPCA              87.21       90.34    88.75     67.27       71.86    69.49
Glove             91.07       92.06    91.56     69.69       72.75    71.19
COBOW             93.39       93.82    93.60     71.41       71.45    71.43

Bolded indicates best performance for each dataset

for the five different WEs on both tagged datasets, ArmanPersoNERCorpus and PersianNER. For ArmanPersoNERCorpus, the BiLSTM results range from a minimum F1 score of 88.75 with the HPCA WE to a maximum of 93.60 with the COBOW WE. For the PersianNER dataset, they range from a minimum F1 score of 69.25 with the Skipgram WE to a maximum of 71.55 with the Fasttext WE. Table 4 shows the results of the proposed PUNER model on both datasets: PUNER achieves an F1 score of 92.82 on ArmanPersoNERCorpus and 82.16 on the PersianNER corpus. A comparison of all three learning models, ML, DL, and PUNER, in terms of Precision, Recall, and F1 score is shown in Table 5. The F1 scores illustrate that PUNER performs better on the PersianNER dataset, while BiLSTM using the COBOW WE gives the better result on ArmanPersoNERCorpus; the F1 score of PUNER on ArmanPersoNERCorpus is, however, close to that of BiLSTM with COBOW. The DL approach thus produced higher results than ML and TL on the ArmanPersoNERCorpus dataset.

Table 4 Results of PUNER

Methods   ArmanPersoNER                  Persian-NER
          Precision   Recall   F1        Precision   Recall   F1
PUNER     92.72       93.44    92.82     82.02       84.42    82.16


Table 5 Comparison of learning models for Persian NER

Approach   Methods                      ArmanPersoNER                  Persian-NER
                                        Precision   Recall   F1        Precision   Recall   F1
ML         Naïve Bayes                  29.64       49.28    31.53     44.72       70.29    51.70
           Logistic Regression          72.34       39.68    51.25     74.85       45.78    56.81
           SVM                          70.11       51.78    57.67     27.75       68.73    34.21
           Ensemble (NB, LR and SVM)    73.01       44.53    55.32     74.78       49.49    59.56
DL         BiLSTM (Fasttext)            86.47       91.80    89.06     70.85       72.26    71.55
           BiLSTM (Skipgram)            91.47       93.05    92.26     67.09       71.56    69.25
           BiLSTM (HPCA)                87.21       90.34    88.75     67.27       71.86    69.49
           BiLSTM (Glove)               91.07       92.06    91.56     69.69       72.75    71.19
           BiLSTM (Cobow)               93.39       93.82    93.60     71.41       71.45    71.43
TL         ULMFiT                       92.72       93.44    92.82     82.02       84.42    82.16

Bolded indicates best performance for each dataset

This relative result is consistent with other NER results in Persian and other languages reported in the literature. However, on average, PUNER performs better than both the conventional ML model and the DL model. A graphical comparison of the F1 scores of all three models on the two datasets is shown in Fig. 7.

6 Conclusion

This paper presented PUNER, a Persian NER system using a Transfer Learning (TL) model that makes use of ULMFiT for NER in the Persian language. PUNER is initially trained on general-domain data collected from Wikipedia and then applied to the target NER data. The model is evaluated on two annotated datasets, ArmanPersoNERCorpus and PersianNER. The results show that the proposed model achieved an F1 score of 92.82 on ArmanPersoNERCorpus and 82.16 on the Persian-NER dataset. For the sake of comparison, a Voting Classifier, an ML ensemble of the Naïve Bayes, Logistic Regression, and SVM algorithms, and a BiLSTM model with five different WEs, namely Fasttext, Skipgram, HPCA, Glove, and COBOW, were also implemented. The comparison of all the approaches illustrates that PUNER definitely performs better than the ML models and on par with the DL models. On average,


Fig. 7 F1 score comparison of all models

PUNER gives results on par with the DL models. Further, the ULMFiT Persian LM weights built for Persian NER can be utilized for other Persian NLP tasks.

References

1. Amarappa, S., Sathyanarayana, S.V.: Kannada named entity recognition and classification (NERC) based on multinomial naive bayes (MNB) classifier. ArXiv preprint arXiv:1509.04385 (2015)
2. Poostchi, H., Borzeshi, E.Z., Abdous, M., Piccardi, M.: PersoNER: Persian named-entity recognition. In: COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of COLING 2016: Technical Papers, pp. 3381–3389 (2016)
3. Poostchi, H., Borzeshi, E.Z., Piccardi, M.: BiLSTM-CRF for Persian named-entity recognition; ArmanPersoNERCorpus: the first entity-annotated Persian dataset. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pp. 4427–4431 (2018)
4. Khormuji, M.K., Bazrafkan, M.: Persian named entity recognition based with local filters. Int. J. Comput. Appl. 100(4) (2014)
5. Huang, Z., Xu, W., Yu, K.: Bidirectional LSTM-CRF models for sequence tagging. ArXiv preprint arXiv:1508.01991 (2015)
6. Indra, S.T., Wikarsa, L., Turang, R.: Using logistic regression method to classify tweets into the selected topics. In: 2016 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 385–390. IEEE (2016)
7. Taghizadeh, N., Borhanifard, Z., GolestaniPour, M., Faili, F.: NSURL-2019 task 7: named entity recognition (NER) in Farsi. ArXiv preprint arXiv:2003.09029 (2020)
8. Mohseni, M., Tebbifakhr, A.: MorphoBERT: a Persian NER system with BERT and morphological analysis. In: Proceedings of the First International Workshop on NLP Solutions for Under Resourced Languages, NSURL '19, Trento, Italy (2019)
9. Mahalakshmi, G.S., Antony, J., Roshini, S.: Domain based named entity recognition using naive bayes classification. Aust. J. Basic Appl. Sci. 10(2), 234–239 (2016)
10. Jeremy, H., Ruder, S.: Universal language model fine-tuning for text classification. ArXiv preprint arXiv:1801.06146 (2018)
11. Nayel, H.A., Shashirekha, H.L., Shindo, H., Matsumoto, Y.: Improving multi-word entity recognition for biomedical texts. ArXiv preprint arXiv:1908.05691 (2019)
12. Pham, T.-H., Mai, K., Trung, N.M., Duc, N.T., Bolegala, D., Sasano, R., Sekine, S.: Multi-task learning with contextualized word representations for extended named entity recognition. ArXiv preprint arXiv:1902.10118 (2019)
13. Wang, P., Qian, Y., Soong, F.K., He, L., Zhao, H.: Part-of-speech tagging with bidirectional long short-term memory recurrent neural network. ArXiv preprint arXiv:1510.06168 (2015)
14. Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. ArXiv preprint arXiv:1802.05365 (2018)
15. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014)
16. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
17. Saha, R.: Transfer learning: a comparative analysis (2018)
18. Faltl, S., Schimpke, M., Hackober, C.: ULMFiT: state-of-the-art in text analysis (2019)
19. Merity, S., Keskar, N.S., Socher, R.: Regularizing and optimizing LSTM language models. ArXiv preprint arXiv:1708.02182 (2017)

A Coverage and Connectivity of WSN in 3D Surface Using Sailfish Optimizer

Thi-Kien Dao, Shi-Jie Jiang, Xiao-Rong Ji, Truong-Giang Ngo, Trong-The Nguyen, and Huu-Trung Tran

Abstract Coverage and connectivity of sensor nodes on a 3D surface, such as a mountain, is a critical problem in a wireless sensor network (WSN). This paper suggests a solution for multi-connectivity WSN coverage deployment based on combining the Sailfish Optimizer (SFO) with the characteristics of the 3D surface topography. The target area is divided into mesh grids of a given size to establish the multi-connectivity of every grid. The cover set is constructed through a direction-gradient probabilistic model and a connected graph, and the points within each grid are joined to the graph by SFO optimization. A large number of simulation experiments show that the proposed method can cover the target region and guarantee the connectivity and robustness of the network. Keywords Wireless sensor network · 3D surface · Multi-connectivity · Sailfish optimizer

1 Introduction

The Wireless Sensor Network (WSN) is a self-organized multi-hop network composed of a vast number of sensor nodes distributed within the monitoring region [1]. A sensor node detects and gathers information in the target area, which is then transmitted to the sink node, and from there across the Internet to the gateway node [2]. Therefore, the key research issues in WSNs lie in the deployment of nodes and the communication between them. Early on, most WSN research focused on a two-dimensional ideal plane, assuming that the sensing model


of a node is a disk sensing model [3]. In reference [4], a centralized approximation algorithm based on Voronoi partitioning is proposed: the target area is first partitioned into Voronoi cells, the redundant-node dependency graph of the coverage set is determined, the redundant nodes that can be switched off simultaneously are computed with a greedy algorithm to obtain the final coverage, and finally a minimum spanning tree algorithm is used to add auxiliary nodes that ensure network connectivity. In reference [5], a grid-based distributed energy-efficient k-cover multi-connected deployment algorithm is proposed for the two-dimensional plane. The sensor nodes are randomly deployed in the target area and need to be dense enough to meet the requirements of k-coverage and multi-connectivity. The area is divided into several grids whose diagonal length equals the communication radius, ensuring that nodes can communicate with each other from each grid [6]. To solve the deterministic coverage problem in WSNs, the three-dimensional surface is first reduced in dimensionality, and an optimization algorithm then searches for the globally optimal coverage solution through continuous iteration. In reference [7], for the 3D surface coverage problem, the target area is divided into n sub-areas, and a multi-objective coverage formulation is used to meet the coverage and connectivity requirements of the network. Most research on 3D surface problems focuses on coverage rather than connectivity, and it often takes a long time to compute and quickly falls into locally optimal solutions [8]. The previous works all consider the WSN target coverage problem in two-dimensional or three-dimensional space. Several metaheuristic approaches [9, 10] are promising ways to solve the complicated node coverage issue in WSNs [11]. This paper explores the target coverage problem of WSNs on a 3D surface and suggests a target point distribution technique on the 3D surface to maximize coverage for node location problems by applying a new metaheuristic algorithm called the Sailfish Optimizer (SFO) [12]. When a node perceives target points on the surface, there is a blind field of 3D perception, and the proposed technique realizes target coverage on the surface. The rest of the paper is organized as follows: Section 2 discusses the Sailfish Optimizer (SFO) algorithm, Section 3 presents the mathematical coverage model, Section 4 gives the experimental results, and Section 5 summarizes the conclusion.

2 Sailfish Optimizer (SFO)

The sailfish optimizer (SFO) is a new meta-heuristic inspired by the combined group behaviors of two types of fish: sailfish and sardines. The mathematical model simulates the hunting action of sailfish attacking their prey, the sardines [12, 13]. The processing procedure of the SFO algorithm is presented in the following phases.


Initialization: Two position vectors, one for each type of fish, are generated randomly within the boundaries of the problem space: $x_i^k$ for sailfish $i$ and $y_j^k$ for sardine $j$, where $N_p$ is the population size and $k$ is the iteration index. The objective function (the fitness function of the desired problem) is evaluated for each sailfish and sardine. The best sailfish found so far is stored as the elite sailfish $x_{eli}^k$, i.e., $F(x_{eli}^k) \le F(x_i^k)$ for all $i$, and the best sardine as the injured sardine $y_{inj}^k$, i.e., $F(y_{inj}^k) \le F(y_j^k)$ for all $j$. This elitist procedure stores the elite sailfish and the injured sardine, which guide the search agents during optimization.

Location updating: Both sailfish and sardines improve their solutions by updating their positions. The sailfish position is updated toward a promising search area around the elite sailfish as follows:

$$x_i^{k+1} = x_{eli}^k - \lambda_k \left( \beta \cdot \frac{x_{eli}^k + y_{inj}^k}{2} - x_i^k \right) \qquad (1)$$

where $x_i^{k+1}$ and $x_i^k$ are the new and current positions of sailfish $i$ at iterations $(k+1)$ and $k$, $\beta \in [0, 1]$ is a random number, $x_{eli}^k$ and $y_{inj}^k$ are the current elite sailfish and injured sardine positions, and $\lambda_k$ is a coefficient at iteration $k$ calculated as

$$\lambda_k = 2\beta \cdot PD - PD \qquad (2)$$

where $PD$ indicates the density of the school of prey fish. During the alternation of attacks on the prey school, the sailfish hunt sardines, so the number of prey decreases over the iterations. $PD$ is defined as

$$PD = 1 - \frac{N_{sh}}{N_s + N_{sh}} \qquad (3)$$

where $N_{sh}$ and $N_s$ are the numbers of sailfish and sardines, respectively.

The sardine position update models the escape from sailfish attacks. Let $y_j^{k+1}$ and $y_j^k$ be the new and current positions of sardine $j$; the vector is updated as

$$y_j^{k+1} = r \left( x_{eli}^k - y_j^k + AP \right) \qquad (4)$$

where $r \in [0, 1]$ is a random number, $x_{eli}^k$ is the best position obtained so far (the elite sailfish), and $AP$ is the sailfish's attack power, modeled as

$$AP = A \left( 1 - 2 \cdot Itr \cdot \varepsilon \right) \qquad (5)$$

where $A$ and $\varepsilon$ are two variables controlling the decrease of the attack power and $Itr$ is the iteration number. In the experiments, $AP$ is set to 0.5 or less ($AP < 0.5$). The number $\alpha$ of sardines selected to update their positions is defined as

$$\alpha = N_s \cdot AP \qquad (6)$$

Whenever a sailfish $i$ catches up with a sardine $j$, the sailfish moves to the location of the sardine. The hunted sardine is substituted by the sailfish, which is simulated as

$$x_i^k = y_j^k \quad \text{if} \quad F(y_j^k) < F(x_i^k) \qquad (7)$$

where $x_i^k$ and $y_j^k$ indicate the positions of sailfish $i$ and sardine $j$ at iteration $k$. The sardine population therefore decreases over time, and the optimization continues until the termination condition is met or the target is reached.
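To make the update rules concrete, the following is a minimal Python sketch of one SFO iteration implementing Eqs. (1)–(7); it is an illustrative reading of the equations above, not the authors' implementation. The coefficients A = 4 and ε = 0.001 follow the values suggested in [12], and the sphere function in the usage example is an assumption.

```python
import numpy as np

def sfo_iteration(sailfish, sardines, fitness, itr, A=4.0, eps=0.001):
    """One SFO iteration per Eqs. (1)-(7); lower fitness is better."""
    # Elite sailfish and injured sardine: best individuals found so far
    elite = sailfish[np.argmin([fitness(x) for x in sailfish])].copy()
    injured = sardines[np.argmin([fitness(y) for y in sardines])].copy()

    n_sh, n_s = len(sailfish), len(sardines)
    pd_ = 1.0 - n_sh / (n_sh + n_s)                      # prey density, Eq. (3)
    for i in range(n_sh):
        beta = np.random.rand()
        lam = 2.0 * beta * pd_ - pd_                     # coefficient, Eq. (2)
        # Move toward the elite/injured midpoint, Eq. (1)
        sailfish[i] = elite - lam * (beta * (elite + injured) / 2.0 - sailfish[i])

    ap = A * (1.0 - 2.0 * itr * eps)                     # attack power, Eq. (5)
    alpha = max(1, min(n_s, int(n_s * abs(ap))))         # sardines to update, Eq. (6)
    for j in np.random.choice(n_s, size=alpha, replace=False):
        sardines[j] = np.random.rand() * (elite - sardines[j] + ap)  # Eq. (4)

    # Substitution, Eq. (7): a sailfish moves onto a sardine with better fitness
    # (the full algorithm then removes the caught sardine from the school)
    j_best = int(np.argmin([fitness(y) for y in sardines]))
    for i in range(n_sh):
        if fitness(sardines[j_best]) < fitness(sailfish[i]):
            sailfish[i] = sardines[j_best].copy()
            break
    return sailfish, sardines

# Example: minimize the sphere function in 3-D
sf = np.random.uniform(-5, 5, (5, 3))
sd = np.random.uniform(-5, 5, (15, 3))
for k in range(100):
    sf, sd = sfo_iteration(sf, sd, lambda v: float(np.sum(v ** 2)), k)
```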

3 Mathematical Coverage Model

The coverage model for 3D space is a sphere centered at the node location with the sensing distance as its radius. For the distribution problem in 3D space, node locations must be extended from a two-dimensional tuple $o(x, y)$ to a 3D tuple $o(x, y, z)$, adding the height coordinate $z$. In a homogeneous WSN, both sensing and connectivity are modeled as spheres: $O_i(x_i, y_i, z_i)$ is taken as the sphere center for every sensor node $S_i(x_i, y_i, z_i)$ in the network, with sensing radius $r$ and communication radius $R$ in the same coordinate system. All nodes have the same sensing and communication ranges. The problem model is simplified with the following premises. The sensor network is connected; that is, all sensor nodes in the network can obtain information about their locations and communicate. Location migration can be realized correctly by the sensor nodes, depending on the measurement performance. Regardless of its capability, every node is considered significant.

The probability that a point $\xi$ in the WSN is monitored by sensor node $S_i$ is denoted $P(\xi, S_i)$, where $d(\xi, S_i)$ is the distance between the target point and the sensor node and $r$ is the sensing radius of the node. The model is expressed as follows:

$$P(\xi, S_i) = \begin{cases} 0, & d(\xi, S_i) > r \\ 1, & d(\xi, S_i) \le r \end{cases} \qquad (8)$$

where $P(\xi, S_i)$ is the probability that the point lies within the node's sensing radius and $d(\xi, S_i)$ is the distance between the point $\xi$ and $S_i$. A point is within the sensing range of the node when

$$d(\xi, S_i) = \sqrt{(x_i - x_k)^2 + (y_i - y_k)^2 + (z_i - z_k)^2} \le r \qquad (9)$$

WSNs are often deployed with a large number of sensor nodes scattered randomly in the three-dimensional space to be monitored. Assume that three points A, B, and C are all within the sensing range of node $S_i$, with position coordinates $A(x_1, y_1, z_1)$, $B(x_2, y_2, z_2)$, and $C(x_3, y_3, z_3)$. The coordinates of a point $P(x, y, z)$ can be expressed as

$$x = \frac{x_1 + \lambda x_2}{1 + \lambda}, \quad y = \frac{y_1 + \lambda y_2}{1 + \lambda}, \quad z = \frac{z_1 + \lambda z_2}{1 + \lambda} \qquad (10)$$

where $\lambda$ is the ratio of a fixed proportional point on the line. The three points A, B, and C can be monitored by sensor node $S_i$ on the 3D surface, and the spatial position coordinates of $S_i$ and of A, B, and C are known. The connectivity priority is given by

$$P_c = \frac{e_i^{\alpha} \cdot e_j^{\alpha}}{d(\xi_i, s_j)^{\beta} + z} \qquad (11)$$

where $e_i$ and $e_j$ represent the residual energies of nodes $s_i$ and $s_j$; $d(\xi_i, s_j)$ is the Euclidean distance between nodes $s_i$ and $s_j$; $z$ is a randomly generated parameter that keeps the value of $P_c$ from repeating as much as possible; and $\alpha$ and $\beta$ are nonzero parameters set by the user. The coverage (override) priority is given by

$$P_s = e_i^{\lambda} c^{\theta} + z \qquad (12)$$

where $e_i$ represents the residual energy of node $s_i$; $c$ represents the useful contribution of node $s_i$ to the network; $z$ is a randomly generated parameter that keeps the $P_s$ value from repeating as much as possible; and $\lambda$ and $\theta$ are nonzero parameters set by the user.

The target point set is divided into various meshes. For each mesh, a multi-connectivity graph is established with its neighbor grids, and pairs of nodes are activated according to the connectivity priority to ensure that there are at least two active nodes in each grid. The coverage ratio compares the active service area of the N cell nodes with the overall size of the restricted area. The monitored-area ratio is

$$P_{area} = \frac{P_c - P_s}{m \times m \times h} \qquad (13)$$
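As an illustration, the following Python sketch encodes the binary coverage model and the two priority functions above; the parameter defaults (r = 3 m, λ = 0.9, α = 1.5, β = 1.1) are taken from the experimental settings in Sect. 4, while θ and the random tie-breaking constant z are placeholders for the user-set values.

```python
import numpy as np

def covered(point, node, r=3.0):
    """Binary sphere coverage model, Eqs. (8)-(9): 1 if within sensing radius."""
    return 1.0 if np.linalg.norm(np.asarray(point) - np.asarray(node)) <= r else 0.0

def connectivity_priority(e_i, e_j, dist, alpha=1.5, beta=1.1, z=None):
    """Connectivity priority P_c of a node pair, Eq. (11)."""
    z = np.random.rand() * 1e-3 if z is None else z   # small random tie-breaker
    return (e_i ** alpha) * (e_j ** alpha) / (dist ** beta + z)

def coverage_priority(e_i, c, lam=0.9, theta=1.0, z=None):
    """Coverage (override) priority P_s of a node, Eq. (12)."""
    z = np.random.rand() * 1e-3 if z is None else z
    return (e_i ** lam) * (c ** theta) + z

def area_ratio(p_c, p_s, m, h):
    """Monitored-area ratio P_area over an m x m x h region, Eq. (13)."""
    return (p_c - p_s) / (m * m * h)
```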

The SFO algorithm is used to minimize the number of auxiliary nodes needed so that the nodes can communicate with each other. The connected graph formed by all awakened nodes may contain joint (cut) points. For each joint point, the biconnected subgraphs containing it are identified, and a node other than the joint node is found in each biconnected subgraph so that auxiliary nodes can be added to make the graph biconnected. The main steps of the proposed scheme are as follows.

Step 1: Initialize the SFO population of size N and the set of target points St with connected graphs; calculate the fitness function by Eq. (9) with the grid number q.
Step 2: Coverage processing:
  while (q)
    For each mesh Ma, build a multi-connected graph with its neighbor mesh; q--;
    Calculate the coverage probability of the mobile node for a pixel according to Eq. (11);
  end while
  if (grid Ma does not reach k-coverage)
    According to the priority of the coverage set, wake up nodes into the active state;
  if (mesh Ma is not connected)
    According to the obtained optimal results, add auxiliary nodes to make it connected;
Step 3: Join the optimized coverage mesh:
  if (there are joint points in mesh Ma)
    for every joint point a do
      Let B1, B2, ..., Bk be the biconnected subgraphs containing node a; let vi be a node of Bi with vi ≠ a, 1 ≤ i ≤ k;
      On the paths (vi, vi+1), 1 ≤ i < k, activate the fewest auxiliary communication nodes;
    Calculate the joint coverage according to Eq. (13).
Step 4: Termination check: if the termination condition (e.g., maximum iterations or threshold values) is not met, repeat Steps 2 to 4.
Step 5: Output the results.

4 Experimental Results

The simulation setup of the proposed scheme is as follows. Assume a deployed network with N mobile nodes placed arbitrarily in a desired area of M × M m² and height H (M = 20, 30, 80, 130, 150; H = 2, 4, 5, 6). The sensing radius r is the same for all mobile nodes and is set to 3 m, and the communication radius R is set to 6 m. In the probability model, λ = 0.9, α = 1.5, and β = 1.1; the reliability measurement parameters are re = 0.5 and r = 1.5 m; the maximum number of iterations is Tmax = 500. A series of experiments was simulated under these settings. Figure 1 shows the projection of the three-dimensional space surface onto two dimensions; that is, the 3D surface is projected to 2D.

Fig. 1 Projection of the three-dimensional space surface to two dimensions: (a) the 3D surface; (b) the projected two dimensions

The aim of this simulation test is to verify the efficiency of the proposed scheme, which allows a node to avoid the few obstacles and to begin tracking the corresponding target points in order to achieve the objective. Four obstacle points are chosen on the three-dimensional space surface, with the locations of the obstacle point projections on the plane given as P1(1.5, 1.5, 0), P2(1.5, 1.5, 0), P3(−1.5, −1.5, 1.0), P4(14.5, −15.5, 20.0), and P4(10.5, −1.5, 30), respectively. The simulation environment and control parameters cover the points that must be monitored on the surface, i.e., the N target points as well as the four obstacle points. Figure 2 displays the simulation environment and control parameter settings.

Fig. 2 The simulation environment and control parameter settings: (a) several obstacle points; (b) settings on the 3D surface

It is necessary to set the SFO algorithm parameters carefully to obtain fairly ideal results, so that a minimal number of sensor nodes is used to achieve maximum coverage of the target points on the three-dimensional terrain. Notwithstanding the trade-off between the convergence speed and the precision of the SFO algorithm, the simulation check of the algorithm can be performed by appropriately sacrificing convergence speed to obtain the most precise coverage.

Fig. 3 Comparison of the proposed method (SFO) with the GA, PSO, and ACO for the objective function of coverage (best score obtained so far over 50–500 iterations)

The SFO algorithm optimizes the placement of sensor nodes in space so that the nodes cover the maximum range of target points on the space surface, validating the method's precision and viability. In the SFO-optimized distribution, the sensor nodes are spread uniformly over the three-dimensional surface space. The obtained results of the proposed scheme are compared with other methods in the literature, e.g., the genetic algorithm (GA) [8], particle swarm optimization (PSO) [14], and ant colony optimization (ACO) [15], for the coverage problem in WSN. Figure 3 depicts the comparison of the proposed method with the GA, PSO, and ACO for the objective function of coverage and connectivity probability in WSN. It is clearly seen that the proposed scheme converges fastest in this comparison. Table 1 shows the comparison of the results of the proposed scheme with the GA and PSO for different regions of the coverage optimization performance. As observed in Table 1, the proposed method achieves the globally optimal solution regardless of the different coverage areas, and it can cover the entire monitoring area with the best layout of the nodes.

Table 1 Comparisons of the results of the proposed scheme with the GA and PSO for different regions of the coverage optimization performance

Area           Mobile nodes   SFO coverage rate (%)   SFO iterations   PSO coverage rate (%)   PSO iterations   GA coverage rate (%)   GA iterations
20 × 20 × 2    20             81.8                    187              77.3                    211              78.8                   423
30 × 30 × 4    50             84.0                    458              82.3                    420              80.1                   440
80 × 80 × 5    60             81.0                    468              80.2                    440              80.0                   450
130 × 130 × 6  160            81.0                    468              78.2                    446              77.0                   455
150 × 150 × 6  200            81.0                    468              80.2                    540              80.0                   456

5 Conclusion

In this paper, a new solution was introduced for node and connection coverage optimization in wireless sensor networks (WSN) based on the sailfish optimizer (SFO). The resulting sensor node architecture allowed optimal efficiency over the entire system life of the network, based on node coverage and connection. Node coverage in WSN is essential for implementing control and tracking applications. The desired deployment area of the network was divided into mesh grids to establish multi-connectivity for every grid, and the cover set was constructed through the probabilistic model. The probability definition and geographic coverage rate were used to model an empirical function of node positions for reaching optimal coverage. The SFO was applied to optimize the connected graph of the multi-connectivity of a grid WSN and the joint points of the graph within the grid WSN. Different node density cases were run to determine the optimal method across the distribution trials in WSN. Experimental findings demonstrate that the proposed solution effectively increases convergence speed and node coverage efficiency, resulting in maximum network coverage and increased network lifetime.

References

1. Othman, M.F., Shazali, K.: Wireless sensor network applications: a study in environment monitoring system. In: Procedia Engineering, pp. 1204–1210 (2012). https://doi.org/10.1016/j.proeng.2012.07.302
2. Nguyen, T.T., Pan, J.S., Dao, T.K.: A compact bat algorithm for unequal clustering in wireless sensor networks. Appl. Sci. 9, 1973 (2019). https://doi.org/10.3390/app9101973
3. Nguyen, T.-T., Pan, J.-S., Lin, J.C.-W., Dao, T.-K., Nguyen, T.-X.-H.: An optimal node coverage in wireless sensor network based on whale optimization algorithm. Data Sci. Pattern Recognit. 2, 11–21 (2018)
4. Yang, X., Liu, J.: Sequence localization algorithm based on 3D Voronoi diagram in wireless sensor network. In: Applied Mechanics and Materials, pp. 4422–4426. Trans Tech Publications (2014)
5. Zhouping, Y.: Nodes control algorithm design based coverage and connectivity of wireless sensor network. Int. J. Smart Sens. Intell. Syst. 8, 272–290 (2015)
6. Chen, J., Xu, T.R., Lan, X.: Distributed energy-efficient grid-based sensor deployment algorithm for k-coverage and multi-connectivity in WSN. Appl. Res. Comput. 31, 2466–2472 (2014)
7. Al-Turjman, F.M., Hassanein, H.S., Ibnkahla, M.: Quantifying connectivity in wireless sensor networks with grid-based deployments. J. Netw. Comput. Appl. 36, 368–377 (2013)
8. Unaldi, N., Temel, S., Asari, V.K.: Method for optimal sensor deployment on 3D terrains utilizing a steady state genetic algorithm with a guided walk mutation operator based on the wavelet transform. Sensors 12, 5116–5133 (2012)
9. Nguyen, T.-T., Shieh, C.-S., Horng, M.-F., Dao, T.-K.: A genetic algorithm with self-configuration chromosome for the optimization of wireless sensor networks (2014). https://doi.org/10.1145/2684103.2684132
10. Pan, J.-S., Wang, X., Chu, S.-C., Nguyen, T.-T.: A multi-group grasshopper optimisation algorithm for application in capacitated vehicle routing problem. Data Sci. Pattern Recognit. 4, 41–56 (2020)
11. Nguyen, T.T., Pan, J.S., Dao, T.K.: An improved flower pollination algorithm for optimizing layouts of nodes in wireless sensor network. IEEE Access 7, 75985–75998 (2019). https://doi.org/10.1109/ACCESS.2019.2921721
12. Shadravan, S., Naji, H.R., Bardsiri, V.K.: The sailfish optimizer: a novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 80, 20–34 (2019)
13. Hammouti, I., Lajjam, A., Merouani, M., Tabaa, Y.: A modified sailfish optimizer to solve dynamic berth allocation problem in conventional container terminal. Int. J. Ind. Eng. Comput. 10, 491–504 (2019)
14. Chaudhary, D.K., Dua, R.L.: Application of multi objective particle swarm optimization to maximize coverage and lifetime of wireless sensor network. Int. J. Comput. Eng. Res. 2, 1628–1633 (2012)
15. Qasim, T., Zia, M., Minhas, Q.-A., Bhatti, N., Saleem, K., Qasim, T., Mahmood, H.: An ant colony optimization based approach for minimum cost coverage on 3-D grid in wireless sensor networks. IEEE Commun. Lett. 22, 1140–1143 (2018)

A Classification Model for Software Bug Prediction Based on Ensemble Deep Learning Approach Boosted with SMOTE Technique

Thaer Thaher and Faisal Khamayseh

Abstract In the software development process, the testing phase plays a vital role in assessing software quality. Limited resources pose a challenge in achieving this purpose efficiently. Therefore, early stage procedures such as software fault prediction (SFP) are utilized to facilitate the testing process in an optimal way. SFP aims to predict fault-prone components early based on some software metrics (features). Machine learning (ML) techniques have proven superior performance in tackling this problem. However, there is no best classifier to handle all possible classification problems. Thus, building a reliable SFP model is still a research challenge. The purpose of this paper is to introduce an efficient classification framework to improve the performance of the SFP. For this purpose, an ensemble of multi-layer perceptron (MLP) deep learning algorithm boosted with synthetic minority oversampling technique (SMOTE) is proposed. The proposed model is benchmarked and assessed using sixteen real-world software projects selected from the PROMISE software engineering repository. The comparative study revealed that ensemble MLP achieved promising prediction quality on the majority of datasets compared to other traditional classifiers as well as those in preceding works.

Keywords Software fault prediction · Deep learning · Neural networks · Multi-layer perceptron · Ensemble learning · SMOTE · Imbalanced data

1 Introduction

During the software development life cycle (SDLC) process, the testing stage plays a significant role in evaluating software quality [1]. In this stage, developers investigate whether the code and programming work according to customer requirements. Various types of testing, including quality assurance (QA) testing, user acceptance testing (UAT), and system integration testing (SIT), are involved with the aim of minimizing the number of errors within the software [2]. QA testing concerns standardized procedures that ensure the desired product quality is delivered before the product is released to the public. However, limited resources constitute a significant challenge to providing the rapid test results required by the development team. For this purpose, early-stage procedures such as software fault prediction (SFP) are utilized to facilitate the testing process in an optimal way [3].

SFP is defined as predicting fault-prone software modules based on some characteristics (metrics) of the software project [3]. It is an early step conducted before the real start of the testing phase to obtain a high-quality product with minimum cost and effort [4]. Some faulty modules may pass through design and development without being detected and fixed, which adversely affects later versions of the software. Therefore, employing SFP early in the software development process has attracted considerable attention as a way to conduct the SDLC healthily [5]. The application of SFP techniques can primarily contribute to reducing the number of potential defects and hence improve the quality of the produced software. Besides, the early prediction of faults reduces the time, effort, and cost that must be spent during the development process [6].

The analysis of gathered software metrics and historical project data has found wide application in enhancing SDLC processes [4]. However, analyzing the complicated and vast amounts of data poses a significant challenge. Therefore, machine learning (ML) techniques have been introduced as potential computational methods for solving data-mining-related problems such as SFP. These techniques have proven superior performance in building efficient SFP models [7]. The effectiveness of an SFP model depends on several factors, including the exploited ML algorithm and the quality of the dataset [8]. One of the main challenges in this domain is the lack of high-quality datasets. That is to say, the available datasets may have irrelevant features and an imbalanced distribution of ordinary and defected instances. These two common aspects significantly impact the performance of ML techniques [9, 10].

The main purpose of this research is to introduce an efficient SFP model based on an ensemble deep learning algorithm incorporated with the synthetic minority oversampling technique (SMOTE) to enhance the overall prediction quality. The major contributions are summarized as follows:
• Two significant aspects of data quality were considered: SMOTE was utilized for re-balancing the datasets, and a filter-based feature selection technique was employed to identify the essential object-oriented metrics for predicting defected software.
• An ensemble classification model was constructed based on a multi-layer perceptron (MLP) neural network to enhance the performance of the traditional MLP classifier.
• In comparison with traditional classifiers as well as state-of-the-art approaches, the proposed model showed a clear superiority.

The rest of this paper is structured as follows: The recent SFP approaches in the literature, including ML-based and FS-based methods, are reviewed in Sect. 2. The investigated software projects are described in Sect. 3. The details of the proposed methodology are presented in Sect. 4. The experimental design and results are discussed in Sect. 5. Finally, the conclusion and future directions are drawn in Sect. 6.

2 Review of Related Works

ML techniques have attracted considerable interest from the research community as potential computational methods for extracting knowledge from software projects. Various SFP datasets available as benchmarks are employed to assess the performance of proposed models; the most frequently used benchmarks in this domain are the PROMISE repository, the NASA datasets, and the Qualitas Corpus [3]. This section reviews ML techniques for bug detection in SE projects as well as the deployment of sampling and feature selection methods in this area.

2.1 ML Based Techniques

The literature is rich with traditional ML techniques for handling the SFP problem, including statistical classification, supervised, and unsupervised techniques. Examples include support vector machines (SVM), decision trees (DT), Bayesian networks (BN), naive Bayes (NB), K-nearest neighbors (KNN), artificial neural networks (ANN), multi-layer perceptron (MLP), and logistic regression (LR) [3]. For instance, Caglayan et al. employed BN, NB, and LR methods to predict faults in an industrial software system. Experimental results revealed that the proposed methods provide good accuracy in predicting software defects. An empirical study was conducted by Singh and Malhotra [11] to predict fault-prone software based on object-oriented (OO) metrics. SVM was employed as the classification model, and the receiver operating characteristic (ROC) was used as the evaluation measure. The proposed model was assessed using the KC1 dataset from the NASA repository, and the results confirmed the efficiency of SVM in predicting faulty classes in OO-based systems.

Some of the commonly used algorithms, such as DT, NB, and neural networks (NN), have provided improvements in prediction accuracy. However, they still have some limitations: on the one hand, expert knowledge is required for processing data; on the other hand, a massive amount of data is necessary for training, which becomes a significant challenge, especially in a dynamic environment [12]. To overcome these limitations and achieve better performance, researchers have moved to take advantage of deep learning-based and ensemble techniques to tackle the SFP problem. For instance, Al Qasem and Akour [5] examined the performance of two deep learning algorithms for the SFP problem, investigating the MLP and the convolutional neural network (CNN) on four NASA benchmarks from the PROMISE repository. The experimental results revealed the superiority of deep learning algorithms for the SFP problem.


2.2 Feature Selection-Based Approaches for SFP

Recently, various wrapper-based and filter-based FS methods have been increasingly utilized to enhance the prediction quality of SFP models. Relatively few studies have been conducted on the filter-based FS approach. For instance, Balogun et al. [13] studied the effect of filter-based FS techniques on the efficiency of SFP models: eighteen FS methods (fourteen feature subset selection and four feature ranking) were assessed using four classifiers and five datasets from the NASA repository. The results showed that FS has a significant effect on improving the prediction quality of SFP models. Catal and Diri [14] investigated a correlation-based FS method to identify the most relevant predictors for the SFP problem based on ML algorithms; four different classifiers were applied over public datasets from the NASA repository. The authors found that the performance of FS methods varies with the employed classifier. Besides, many wrapper FS methods based on evolutionary and swarm intelligence algorithms have been applied to the SFP problem, such as the studies presented in [4, 8, 10].

2.3 Motivation of the Study

It is clear from the reviewed studies that most of them were dedicated to traditional classification paradigms; few deployed neural networks and deep learning algorithms. Besides, the no-free-lunch (NFL) theorem for optimization [15] can also be argued for ML and classification: based on previous and current research and to the best of our knowledge, there is no single best classifier for all possible classification problems. Therefore, building a reliable SFP model is still a challenge and open for research. To the best of our knowledge, an ensemble of deep learning MLP classifiers has not yet been utilized for tackling the SFP problem. MLP ensembles are a promising strategy to overcome the drawbacks of a single MLP network. These facts motivated the authors to propose an efficient classification framework for the early prediction of software fault-proneness by employing an ensemble MLP classification technique augmented with filter-based FS and SMOTE oversampling techniques.

3 Investigated Software Projects

In this research, we used open research datasets in software engineering. They are gathered from the PRedictOr Models In Software Engineering (PROMISE) repository, which was created to encourage the SE community to build verifiable, repeatable, refutable, and improvable predictive models [16]. Sixteen releases falling into six different projects (Ant, Camel, jEdit, Log4j, Lucene, and Xalan) were selected for the experimental work in this study; they are object-oriented (OO) defect projects. Table 1 shows the characteristics of the selected datasets. For more details about the investigated projects, interested readers can refer to [17, 18].

Table 1 Details of the 16 software projects (datasets) from the PROMISE repository

Dataset   Version   No. instances   No. normal instances   No. defective instances   Rate of defective instances
ant       1.7       745             579                    166                       0.223
camel     1.0       339             326                    13                        0.038
camel     1.2       608             392                    216                       0.355
camel     1.4       872             727                    145                       0.166
camel     1.6       965             777                    188                       0.195
jedit     3.2       272             182                    90                        0.331
jedit     4.0       306             231                    75                        0.245
jedit     4.1       312             233                    79                        0.253
jedit     4.2       367             319                    48                        0.131
jedit     4.3       492             481                    11                        0.022
log4j     1.0       135             101                    34                        0.252
log4j     1.1       109             72                     37                        0.339
lucene    2.0       195             104                    91                        0.467
xalan     2.4       723             613                    110                       0.152
xalan     2.5       803             416                    387                       0.482
xalan     2.6       885             474                    411                       0.464

Software metrics (factors) are usually introduced to analyze and evaluate the quality of a software project. Object-oriented (OO) metrics are calculated from software created using the OO development strategy. The datasets utilized in this study are mainly OO defect projects; that is, a set of OO metrics is used to judge whether the system is faulty or non-faulty. All investigated datasets consist of 20 metrics belonging to different suites: the CK suite proposed by Chidamber and Kemerer [19], which includes weighted method count (WMC), number of children (NOC), coupling between object classes (CBO), depth of inheritance tree (DIT), lack of cohesion in methods (LCOM), and response for a class (RFC); the two metrics suggested by Martin [20], afferent couplings (CA) and efferent couplings (CE); one metric introduced by Henderson-Sellers [21], lack of cohesion in methods (LCOM3); the suite suggested by Bansiya and Davis [22], including number of public methods (NPM), data access metric (DAM), measure of aggregation (MOA), measure of functional abstraction (MFA), and cohesion among methods of class (CAM); the quality-oriented suite suggested by Tang et al. [23], comprising inheritance coupling (IC), coupling between methods (CBM), and average method complexity (AMC); and the maximum cyclomatic complexity (MAX_CC) and average cyclomatic complexity (AVG_CC) metrics introduced by McCabe [24].


4 The Proposed Methodology

This study aims to create an efficient classification model for the early prediction of faulty software modules. Four major components are employed for this purpose: the SFP datasets, preprocessing techniques, learning algorithms, and performance evaluation measures. The proposed SFP methodology is illustrated in Fig. 1.

Fig. 1 Software fault prediction process

4.1 Preprocessing Techniques

After inspecting the employed datasets, we found that they are free of noise and missing values. All features are numeric, with different scales; therefore, the min-max normalization method was applied to standardize the ranges to the interval [0, 1] and thus avoid bias towards dominant features. Moreover, it was observed that the datasets are highly imbalanced; thus, it is essential to resolve this problem by balancing the dataset classes before training the model. To deal with the data imbalance problem, we applied the SMOTE oversampling technique. Another essential aspect investigated in this stage is identifying the most informative OO metrics for building the SFP model; for this, we applied a filter-based FS method using the information gain ranking technique. A minimal sketch of these steps follows.
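The following Python sketch shows one way to realize this preprocessing pipeline with scikit-learn and imbalanced-learn, assuming a pandas DataFrame with a binary target column named "defective" (the column name and library choice are assumptions; mutual information is used here as the information-gain score, which is how scikit-learn exposes it).

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import mutual_info_classif
from imblearn.over_sampling import SMOTE

def preprocess(df: pd.DataFrame, target: str = "defective"):
    """Min-max scaling, SMOTE re-balancing, and information-gain ranking."""
    X, y = df.drop(columns=[target]), df[target]
    X_scaled = MinMaxScaler().fit_transform(X)             # scale to [0, 1]
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_scaled, y)
    # Filter-based FS: rank the OO metrics by information gain
    scores = mutual_info_classif(X_bal, y_bal, random_state=0)
    ranking = sorted(zip(X.columns, scores), key=lambda t: -t[1])
    return X_bal, y_bal, ranking
```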

4.2 Proposed Classification Paradigm

Multi-layer perceptrons (MLPs) are a type of feed-forward artificial neural network. In this kind of NN, the neurons are organized in parallel, fully interconnected layers, such that the connections between neurons are one-directional and one-way. An MLP consists of three levels of layers: the first layer is the input layer, the last layer is the output layer, and the layers between them are called hidden layers [25]. An MLP with multiple hidden layers is called a deep NN; such networks have become popular due to their promising performance on complex ML tasks. Deep learning neural networks, including MLPs, provide enough flexibility for nonlinearly separable problems. However, MLPs have some limitations, such as premature convergence (local minima) and a tendency to overfit [26]. Utilizing ensemble learning can alleviate these problems.

Ensemble learning combines several traditional ML models to enhance predictive performance, generalizability, and robustness over a single classifier. Ensemble learners often have lower error than each individual classifier by itself; besides, they offer less overfitting as well as less bias than traditional learners [27]. In this study, an ensemble classification model with MLP as the base classifier is introduced to handle the SFP problem. We applied the bagging method (also called bootstrap aggregation) to build the ensemble classifier, such that each individual MLP classifier is trained on a random subset of the original training data generated using the bootstrap technique; the predictions of the fitted models are then aggregated using a voting method to form the final prediction. Figure 2 shows the architecture of the ensemble MLP model.

Fig. 2 Ensemble MLP learner architecture
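In scikit-learn terms, the bagged MLP described above can be sketched as follows; the hidden-layer widths and the number of estimators are assumptions (Sect. 5.2.2 fixes three hidden layers, ReLU, adam, and an adaptive learning rate but does not report layer widths), and the bagging parameter is named base_estimator in scikit-learn versions before 1.2.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

# Base learner: deep MLP with 3 hidden layers, ReLU activation, adam
# optimizer, and adaptive learning rate (settings from Sect. 5.2.2)
base_mlp = MLPClassifier(hidden_layer_sizes=(50, 25, 10),
                         activation="relu", solver="adam",
                         learning_rate="adaptive", max_iter=1000)

# Bagging: each MLP is fitted on a bootstrap sample of the training
# data; the fitted models vote to form the final prediction
ensemble_mlp = BaggingClassifier(estimator=base_mlp, n_estimators=10,
                                 bootstrap=True, random_state=0)
```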

4.3 Evaluation Measures

Accuracy is the most widely used metric for evaluating classification models. However, it can be profoundly misleading when judging a model trained on imbalanced data [10]. For this reason, we relied on the area under the ROC curve (AUC), the true positive rate (TPR), and the true negative rate (TNR) to measure the performance of the proposed model; these measures are popular in the case of imbalanced data [4]. The binary confusion matrix used to calculate the evaluation measures, as well as the formulas for TPR, TNR, and AUC, are presented in [10, 28].
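As a sketch, the three measures can be computed from scikit-learn primitives as follows (y_score is assumed to be the positive-class probability or decision score needed for AUC).

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_pred, y_score):
    """TPR, TNR, and AUC from a binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    tpr = tp / (tp + fn)   # true positive rate (sensitivity)
    tnr = tn / (tn + fp)   # true negative rate (specificity)
    return tpr, tnr, roc_auc_score(y_true, y_score)
```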


5 Evaluation Results and Discussion

In this part, we performed an extensive set of experiments to investigate the efficiency of the proposed model. In all experiments, we used the validation set approach to estimate prediction quality: each dataset was randomly split into 66% for training the model and 34% for testing the fitted model, so the models were fitted and tested on completely separate samples. To reduce the impact of random components, we repeated the experiments 30 times for each algorithm and report the average results over all runs. In the reported results, the best values are highlighted in boldface. In order to record all results under fair conditions, we employed a single computing system: a PC with an Intel Core i7-8550U 2.2 GHz CPU and 16 GB RAM. We used the Python programming language to implement the classification framework, with various libraries facilitating the implementation of the ML algorithms, such as Pandas, NumPy, Matplotlib, and Scikit-learn (SKlearn).

5.1 Feature Selection (FS) Results

The information gain filter-based FS method was applied to identify the most relevant metrics. The average ranks of the OO metrics over all datasets were calculated. The obtained results showed that the rfc metric achieved the best rank (0.1138), followed by wmc (0.1013), loc (0.0992), lcom (0.0801), npm (0.0776), lcom3 (0.0734), ce (0.0683), cam (0.0660), cbo (0.0636), and max-cc (0.0616), respectively. The rfc metric represents the number of methods, including inherited methods, per class; wmc is the sum of the complexities of all methods per class, which represents the development and maintenance cost of the class; and loc indicates the number of code lines in a class. This confirms that these three metrics are relevant and should not be ignored in SFP data.

5.2 Evaluation of MLP Classifier

In this part, the feed-forward MLP neural network trained with back-propagation is applied to the SFP problem. Besides, various classification models, namely KNN, NB, LDA, logistic regression (LR), DT, and SVM, are also examined, and a deep comparison in terms of AUC rates is presented.

Table 2 Assessing the impact of the SMOTE method in terms of TPR, TNR, and AUC measures [averaged over 30 independent runs; p ≤ 0.05 is significant]

            Original                      SMOTE                         WT p-value
Dataset     TPR     TNR     AUC           TPR     TNR     AUC           (AUC)
ant1.7      0.9387  0.4053  0.6720        0.7555  0.7674  0.7614        0.000002
camel1.0    0.9899  0.0540  0.5220        0.8728  0.9908  0.9318        0.000002
camel1.2    0.9666  0.0906  0.5286        0.6597  0.6032  0.6314        0.000002
camel1.4    0.9655  0.1508  0.5581        0.7521  0.7296  0.7409        0.000002
camel1.6    0.9696  0.1473  0.5584        0.6956  0.7709  0.7333        0.000002
jedit3.2    0.8499  0.6263  0.7381        0.7916  0.7919  0.7917        0.000063
jedit4.0    0.9665  0.2200  0.5933        0.7592  0.8007  0.7799        0.000002
jedit4.1    0.9229  0.4813  0.7021        0.8142  0.7851  0.7996        0.000002
jedit4.2    0.9623  0.3388  0.6505        0.7844  0.8341  0.8092        0.000002
jedit4.3    0.9930  0.0111  0.5020        0.9119  0.9996  0.9557        0.000002
log4j1.0    0.8481  0.5869  0.7175        0.7558  0.8311  0.7934        0.000115
log4j1.1    0.8424  0.6205  0.7314        0.7676  0.8096  0.7886        0.005667
lucene2.0   0.7148  0.6298  0.6723        0.6688  0.6990  0.6839        0.271155
xalan2.4    0.9636  0.2210  0.5923        0.7360  0.8257  0.7809        0.000002
xalan2.5    0.6933  0.5853  0.6393        0.6720  0.6105  0.6413        0.909931
xalan2.6    0.8207  0.6447  0.7327        0.7846  0.6738  0.7292        0.628843

Bold indicates the best values

5.2.1 Assessing the Impact of SMOTE Technique

In this subsection, the efficiency of the SMOTE technique is appraised. Table 2 reports the prediction performance of the MLP classifier before and after applying SMOTE, by inspecting the average AUC, TPR, and TNR on all datasets. Examining the reported results, it can be observed that the integration of the MLP classifier with the SMOTE method achieves better performance on almost 94% of the datasets (15 out of 16). Additionally, significant improvements can be seen in the quality of predicting the faulty instances (i.e., TNR). These results prove the importance of balanced data for enhancing the overall performance of the model; therefore, the subsequent experiments are performed on the balanced datasets.

5.2.2 Hyper-parameters Tuning

One of the major challenges when dealing with MLPs is the large number of free parameters, such as the number of epochs, the number of hidden layers, the activation function, the optimizer, and the learning rate. The performance of the MLP is influenced strongly by the chosen parameters. Therefore, this part exhibits a comprehensive experimental design in which fourteen experiments with different combinations of the main parameters (optimizer, epochs, hidden layers, and activation function) were conducted. Two optimizers, namely stochastic gradient descent (SGD) and adam (an extension of SGD), were tested. The number of epochs was allowed to take four values (1000, 2000, 3000, and 5000). The hidden layers parameter was allowed to be 2, 3, 4, or 5. Four different activation functions, namely tanh, ReLU, logistic, and identity, were evaluated. The obtained results showed that the MLP model provides better performance on most datasets when the parameters epochs, hidden layers, activation function, and optimizer are set to (1000, 3, ReLU, adam), respectively. The learning rate in this study is set to be adaptive. These settings are therefore fixed for the subsequent experimental work.
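For illustration, the searched parameter space can be expressed as a scikit-learn grid as below; this is a sketch of the search space, not the authors' exact protocol (they report fourteen selected combinations rather than a full grid, and the hidden-layer widths here are assumptions).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

param_grid = {
    "max_iter": [1000, 2000, 3000, 5000],                  # epochs
    "hidden_layer_sizes": [(50,) * n for n in (2, 3, 4, 5)],
    "activation": ["tanh", "relu", "logistic", "identity"],
    "solver": ["sgd", "adam"],
}
search = GridSearchCV(MLPClassifier(learning_rate="adaptive"),
                      param_grid, scoring="roc_auc", cv=3)
# search.fit(X_train, y_train) would select the best combination by AUC
```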

5.3 Evaluation Results Using Bagging Ensemble Method

For an intensive investigation of the performance of the MLP classifier, it is compared with six traditional classifiers, KNN, NB, LDA, LR, SVM, and DT, in terms of test AUC rates. The obtained results are reported in Table 3 (denoted as basic). To provide a fair comparison between these classifiers and find the overall rank, the average ranking values of the Friedman test (F-Test) are also discussed. The results of the basic classifiers in Table 3 reveal that MLP outperforms the other competitors, achieving higher AUC on 63% of the utilized datasets. These results confirm the superiority of the MLP classifier in handling the SFP problem.

Extensive experiments were conducted to investigate whether ensemble learning exceeds individual classifiers in dealing with the SFP problem. A paired comparison between each individual classifier and the corresponding ensemble bagging method is presented in Table 3. The table clarifies that in most cases, the ensemble methods exceed the traditional classifiers in achieving higher AUC values. Regarding the MLP classifier, the ensemble MLP outperforms the basic MLP on 15 out of 16 datasets (94%). Regarding the overall rank of all methods in the table, the top five methods are ensemble MLP, RF, ensemble DT, MLP, and ensemble KNN. Overall, the proposed ensemble MLP presents the best performance compared with the other approaches. This confirms that ensemble MLP makes a powerful model that provides superior performance compared to other classifiers when dealing with the utilized datasets.

Table 3 Evaluation results using bagging (ensemble) methods: test AUC per dataset (ant1.7 through xalan2.6) for the basic and ensemble (bagging) variants of MLP, KNN, NB, LDA, LR, SVM, and DT, together with RF, with Wilcoxon p-values per basic/ensemble pair. The Friedman-test (F-Test) average ranks are:

Method         F-Test   Rank      Method          F-Test   Rank
ensemble MLP   2.13     1         LDA             9.00     9
RF             2.25     2         SVM             10.19    10
ensemble DT    3.81     3         ensemble LR     10.63    11
MLP            4.81     4         ensemble SVM    10.75    12
ensemble KNN   5.94     5         LR              11.13    13
KNN            6.06     6         ensemble NB     13.63    14
DT             7.13     7         NB              14.50    15
ensemble LDA   8.06     8

Bold indicates the best values

Table 4 Comparison between the proposed approach and the state-of-the-art methods in terms of AUC rates: our results (RF and ensemble MLP) against VEBHHO (Thaher and Arman [28]), EBMFOV3 (Tumar et al. [4]), LR, NB, 5NN, and C4.5 (Shatnawi [29]), Bayesian networks (Okutan and Yildiz [30]), and L-RNN without CV and NB (Turabieh and Mafarja [6]), reported per dataset (ant1.7 through xalan2.6). Bold indicates the best values

5.4 Comparing with State-of-the-Art Methods

In this section, we validate the AUC results of the proposed ensemble MLP by comparing them with those reported in preceding works. For this purpose, we compared the proposed method with VEBHHO [28], EBMFOV3 [4], LR, NB, 5NN, and C4.5 [29], Bayesian networks [30], and L-RNN and NB [6]. Based on the results in Table 4, it can be recognized that the proposed model outperforms the other competitors in most cases. Additionally, ensemble MLP displays superiority on around 85% of the datasets over the recently published VEBHHO [28] approach and on 100% of the datasets over the recently published EBMFOV3 [4]. In this way, our proposed approach proves its efficiency and accuracy in classifying the majority of the datasets compared to the works in the literature.

6 Conclusion and Future Works

In this paper, a classification framework based on the aggregation of multiple MLP classifiers (ensemble MLP) combined with the SMOTE technique was proposed with the aim of enhancing prediction performance for the SFP problem. An ensemble learner was employed as the classification model, while SMOTE was utilized to handle the problem of imbalanced data. Sixteen real-world object-oriented projects from the PROMISE repository were utilized to evaluate the proposed model. The experimental results demonstrated that the MLP classifier is significantly sensitive to its parameters. It was also noted that the ensemble methods have a more significant influence on the performance of the SFP model than the individual classifiers used in the comparisons. The comparison results revealed that our proposed approach is efficient in handling the SFP problem compared to well-known classifiers such as KNN, LDA, LR, DT, and SVM, as well as previous works. Overall, the proposed ensemble learner using MLP as a base classifier combined with SMOTE is recommended for the SFP problem in terms of prediction quality.

Future work will investigate swarm intelligence meta-heuristics to train the MLP. We will also exploit other ensemble methods such as boosting and stacking. Another exciting challenge we plan to examine is the adoption of SFP for the rapidly changing environment of Agile-based models such as extreme programming, where enough data to train the model is not available at the early stages of the SDLC.

References

1. Honest, N.: Role of testing in software development life cycle. Int. J. Comput. Sci. Eng. 7, 886–889 (2019)
2. Sommerville, I.: Software Engineering, 10th edn. Pearson, London (2015)
3. Rathore, S.S., Kumar, S.: A study on software fault prediction techniques. Artif. Intell. Rev. 51(2), 255–327 (2019)
4. Tumar, I., Hassouneh, Y., Turabieh, H., Thaher, T.: Enhanced binary moth flame optimization as a feature selection algorithm to predict software fault prediction. IEEE Access 8, 8041–8055 (2020)
5. Al Qasem, O., Akour, M.: Software fault prediction using deep learning algorithms. Int. J. Open Source Softw. Process. 10, 1–19 (2019)
6. Turabieh, H., Mafarja, M., Li, X.: Iterated feature selection algorithms with layered recurrent neural network for software fault prediction. Expert Syst. Appl. 122, 27–42 (2018)
7. Deep Singh, P., Chug, A.: Software defect prediction analysis using machine learning algorithms. In: 2017 7th International Conference on Cloud Computing, Data Science & Engineering—Confluence, pp. 775–781 (2017)
8. Thaher, T., Arman, N.: Efficient multi-swarm binary Harris hawks optimization as a feature selection approach for software fault prediction. In: 2020 11th International Conference on Information and Communication Systems (ICICS), pp. 249–254 (2020)
9. Khoshgoftaar, T., Van Hulse, J., Napolitano, A.: Comparing boosting and bagging techniques with noisy and imbalanced data. IEEE Trans. Syst. Man Cybern. Part A 41, 552–568 (2011)
10. Thaher, T., Mafarja, M., Abdalhaq, B., Chantar, H.: Wrapper-based feature selection for imbalanced data using binary queuing search algorithm. In: 2019 2nd International Conference on New Trends in Computing Sciences (ICTCS), pp. 1–6 (2019)
11. Yogesh, S., Arvinder, K., Malhotra, R.: Software fault proneness prediction using support vector machines. In: Lecture Notes in Engineering and Computer Science, vol. 2176 (2009)
12. Zhao, R., Yan, R., Chen, Z., Mao, K., Wang, P., Gao, R.X.: Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 115, 213–237 (2019). http://www.sciencedirect.com/science/article/pii/S0888327018303108
13. Balogun, A.O., Basri, S., Abdulkadir, S.J., Hashim, A.S.: Performance analysis of feature selection methods in software defect prediction: a search method approach. Appl. Sci. 9(13), 2764 (2019)
14. Catal, C., Diri, B.: Investigating the effect of dataset size, metrics sets, and feature selection techniques on software fault prediction problem. Inf. Sci. 179, 1040–1058 (2009)
15. Wolpert, D., Macready, W.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1, 67–82 (1997)
16. Sayyad Shirabad, J., Menzies, T.: The PROMISE Repository of Software Engineering Databases. School of Information Technology and Engineering, University of Ottawa, Canada (2005). http://promise.site.uottawa.ca/SERepository
17. Erturk, E., Sezer, E.A.: Iterative software fault prediction with a hybrid approach. Appl. Soft Comput. 49, 1020–1033 (2016). http://www.sciencedirect.com/science/article/pii/S1568494616304197
18. Jureczko, M., Madeyski, L.: Towards identifying software project clusters with regard to defect prediction. vol. 9, p. 9 (2010)
19. Chidamber, S.R., Kemerer, C.F.: A metrics suite for object oriented design. IEEE Trans. Softw. Eng. 20(6), 476–493 (1994)
20. Martin, R.D.: Object Oriented Design Quality Metrics: An Analysis of Dependencies (1995)
21. Henderson-Sellers, B.: Object-Oriented Metrics: Measures of Complexity (1996)
22. Bansiya, J., Davis, C.G.: A hierarchical model for object-oriented design quality assessment. IEEE Trans. Softw. Eng. 28(1), 4–17 (2002)
23. Tang, M.-H., Kao, M.-H., Chen, M.-H.: An empirical study on object-oriented metrics. In: Proceedings Sixth International Software Metrics Symposium (Cat. No. PR00403), pp. 242–249 (1999)
24. McCabe, T.J.: A complexity measure. IEEE Trans. Softw. Eng. SE-2(4), 308–320 (1976)
25. Fausett, L.: Fundamentals of Neural Networks: Architectures, Algorithms, and Applications (1993)
26. Windeatt, T.: Ensemble MLP Classifier Design, pp. 133–147. Springer, Berlin, Heidelberg (2008)
27. Franc, I., Macek, N., Bogdanoski, M., Dokić, D.: Detecting malicious anomalies in IoT: ensemble learners and incomplete datasets. In: The Eighth International Conference on Business Information Security (BISEC) (2016)
28. Thaher, T., Heidari, A.A., Mafarja, M., Dong, J.S., Mirjalili, S.: Binary Harris Hawks Optimizer for High-Dimensional, Low Sample Size Feature Selection, pp. 251–272. Springer, Singapore (2020). https://doi.org/10.1007/978-981-32-9990-0_12
29. Shatnawi, R.: The application of ROC analysis in threshold identification, data imbalance and metrics selection for software fault prediction. Innov. Syst. Softw. Eng. 13(2), 201–217 (2017). https://doi.org/10.1007/s11334-017-0295-0
30. Okutan, A., Yildiz, O.: Software defect prediction using Bayesian networks. Empirical Softw. Eng. 19, 154–181 (2014)

Modeling the Relationship Between Distance and Received Signal Strength Indicator of the Wi-Fi Over the Sea to Extract Data in Situ from a Marine Monitoring Buoy

Miguel Angel Polo Castañeda, Constanza Ricaurte Villota, and Danay Vanessa Pardo Bermúdez

Abstract In situ ocean monitoring is expensive and involves risks for human crews. In order to reduce the associated costs and risks, it is essential to have seamless ways of communicating with monitoring devices such as marine buoys. Wireless in situ communication is attractive, but no known marine monitoring system uses this technology as a means of extracting data over the sea. This study therefore aims to find the maximum distance at which devices can connect for the visualization and extraction of data without losing information in the process, by modeling the equation that relates the received signal strength indicator to the separation distance from the monitoring system, allowing the development of a prototype buoy that includes wireless data extraction at sea. The results of the modeling show that data can be safely retrieved up to a distance of 41 m.

Keywords Wi-fi · Marine buoy · Data collection · RSSI

1 Introduction

Human beings depend on marine and coastal systems, given their influence both on human activities (industrial, tourist, and urban development) and on natural processes (climate, extreme events, etc.). This has made evident the need to monitor the coastal marine environment, for which it is necessary to create and implement monitoring systems that allow greater temporal resolution at lower cost [1]. To accurately monitor the health of oceanic systems, it is necessary to know the temporal behavior of their physical, chemical, and biological variables, such as temperature, acidity, dissolved oxygen, salinity, and chlorophyll, among others [2]. In order to counter any threat or predict an environmental catastrophe through knowledge of real-time changes in these variables, it is necessary to have different spatially distributed sensors, creating a wireless sensor network (WSN) that can transmit these data instantly by satellite or by a mobile communications system (GSM or GPRS) [3].

Common situations in marine monitoring through buoys include in-field analysis, data download, and transmission failures, among others. These situations can impose challenges, since maintenance involves handling this equipment at sea. Even though marine monitoring systems have real-time transmission, when this service is not available or when changes or preventive maintenance are made in situ, the data stored on an SD memory card are extracted as a backup. Although this is a frequent task, current systems require a direct connection to the datalogger, despite the difficulties involved in handling it at sea, especially in adverse sea conditions and in highly dynamic places. For this reason, connecting to the buoy wirelessly through Wi-Fi is an option.

Some monitoring systems, such as HOBO Data Loggers, offer wireless data extraction via Bluetooth [4]. To use this technology, the user must have an application that allows data extraction, as is the case with HOBOmobile® from Onset [5]. However, if the device with the application fails and there is no backup equipment, the field trip could be affected. Although Bluetooth and Wi-Fi work in the same 2.4 GHz frequency band [6], as of 2017, no monitoring buoys are known to offer wireless data extraction and visualization over an in situ Wi-Fi network [7, 8]. Wi-Fi, in its IEEE 802.11g version, is compatible with the vast majority of computers and everyday communication equipment such as PDAs and smartphones [9]. This compatibility could allow connecting through a web browser to the IP address of the datalogger and extracting the data without an installed application.

By knowing how the Wi-Fi network behaves with increasing distance, it is possible to assess a threshold for viable communication. Knowing this threshold would allow reliable in situ data visualization and extraction, since if the received signal strength indicator (RSSI) at the device collecting the data is below −90 dBm, it is practically impossible to establish a connection [10]. The RSSI is a reference scale relative to 1 milliwatt (mW) used to measure the strength of a wireless signal (typically Wi-Fi or mobile telephony) at the receiver; its unit is the dBm (decibel-milliwatt) [11]. On this scale, 0 dBm is the ideal signal, and values are negative numbers: the more negative, the greater the signal loss [10].

Since Wi-Fi is an electromagnetic wave that travels through the air, in any environment other than free space it suffers propagation losses. Because water is a conductor and behaves like a mirror for electromagnetic waves, Wi-Fi links over water are among the most complex to design. As there are no obstacles, multiple signals bounce off the sea and reach the receiver at different times, causing interference. Since the sea is not still, these multipaths and attenuations change continuously, increasing or decreasing according to the angle at which the radio signals are reflected [12].


The signal strength decreases when the receiver encounters terrain obstacles or other objects, such as sea waves, so the signal reaches the receiver after undergoing diffraction or reflection around these obstacles. If the size and shape of the obstacles, in this case the sea waves, are known, the additional losses that occur along the way can be calculated. If only general information about the environment is available, path loss can be estimated from measurements made in similar situations [12]. The objective of this study is to model the relationship between distance and the strength of the Wi-Fi signal over the sea by means of an equation that represents this relationship. Results from this modeling will allow the implementation of this technology for downloading data from a marine monitoring buoy, as well as evaluating the maximum distance from which operators can perform this task.

2 Materials and Methods

To model the equation that represents the relationship between the signal strength of the Wi-Fi network over the sea and the separation distance from the emitter, a study was carried out in which the RSSI was measured at different points. Equation (1), taken from [13], was used to estimate the number of replicates at each measurement point:

r = N · Z² · p · (1 − p) / [(N − 1) · e² + Z² · p · (1 − p)]    (1)

N = ideal replica size
Z = deviation from the mean value accepted at the desired confidence level
e = error range
p = proportion expected to approximate
r = replica size

After identifying the number of replicates at each measurement point, the distances at which signal strength data would be obtained were determined. In accordance with the recommendation proposed in ITU-R P.1406-2, a certain number of measurements must be carried out at equal intervals along a distance of 40 wavelengths, these repetitions being the value r mentioned above. This process was repeated for further distance intervals of 40 wavelengths until the entire area of interest was covered [12]; in this case, it was continued until it was no longer possible to connect to the Wi-Fi network. Once the number of observations and the separation distance between measurements had been identified, the mobile application "Información de señal de red" [14] for Android was used to measure the RSSI of the Wi-Fi network, and different distances from the transmitter were set up. After these preparations, the strength tests were carried out at each of the selected distances on Salguero beach, in the city of Santa Marta, Colombia, on June 25, 2020.
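As a quick sanity check of Eq. (1), the calculation can be reproduced in a few lines of Python; the values are those reported in Sect. 3, and the function name is ours, not the paper's:

```python
def replicate_size(N: float, Z: float, e: float, p: float) -> float:
    """Sample-size formula of Eq. (1), taken from [13]."""
    return (N * Z ** 2 * p * (1 - p)) / ((N - 1) * e ** 2 + Z ** 2 * p * (1 - p))

# Values used in Sect. 3: ideal replica size 36, 90% confidence,
# 10% error range, 90% expected proportion.
r = replicate_size(N=36, Z=0.9, e=0.1, p=0.9)
print(round(r))  # -> 6 replicates per measurement point, as in the paper
```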


After collecting all the RSSI data at the different distances, we proceeded to model the equation that represents them. The most common signal propagation model in wireless sensor networks is the free-space model, but due to the movement of the sea waves, it was determined that the system operates in a non-line-of-sight (NLOS) environment, for which a predictive model represents the losses over a separation dᵢ between the receiver and the transmitter:

P(d_i) = P(d_0) + 10 · n · log10(d_i / d_0)    (2)

n = loss exponent
P(d_0) = known loss at a reference distance d_0 [15]

As the loss exponent is an empirical constant that depends on the propagation environment [15], its value had to be found by solving Eq. (2) for n, obtaining:

n = [P(d_i) − P(d_0)] / [10 · log10(d_i / d_0)]    (3)

Using n, the received signal strength is related to the separation distance through the following equation, where A is the received signal strength at a 1-m distance:

RSSI = 10 · n · log10(d) + A    (4)
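A minimal sketch of how Eqs. (3) and (4) can be applied, assuming a known reference loss at d₀ = 1 m; the numeric values are illustrative, chosen to be consistent with the fitted parameters reported later (A ≈ −37.67 dBm, n ≈ −2):

```python
import math

def path_loss_exponent(p_di: float, p_d0: float, di: float, d0: float = 1.0) -> float:
    """Eq. (3): solve the log-distance model of Eq. (2) for the loss exponent n."""
    return (p_di - p_d0) / (10 * math.log10(di / d0))

def predicted_rssi(d: float, n: float, A: float) -> float:
    """Eq. (4): predicted RSSI (dBm) at distance d, with A = RSSI at 1 m."""
    return 10 * n * math.log10(d) + A

n = path_loss_exponent(p_di=-65.83, p_d0=-37.67, di=30)  # ~ -1.9
print(predicted_rssi(50, n=-2, A=-37.67))                # ~ -71.6 dBm
```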

Equation (4), once its parameters are fitted, is the result of this study: an RSSI model of the Wi-Fi network as a function of distance over the sea for the extraction of data from a marine monitoring buoy. To check the fitted model, we entered the sea and carried out connection tests with a prototype buoy built around a Raspberry Pi, which is responsible for the storage, processing, and transmission of data. Because the Raspberry Pi network card was not preconfigured as an access point, the RaspAP library was used, which made it possible to quickly obtain a Wi-Fi access point and create a default configuration with all the necessary requirements [16]. On the Raspberry Pi, a Django server with PostgreSQL data storage was deployed. It was accessed with a mobile device that connected to the server at its IP through the network generated by the access point, and the data download tests were performed at the points where the signal strength values were taken, at a height of about 1 m.

3 Results

Applying the methodology described above, Eq. (1) was first used with an ideal replica size of 36 to obtain a mean value with a precision of 1 dB [12]. With a confidence level of 90%, a margin of error of 10%, and an expected proportion of 90%, the following values were entered into the equation:

N = 36
Z = 0.9
e = 0.1
p = 0.9

Entering these data into Eq. (1), it was obtained that 6 replicates had to be made at each monitoring point without losing the confidence or precision of the data. The measurements were made in accordance with Recommendation ITU-R P.1406-1, which indicates that measurements should be taken every 40 wavelengths [12]. Since Wi-Fi works at 2.4 GHz, and c, the speed of light in a vacuum, is 3 × 10⁸ m/s, the wavelength is equal to:

λ = c / f = (3 × 10⁸ m/s) / (2.4 × 10⁹ Hz) = 0.125 m    (5)

Because the wavelength is 0.125 m and measurements must be made every 40 wavelengths, the RSSI had to be monitored every 5 m from the emitter, with 6 replicates at each point. With these calculations, a Motorola G5 cell phone was placed as the access point and a Xiaomi Redmi Note 8 was used to measure the signal intensity with the application "Información de señal de red" on Salguero beach in Santa Marta, Colombia. The results of this study are found in Table 1. It should be noted that the higher the Wi-Fi access point device was placed with respect to the water surface, the better the signal reception intensity, which is attributed to the increase of the Fresnel zone radius, preventing the signal from being affected by ocean waves [17]. From the tests, the maximum connection distance was found to be approximately 90 m, although at this distance the Wi-Fi signal was lost and it was not possible to measure the signal strength at that point. Figure 1 shows each of the replicates versus distance, and it can be seen that the regression model representing the behavior of the averages of the data in Table 1 is a logarithmic function of distance, as expected from Eq. (4). To determine the parameters of Eq. (4), the value of n was found from Eq. (3), taking the averages from Table 1, giving as a result a value of n of −2. Based on the collected data, expressing the distance as x (m) and the RSSI as y (dBm), the relationship between distance and received signal strength was modeled from Eq. (4) as:

y = −20 · log10(x) − 37.67    (6)

Equation (6), together with the data obtained in Table 1, is represented in Fig. 2, which constitutes the result of the study.


Table 1 Signal strength versus distance

Distance (m)   Signal strength (dBm)
1              −38  −36  −35  −37  −43  −37
5              −50  −52  −49  −52  −51  −51
10             −51  −54  −57  −55  −55  −58
15             −54  −53  −60  −60  −51  −62
20             −58  −59  −61  −60  −60  −64
25             −65  −63  −66  −61  −63  −66
30             −68  −66  −63  −66  −65  −67
35             −68  −65  −69  −68  −67  −70
40             −72  −69  −68  −69  −73  −70
45             −70  −72  −70  −73  −70  −71
50             −75  −71  −72  −69  −70  −73
55             −74  −73  −71  −70  −72  −75
60             −74  −72  −79  −76  −80  −74
65             −73  −79  −80  −81  −78  −75
70             −79  −78  −77  −81  −79  −76
75             −80  −81  −78  −78  −80  −77
80             −81  −77  −81  −80  −82  −77
85             −80  −82  −83  −85  −85  −80

[Figure: six replicate series (Series1 to Series6) of measured RSSI; x-axis: Distance (m), y-axis: Signal strength (dBm).]

Fig. 1 Signal strength versus distance

[Figure: experimental data and theoretical model; x-axis: Distance (m), y-axis: Signal strength (dBm).]

Fig. 2 Modelling of signal strength versus distance

To find the maximum recommended distance for extracting the data from the marine monitoring buoy with minimum data loss, x was solved from Eq. (6):

x = 10^((y + 37.67) / (−20))    (7)

Based on [10], to ensure a reliable connection to the monitoring system, the RSSI value must be greater than −70 dBm. Substituting −70 dBm into Eq. (7), it was found that, to download or display data from the marine monitoring buoy with a guarantee of extracting the data without problems, the maximum distance at which the operator should be is 41 m. Although the operator can be farther away, it is advisable not to exceed that distance. This was verified when entering the sea with the Raspberry Pi and finding that at that distance it was possible to view the data being acquired and download it without any drawback.
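The 41-m figure follows directly from Eq. (7); a short check in Python, with the threshold of [10] as input:

```python
def max_distance(rssi_dbm: float, A: float = -37.67, n: float = -2.0) -> float:
    """Eq. (7): distance at which the fitted model of Eq. (6) predicts rssi_dbm."""
    return 10 ** ((rssi_dbm - A) / (10 * n))

print(max_distance(-70))  # -> ~41.4 m for the -70 dBm reliability threshold
```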

4 Conclusions

This study presents relevant data for future studies of Wi-Fi connectivity at sea. Our results show that it is feasible to use this technology as a method of data extraction, since the device that connects to the Wi-Fi network generated by the monitoring buoy may be separated by up to 41 m, guaranteeing the connection without losing data and with a connection quality of up to 60%. The equation that models the relationship between the distance and the received signal strength indicator of a Wi-Fi network over the sea is y = −20 · log10(x) − 37.67. In


operational terms, 41 m is a safe distance, both for the equipment and for the operator, and knowing it will allow the times and conditions for performing this task to be improved in the future. Additionally, the data collection study found that the higher the access point device and the data reception device are located, the better the signal, because the Fresnel zone radius increases. It is therefore recommended to install the antenna at the highest point of the monitoring system, which would guarantee a greater distance for data download.

References

1. Albaladejo Pérez, C.: Propuesta de una Red de Sensores Inalámbrica para un Sistema de Observación Costero (2011)
2. Albaladejo, C., Sánchez, P., Iborra, A., Soto, F., López, J.A., Torres, R.: Wireless sensor networks for oceanographic monitoring: a systematic review. Sensors (Basel) 10(7), 6948–6968 (2010). https://doi.org/10.3390/s100706948
3. Paz, H., Arévalo, J.A., Ortiz, M.A.: Design and development of an electronic device for data transmission with universal coverage. Ing. e Investig. 35(1), 84–91 (2015). https://doi.org/10.15446/ing.investig.v35n1.46071
4. Humidity, R., Point, D.: HOBO® MX1101 data logger, pp. 5–6 (2015)
5. Onset Computer Corporation: HOBOmobile (2020). https://play.google.com/store/apps/details?id=com.onsetcomp.hobo. Accessed June 23, 2020
6. Garroppo, R.G., Gazzarrini, L., Giordano, S., Tavanti, L.: Experimental assessment of the coexistence of Wi-Fi, ZigBee, and Bluetooth devices. In: 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2011—Digital Proceedings (2011). https://doi.org/10.1109/WoWMoM.2011.5986182
7. Xu, G., Shen, W., Wang, X.: Applications of wireless sensor networks in marine environment monitoring: a survey. Sensors (Switzerland) 14(9), 16932–16954 (2014). https://doi.org/10.3390/s140916932
8. Zainuddin, Z., Wardi, Nantan, Y.: Applying maritime wireless communication to support vessel monitoring. In: Proceedings—2017 4th International Conference on Information Technology, Computer, and Electrical Engineering ICITACEE 2017, vol. 2018-Jan, pp. 158–161 (2017). https://doi.org/10.1109/icitacee.2017.8257695
9. Cázarez Ayala, G., Castillo Meza, H., Fonseca Beltrán, J.: Unidad de adquisición de datos y medición basada en protocolo de comunicación Wi-Fi. Ra Ximhai 8, 355–366 (2012). https://doi.org/10.35197/rx.08.02.2012.11.gc
10. Domingo, J.D., Somolinos, C.C., Valero, E.: Localización de personas mediante cámaras RGB-D y redes inalámbricas. XXXVI Jornadas de Automática, pp. 2–4 (2015)
11. Adewumi, O.G., Djouani, K., Kurien, A.M.: RSSI based indoor and outdoor distance estimation for localization in WSN. In: Proceedings of the IEEE International Conference on Industrial Technology, pp. 1534–1539 (2013). https://doi.org/10.1109/icit.2013.6505900
12. Unión Internacional de Telecomunicaciones: Método de predicción de la propagación específico del trayecto para servicios terrenales punto a zona en las bandas de ondas métricas y decimétricas, Serie P, vol. 1 (2009)
13. Actualícese: Determinación del tamaño de una muestra en auditoría (2016). https://actualicese.com/determinacion-del-tamano-de-una-muestra-en-auditoria/. Accessed June 26, 2020
14. KAIBITS Software GmbH: Información de señal de red—Aplicaciones en Google Play (2020). https://play.google.com/store/apps/details?id=de.android.telnet. Accessed June 26, 2020


15. Oguejiofor, O.S., Okorogu, V.N., Adewale, A., Osuesu, B.O.: Outdoor localization system using RSSI measurement of wireless sensor network. Int. J. Innov. Technol. Explor. Eng. 2(2), 1–6 (2013)
16. GitHub—billz/raspap-webgui: configuración simple de AP y gestión de WiFi para dispositivos basados en Debian. https://github.com/billz/raspap-webgui. Accessed June 23, 2020
17. Correa, A.C., Godoy, S.R., Grote, H.W., Orellana, F.M.: Evaluación de enlaces inalámbricos urbanos usando protocolo IEEE 802.11b. Rev. Fac. Ing. Univ. Tarapacá 13(3), 38–44 (2005). https://doi.org/10.4067/s0718-13372005000300006

Data Classification Model for Fog-Enabled Mobile IoT Systems

Aung Myo Thaw, Nataly Zhukova, Tin Tun Aung, and Vladimir Chernokulsky

Abstract Fog computing has become a key solution for reducing latency and energy consumption when collecting and processing data in IoT systems. The advantages of fog computing can also be used to collect and process data in new mobile IoT systems. This is a challenging task because these systems contain multiple mobile devices that generate huge amounts of heterogeneous data. In many cases, the data is redundant, and the same data can be sent several times. Such data is not useful for end users. Therefore, we propose a new model for collecting and processing data in fog-enabled mobile IoT systems. Instead of collecting and processing huge amounts of data using cloud technologies, the data is collected and processed on fog nodes. The new model is based on classification algorithms, in particular the K-nearest neighbor algorithm, which is used to classify the heterogeneous data generated by mobile devices. This reduces the amount of data that is transmitted to the cloud for further processing and storage. The proposed model can be considered an effective intermediary for collecting and processing data in mobile IoT systems.

Keywords Fog computing · Data classification · Mobile IoT network

A. M. Thaw (B) · T. T. Aung
ITMO University, St Petersburg, Russia 197101
N. Zhukova
St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St Petersburg, Russia 199178
V. Chernokulsky
Saint Petersburg Electrotechnical University "LETI", St. Petersburg, Russia
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_11


1 Introduction

In recent times, multiple physical devices have been connected with each other through Internet networks. For these networks, a new concept called the "Internet of Things" (IoT) has been developed. Nowadays, the majority of IoT devices are mobile devices that produce huge amounts of data. They are connected in networks using wireless technologies. Such devices and networks are considered within the concept of the mobile IoT [1]. The mobile IoT forms the base for many different smart environments such as the smart city, smart transportation, and smart home [2, 3].

In IoT systems, the data generated by the devices is sent to the cloud. The cloud computing infrastructure is used to store and process the sensed data and provide it to users anytime and anywhere. Cloud computing is based on the centralization of computing services. These technologies do not allow gathering and processing, in real time, the huge amounts of heterogeneous data produced by mobile IoT devices. Under these conditions, IoT systems face latency and energy-consumption problems, and with the increase in the size of IoT networks, the problems become more complicated. To solve these problems, fog computing technologies have been developed [4, 5]. Fog computing provides a substantial amount of computational resources for processing and storing the data generated by the devices. In fog-enabled systems, fog nodes form an intermediate layer between end devices and cloud services. The data generated by the devices is sent to the fog nodes, and after it is processed by the nodes, it is transferred to the cloud. The main advantage of fog processing is that the fog nodes that provide the resources are placed in proximity to the end devices, which allows data to be processed in real time.

In mobile IoT systems, the capacity of fog nodes to process data in real time can be used to reveal redundant as well as repeated data. Such data is excluded from the data transferred to the cloud, making it possible to reduce the amount of data processed and stored in the cloud. To reveal redundant and repeated data, classification techniques can be used.

In this paper, we propose a data classification model for fog-enabled mobile IoT systems. The mobile IoT systems that we consider are capable of classifying data: the heterogeneous data generated by the mobile devices is classified on the fog nodes, and the redundant and repeated data is removed from the input data streams. After this processing, the data can be sent to the cloud and stored there as history data, provided to users, shared between fog services, etc.

The paper has the following structure. In the second section, existing classification techniques are discussed. In the third section, the proposed data classification model for fog-enabled systems is presented. In the fourth section, its performance is evaluated and the obtained experimental results are described. Finally, the conclusion is given and directions for further development of the proposed model are discussed.


2 Background

For data classification, multiple techniques have been proposed, including K-nearest neighbors (KNN), decision trees, hierarchical classifiers, neural networks, the Naïve Bayes classifier, support vector machines (SVM), and others. These classifiers have been successfully used by researchers in solving various complex practical tasks that require information retrieval [6]. In particular, classifiers have been used to discriminate classes of objectivity and polarity in Twitter posts [7], to classify heterogeneous web content [8], etc.

In early studies of wireless sensor networks (WSN) [9, 10], machine learning and data mining methods were used to solve the big data problem. They assumed classification of both real-time and historical data. Whenever the data was classified, a class was assigned to it from an initially predefined set of classes. For each class, additional information about the data belonging to it was provided; commonly, the characteristics of each class of data were specified. This information was taken into account in data processing. In IoT systems, this information can be used to remove redundant and repeated data.

Since then, many models have been developed, a number of which allow classifying heterogeneous data gathered in mobile IoT networks. In [11], a probabilistic neural network (PNN) model is proposed that allows classifying data in real time. The model is based on well-established statistical principles and a Bayesian decision strategy. The model includes three layers: the input data layer, the layer of base classifiers, and the layer of the ensemble of classifiers. The base classifiers are Bayesian classifiers, and AdaBoost is used to build the ensemble. This model is adaptive to dynamic and heterogeneous IoT environments and also reduces the computational complexity of IoT data processing.

In [12], a system model based on SVM classification is proposed. The model can effectively classify two types of data streams in real time: video and audio streams. In the model, the size of the files and the type of the data are used as the features for data classification. The model compares the features of incoming data streams with the expected features, which are defined according to the sizes of the files received before. The results of the experiments showed that using information about file sizes improves the accuracy of data classification.

In [13], a machine learning method for noise classification based on the SVM and KNN algorithms is proposed. In this model, mel-frequency cepstral coefficients (MFCC) are calculated and used as the features of audio streams. The method is targeted at the classification of data streams that contain sounds of the environment. In the first step, MFCCs are calculated for incoming audio streams. In the second step, the audio streams are classified by applying the SVM and KNN classifiers to the calculated coefficients. The experimental results showed that the KNN and SVM algorithms achieve high accuracy in classifying high-frequency noise sounds. This method is suitable for noise classification in smart cities, as it provides low consumption of energy and computational resources.


Based on classification techniques, multiple security models have also been developed. They are targeted at detecting anomalous behavior of devices and revealing attacks on WSN networks. Originally, SVM was developed to detect and classify various types of behavior in social, industrial, and other fields. In [14], a c-SVM model was proposed and used as an intrusion detection system (IDS) to detect routing attacks in WSNs. The model was trained using available data about malicious activities in local sensor networks. IoT systems can also use SVM models to detect network intrusion and spoofing attacks.

In [15], a two-layer dimension reduction and two-tier classification (TDTC) model for intrusion detection in IoT networks was proposed. The model is based on the identification of anomalous behavior in the networks. It can effectively detect hard-to-detect intrusions, such as remote-to-local and user-to-root attacks. The two-tier classification model includes Naïve Bayes and KNN classifiers. TDTC is trained using the transformed datasets.

In [16], a honeypot-based model is developed that uses a honeypot framework for DDoS attack detection. The base of the framework is real-time DDoS detection. To distinguish normal traffic from DDoS traffic, binary classifiers are used, such as K-nearest neighbors, random forests, support vector machines, and deep neural networks. The model is trained using the collected data, usually presented in the form of log files.

3 Data Classification Model

3.1 General Structure of the Data Classification Model

The use of fog computing provides many advantages for IoT systems. Fog-enabled systems have low latency of data collection and processing, and they provide low energy consumption for end devices. To obtain the same advantages in mobile IoT systems, we propose a new model. This model provides real-time processing of the heterogeneous data incoming from mobile devices, achieved by processing the initial data using classification algorithms. All computational operations are executed on the fog nodes. Based on the results of data classification, the amount of data transferred to the cloud from the devices is reduced severalfold. The structure of the proposed model is shown in Fig. 1. The model has three layers: the physical layer, the fog layer, and the cloud layer.


Fig. 1 Structure of data classification model for fog-enabled IoT systems

A. Physical layer. At the physical layer, heterogeneous data is collected from IoT mobile devices using clustering techniques, in particular the LEACH-M algorithm [17]. The use of clustering techniques solves the problem of dynamic routing in mobile networks, and building dynamic routes reduces the energy consumption of mobile devices. The dynamic routes are built in the following way. Each IoT device sends its location and identification number (ID) to the fog node. The fog node defines cluster heads based on the devices' locations, the remaining power of the devices, and the level of trust in them. The level of trust is defined according to the access control lists or blacklists of the nodes that are available on the fog nodes. Access control lists contain the IDs of nodes that have a high level of trust and can be used as cluster heads; blacklists contain the IDs of malicious nodes. The lists are formed according to the algorithm proposed in [18]. Due to the mobility of the devices, it is assumed that the clusters are rebuilt over time. In networks in which clustering techniques are used for data collection, end devices send their data to cluster heads, and the cluster heads transmit the data to the fog nodes (a sketch of this cluster-head selection is given after this list).

B. Fog layer. The fog nodes are located between end devices and cloud services. At this level, the following tasks are solved: the fog nodes define the heads of the clusters at the physical layer, collect data from the cluster heads, and classify the collected data. Data classification defines the classes of the collected data. The information about the classes to which the data belongs is used to provide data to end users according to their requests or to reduce the amount of data before sending it to the cloud. In the fog layer, the distributed fog nodes can collaborate with each other when they solve data collection and data processing tasks. They can also share resources to achieve common goals.

C. Cloud layer. Cloud storage is used to store and manage historical data. The cloud layer also provides services to analyze the historical data on user requests. The data provided by cloud storage is often used to predict the future state of the networks.
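The cluster-head selection sketched below follows the description in item A above (location, remaining power, and trust lists); it is a hypothetical Python illustration, not the LEACH-M algorithm of [17], whose details are given in that reference:

```python
from dataclasses import dataclass

@dataclass
class Device:
    dev_id: str
    x: float
    y: float
    power: float  # remaining energy reported by the device

def select_cluster_heads(devices, acl, blacklist, cell_size=100.0):
    """Pick one trusted, most-powerful device per spatial cell as cluster head."""
    heads = {}
    for d in devices:
        if d.dev_id in blacklist or d.dev_id not in acl:
            continue  # only nodes with a high level of trust may be heads
        cell = (int(d.x // cell_size), int(d.y // cell_size))
        if cell not in heads or d.power > heads[cell].power:
            heads[cell] = d
    return list(heads.values())
```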

3.2 The Structure of the Data Classification Model in the Fog Layer

The data classification model in the fog layer assumes the application of the following methods and algorithms.

A. History matrix method. The history matrix method is based on comparing the data incoming from the devices with the history data from the temporary fog storage. It is assumed that the IoT devices collect the data and then send it to the fog nodes. After classifying the received data, the fog nodes save it in the temporary storage. Using the stored data, the history matrix method estimates the new incoming data: if the new data is similar to previously collected data, i.e., repeated, it is rejected. In distributed smart environments, the temporary storage can be distributed among the fog nodes; in these conditions, fog nodes collaborate with each other to execute the history matrix method. The period of time during which the data is stored depends on the IoT environment.

B. Extraction of data features. In IoT systems, devices produce different types of data, such as video streams, audio streams, texts, and many others. Each data type has a different file size; for example, the size of a text file differs from the size of a video file. Thus, the incoming data can be characterized by two features: the size of the data file and the type of the data. These features are used to classify the incoming data streams.

C. Data classification. For data classification, the K-nearest neighbors classifier is used. KNN is one of the machine learning algorithms used for classification of different types of datasets. To classify data with KNN, it is necessary to define a set of training values whose classes are known. To classify a test value, KNN finds its neighbors among the training values; the training value that has the minimum distance to the test value is identified, and the test value is assigned to the class of that nearest training value.

In the data classification model, the KNN classifier is used to classify the data according to its format. Two classes of data are considered: sensitive data and non-sensitive data. Texts are assigned to the class of sensitive data; audio and video streams belong to the class of non-sensitive data. For each class, thresholds for the values of the data features are defined using history data. If the values of the features of the collected data do not exceed the defined thresholds, the data is excluded from further processing. The rest of the data, which exceeds the thresholds, is sent for further processing on the fog nodes.


D. Temporary storage of the data. The data can be stored in the temporary storage of fog nodes and in long-term storage provided by the cloud. Temporary storage is used to solve the following tasks: the first is to build dynamic routes for transferring data from the devices to the fog nodes; the second is to estimate the incoming data using the history matrix method. The resources of the temporary storage are limited. To free up resources, the data is automatically transferred from the temporary storage to the cloud; for this, a trigger mechanism configured on time and memory usage is used. For example, the data can be automatically transferred to the cloud when the amount of free memory is less than 10%. Currently, there are many databases that can be used as temporary storage, such as Redis, SQL, MySQL, and SQE. The Redis database is suitable for storing real-time data. It supports suitable data structures, such as lists of data types, and it provides hash-table functions and weighted sort algorithms. Sorted sets on time or location attributes allow users to search the database for the required data about smart things. Moreover, the data records are linked with the smart things' IDs using associative tables, so the records can be searched using the IDs of the things.
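A hedged sketch of the temporary-storage layout described above, using the redis-py client; the key names and fields are illustrative assumptions, not from the paper:

```python
import redis  # assumes the redis-py client is installed

r = redis.Redis(host="localhost", port=6379)

def store_record(thing_id: str, timestamp: float, record_id: str, payload: dict):
    # Sorted set keyed by time: lets users range-query recent data per thing.
    r.zadd(f"thing:{thing_id}:by_time", {record_id: timestamp})
    # Hash acts as the associative table linking the record to the thing's ID.
    r.hset(f"record:{record_id}", mapping={"thing_id": thing_id, **payload})

def records_since(thing_id: str, since: float):
    return r.zrangebyscore(f"thing:{thing_id}:by_time", since, "+inf")
```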

3.3 Algorithm for Data Classification for the Fog Node

The proposed data classification algorithm for the fog nodes is given below.
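In outline, the algorithm chains the history matrix check, feature extraction, and KNN classification of Sect. 3.2. The following is a minimal Python sketch of that flow under our own naming; `knn` is assumed to be any trained classifier with a scikit-learn-style `predict` method:

```python
import numpy as np

def fft_features(signal):
    """Mean, standard deviation, and variance of the FFT coefficients
    (the feature set used in the evaluation of Sect. 4)."""
    coeffs = np.abs(np.fft.fft(np.asarray(signal, dtype=float)))
    return np.array([coeffs.mean(), coeffs.std(), coeffs.var()])

def process_on_fog_node(sample, history, knn, thresholds, tol=1e-6):
    """Decide the fate of one incoming sample on a fog node."""
    feats = fft_features(sample)
    # 1. History matrix method: reject data already seen on this node.
    if any(np.allclose(feats, h, atol=tol) for h in history):
        return "repeated"
    history.append(feats)
    # 2. KNN classification into sensitive / non-sensitive data.
    label = knn.predict(feats.reshape(1, -1))[0]
    # 3. Threshold check: below-threshold data is excluded as redundant.
    if np.all(feats <= thresholds[label]):
        return "redundant"
    return "forward_to_cloud"  # remaining data goes to the cloud / end users
```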


4 Data Classification Model Evaluation

The proposed data classification model assumes classification of sensitive and non-sensitive data. Using the classification algorithm makes it possible to exclude repeated and redundant data and thus reduce the amount of data transferred to the cloud for further processing. To evaluate the proposed model, three types of data were used as incoming data: text data, video data, and data of other types. The text data was considered sensitive data, while video and data of other types were assigned to the class of non-sensitive data. The percentage of text data was 30%; the percentages of video data and data of other types were 20% and 50%, respectively. The length of the text data was 50 bits, the length of the video files was 1000 bits, and the length of other types of data was 400 bits. The other parameters used for the evaluation of the proposed data classification model are listed in Table 1.

From the incoming data, three features were extracted. To extract the features, the fast Fourier transform (FFT) was first performed on the data, representing the initial data in the frequency domain in the form of vectors of Fourier coefficients. For these vectors, the mean value, standard deviation, and variance were calculated. The feature dataset is defined as

Feature Dataset = {Mean, Standard Deviation, Variance}    (1)

The repeated data was rejected using the history matrix method. All the data sent from the nodes of the network to the fog nodes was stored in the temporary storage. When new data from the network nodes was received, it was compared with the data in the temporary storage; if it was identified as repeated data, it was not considered in further data processing. Each time new data was received, the history data in the temporary storage was updated. In the model evaluation, we considered that each node sends data 50 times and that the same data is sent 2 times. After rejecting the repeated data, the KNN algorithm was used to reveal the redundant data. The possible classes were the class of sensitive data and the class of non-sensitive data. To train the classifier, 70% of the data was used; the classifier was tested on the remaining 30% of the data. The classifier accuracy was estimated as the accuracy averaged over all iterations of collected data classification. The accuracy of the proposed KNN classifier is shown in Fig. 2. The proposed model achieves better

Table 1 Parameters that were used to evaluate the proposed classification model

No.  Description of the parameter                              Value of the parameter
1    No. of iterations of data collection and classification   100
2    Total number of nodes                                     50
3    Number of repeated observations                           2
4    Total number of features                                  3
5    Percent of text data                                      30%
6    Percent of video data                                     20%
7    Percent of data of other types                            50%
8    Length of the files of text data                          50 bits
9    Length of the files of video data                         1000 bits
10   Length of the files of data of other types                400 bits
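A minimal sketch of the 70/30 evaluation protocol described above, assuming scikit-learn; the synthetic feature vectors merely stand in for the FFT features, and the neighbor count is our assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder (mean, std, variance) feature vectors for the two classes.
sensitive = rng.normal(loc=1.0, scale=0.2, size=(60, 3))       # text data
non_sensitive = rng.normal(loc=3.0, scale=0.5, size=(140, 3))  # video / other
X = np.vstack([sensitive, non_sensitive])
y = np.array([0] * 60 + [1] * 140)

# 70% of the data trains the classifier; the remaining 30% tests it.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                          stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, knn.predict(X_te)):.2f}")
```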


Fig. 2 Accuracy of the proposed KNN classifier

Table 2 Comparison of the accuracy of the proposed method with other classification methods

No.  Classification method                      Average accuracy (%)  Data type
1    Proposed KNN classifier                    98                    Heterogeneous
2    Bayesian learning-based classifier [11]    98                    Texts
3    Enhanced SVMcW classifier [12]             88                    Heterogeneous
4    Hybrid + KNN-based classifier [19]         90                    Texts
5    SVM-based classifier [20]                  92.8                  Image

accuracy (98%) of heterogeneous data classification when compared to the other methods (see Table 2). The values of the features of non-repeated data are shown in Fig. 3. The values of the features of non-repeated and repeated data are shown in Fig. 4.

5 Conclusion

In this paper, we have proposed a new data classification model for fog-enabled mobile IoT systems. The model is supposed to be used on fog nodes. Usage of the


Fig. 3 FFT features of non-repeated data

model allows solving the main tasks of the fog layer under the conditions of the mobile IoT: in particular, collecting huge amounts of data from mobile devices, processing it in real time, and providing the data to end users according to their requests. The solution of these tasks in mobile IoT conditions has become possible through the use of classification algorithms on the fog nodes. This makes it possible to exclude repeated and redundant data from the initial data received from the devices, and thus to reduce the amount of data transferred to the cloud for further processing and long-term storage. As a result, the latency of the networks and the amount of consumed energy are also reduced. The proposed classification model uses the history matrix method and the KNN classifier. The history matrix method identifies and excludes repeated data from the incoming data streams. The KNN classifier is used to classify sensitive and non-sensitive data; for each class of data, thresholds for the values of the data features are defined, and if the values of the features of the incoming data do not exceed the thresholds, the data is excluded from further processing. The model is being intensively developed toward using various machine learning models and methods in order to reduce the latency and energy consumption of the networks of mobile devices. Particular attention is paid to using models and methods that have been developed for social networks. It is expected that their usage can significantly improve the processes of data collection and processing in mobile IoT networks.


Fig. 4 FFT features of repeated and non-repeated data

References

1. Ghaleb, S.M., et al.: Mobility management for IoT: a survey. EURASIP J. Wirel. Commun. Netw. 2016, 1–25 (2016)
2. Griego, D., Buff, V., Hayoz, E., Moise, I., Pournaras, E.: Sensing and mining urban qualities in smart cities. In: 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), Taipei, pp. 1004–1011 (2017). https://doi.org/10.1109/aina.2017.14
3. Al-Shariff, S.M., et al.: Smart transportation system: mobility solution for smart cities (2019)
4. Bonomi, F., Milito, R., Natarajan, P., Zhu, J.: Fog computing: a platform for internet of things and analytics. In: Bessis, N., Dobre, C. (eds.) Big Data and Internet of Things: A Roadmap for Smart Environments, pp. 169–186. Springer International Publishing (2014)
5. Wadhwa, H., Aron, R.: Fog computing with the integration of internet of things: architecture, applications and future directions. In: 2018 IEEE International Conference on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), Melbourne, Australia, pp. 987–994 (2018). https://doi.org/10.1109/BDCloud.2018.00144
6. Silla Jr., C., Freitas, A.: A survey of hierarchical classification across different application domains. Data Min. Knowl. Discov. 22(1–2), 31–72 (2011)
7. Fornacciari, P., Mordonini, M., Tomaiuolo, M.: A case-study for sentiment analysis on twitter. In: Proceedings of the 16th Workshop "From Objects to Agents" (WOA15), June 17–19, Naples, Italy
8. Dumais, S., Chen, H.: Hierarchical classification of web content. arXiv:1807.08825 [cs.LG]


9. Alsheikh, M.A., Lin, S., Niyato, D., Tan, H.P.: Machine learning in wireless sensor networks: algorithms, strategies, and applications. IEEE Commun. Surveys Tutorials 16(4), 1996–2018 (2014)
10. Buczak, A.L., Guven, E.: A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surveys Tutorials 18(2), 1153–1176 (2015)
11. Jan, T., Sajeev, A.S.M.: Boosted probabilistic neural network for IoT data classification. In: 2018 IEEE 16th International Conference on Dependable, Autonomic and Secure Computing, 16th International Conference on Pervasive Intelligence and Computing, 4th International Conference on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 408–411 (2018)
12. Jhuang, J.-W., et al.: The efficient data classification using SVMcW for IoT data monitoring and sensing. In: 2019 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), pp. 1–2 (2019)
13. Alsouda, Y., et al.: A machine learning driven IoT solution for noise classification in smart cities. arXiv:1809.00238 (2018)
14. Ioannou, C., Vassiliou, V.: Classifying security attacks in IoT networks using supervised learning. In: 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), pp. 652–658 (2019)
15. Pajouh, H.H., et al.: A two-layer dimension reduction and two-tier classification model for anomaly-based intrusion detection in IoT backbone networks. IEEE Trans. Emerg. Top. Comput. 7, 314–323 (2019)
16. Vishwakarma, R., Jain, A.K.: A honeypot with machine learning based detection framework for defending IoT based botnet DDoS attacks. In: 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), pp. 1019–1024 (2019)
17. Rady, A., Sabor, N., Shokair, M., El-Rabaie, E.M.: Mobility based genetic algorithm hierarchical routing protocol in mobile wireless sensor networks. In: 2018 International Japan-Africa Conference on Electronics, Communications and Computations (JAC-ECC), Alexandria, Egypt, pp. 83–86 (2018). https://doi.org/10.1109/jec-ecc.2018.8679548
18. Rehman, E., et al.: Energy efficient secure trust based clustering algorithm for mobile wireless sensor network. J. Comp. Netw. Commun. 2017, 1630673 (2017)
19. Nair, P., Kashyap, I.: Hybrid pre-processing technique for handling imbalanced data and detecting outliers for KNN classifier. In: 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), pp. 460–464 (2019)
20. Anwar, M.I., Khosla, A.K.: Fog classification and accuracy measurement using SVM. In: 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), pp. 198–202 (2018)

Multi-Objective Teaching–Learning-Based Optimization for Vehicle Fuel Saving Consumption

Trong-The Nguyen, Hong-Jiang Wang, Rong Hu, Truong-Giang Ngo, Thi-Xuan-Huong Nguyen, and Thi-Kien Dao

Abstract The multi-objective paradigm has arisen as a viable approach to solving complex optimization problems that integrate multiple disciplines. This paper sets forth a new multi-objective teaching–learning-based optimization (MTLBO for short) for the problem of automotive fuel consumption. A vehicle's fuel consumption is related to the fuel cost of traveling and the current traffic conditions, e.g., congestion and roads. The least use of fuel and the shortest routes are modeled as the objective functions for candidate solutions on the vehicle navigation route. A road transportation system is built based on wireless sensor network (WSN)-fitted sensor nodes and a car fitted with the global positioning system (GPS). The simulation results are contrasted with the Dijkstra and A* algorithm approaches. Results from experiments show that the proposed method increases precision.

Keywords Teaching–learning-based optimization · Multi-objective optimization · Vehicle fuel saving consumption

T.-T. Nguyen · H.-J. Wang · R. Hu · T.-K. Dao
Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China
e-mail: [email protected]
T.-K. Dao
e-mail: [email protected]
T.-G. Ngo (B)
Faculty of Computer Science and Engineering, Thuyloi University, 175 Tay Son, Dong Da, Hanoi, Vietnam
e-mail: [email protected]
T.-T. Nguyen · T.-X.-H. Nguyen
Department of Information Technology, Haiphong University of Manage and Technology, Haiphong, Vietnam
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_12


1 Introduction

Multi-objective optimization is about seeking a solution to a problem with more than one objective function [1, 2]. Such problems are multi-objective in nature, which makes them complicated and difficult to solve [3]. Two main approaches are used to solve them: a priori and posterior [4]. In the a priori approach, by aggregating the objectives, the multi-objective problem is first converted into a single-objective one. Each objective is given a weight based on the importance of the respective objective function. The downside of this strategy is that, to find the Pareto-optimal set [5], the algorithm must be run several times over. A posterior approach works the other way: it preserves the multi-objective structure of the problem and finds a Pareto-optimal set by running the algorithm only once. However, that approach requires a high computational cost. This method is widely used for solving real-world problems.

Urban planners, automobile developers, and drivers pay increasing attention to vehicle scheduling and vehicle routing [6]. Saving vehicle fuel involves not only a single criterion, such as the shortest distance, but also other criteria, such as running time, speed, and road conditions. Finding the best route from a starting position to a destination point is required to save the fuel used by the vehicle. The development of information technology and digital intelligence has resulted in the exponential growth of WSNs, which offer a viable way to address the associated fuel consumption problems [7, 8]. Networks of sensor-enabled products are becoming as common as the daily life of a driver's vehicle [9].

Metaheuristic algorithms are among the most powerful means of solving optimization problems. They have been applied in several areas, e.g., infrastructure, accounting, healthcare communications, and intelligent traffic control [3]. A recent metaheuristic in the field of evolutionary algorithms, teaching–learning-based optimization (TLBO) [10], is a population-based approach built on the theory of teaching and learning. The TLBO algorithm has several advantages, such as simple operation and smooth programming implementation. Like other evolutionary algorithms, however, TLBO still has drawbacks in the search process, such as low convergence speed, a single-objective focus, and a tendency to collapse into local optima. Although the TLBO algorithm and its variants have obtained outstanding results in single-objective optimization, no TLBO variant has yet been applied to traffic routing and fuel consumption [11]. This paper aims to implement a multi-objective optimization version of TLBO based on the Pareto front (namely MTLBO) to save the vehicle's fuel consumption with complex sensor networks in the region. The parameters of distance and edge-travel gasoline were designed to optimize the fuel consumption of vehicles.


2 Related Work

2.1 Teaching–Learning-Based Optimizer

TLBO (teaching–learning-based optimizer) [10] is a robust optimization algorithm inspired by the teaching–learning cycle of a standard class. In TLBO, the teacher and the learners are modeled in two phases, and the learners' outcomes (grades) are the results of the algorithm. In the teaching phase, the teacher imparts knowledge to the learners. Yet the learners can also get information from other participants of the class during the learning phase. The algorithm's two principal phases are defined as follows.

2.1.1 Teaching Phase

In the teaching phase, the teacher's task is to educate and test the learners by providing grades; this means the teacher tries to improve the class's mean grade. The teacher determines the points (grades) that represent the learners. The teaching factor (TF) influences the change of the class mean. The mean position of the learners is called Mean, and the best solution in the current generation is called Teacher. The set of learners in a class is represented as vectors X = (x_1, x_2, ..., x_N), i = 1, 2, ..., N (N is the population size). A new learner is updated as follows:

X_i^(t+1) = X_i^t + X_diff^t    (1)

where t is the current generation and X_diff^t is a difference term among learner members, which can be expressed as follows:

X_diff^t = rand() × (Teacher − TF × Mean)    (2)

where rand() is a random number in the range [0, 1], TF is the teaching factor, chosen randomly as a probabilistic step, and Mean is the mean position of the learners. Teacher is the best decision vector of the population.

2.1.2 Learning Phase

In this phase, the learners get information from two sources: the instructor and the engaging classmates. A student communicates with the other participants of the class by spontaneously playing games, debating, or social interaction. Let X_k and X_j be two randomly selected members of a class, where k ≠ j and k, j ≤ N. The new learner is updated according to the following mathematical model. If X_k gives a better objective function evaluation than X_j,


X_i^(t+1) = X_i^t + rand() × (X_k^t − X_j^t)    (3)

Otherwise,

X_i^(t+1) = X_i^t + rand() × (X_j^t − X_k^t)    (4)

If the new solution is better than the existing one, the new solution is kept. The pseudo-code and flowchart of the TLBO steps are available in reference [10].
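The two phases translate into a short update loop; the sketch below is one common single-objective TLBO formulation of Eqs. (1)–(4), assuming a minimization objective f, and is not the authors' exact implementation:

```python
import numpy as np

def tlbo_step(X, f, rng):
    """One TLBO generation over population X (rows are learners), minimizing f."""
    N, d = X.shape
    fitness = np.apply_along_axis(f, 1, X)
    teacher = X[fitness.argmin()]            # best learner of the class
    TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
    # Teaching phase, Eqs. (1)-(2)
    X_new = X + rng.random((N, d)) * (teacher - TF * X.mean(axis=0))
    # Learning phase, Eqs. (3)-(4): learn from a random classmate
    for i in range(N):
        j = rng.integers(N)
        while j == i:
            j = rng.integers(N)
        step = X[i] - X[j] if f(X[i]) < f(X[j]) else X[j] - X[i]
        cand = X_new[i] + rng.random(d) * step
        if f(cand) < f(X_new[i]):            # keep the new solution if better
            X_new[i] = cand
    return X_new

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, (40, 2))
for _ in range(50):                          # example: minimize the sphere function
    X = tlbo_step(X, lambda x: float(np.sum(x ** 2)), rng)
```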

2.2 Vehicle Fuel Consumption Optimization

Real data on roads and streets related to traffic, e.g., velocity limits, lengths, lanes, vehicle density, and direction, can be recorded and collected in real time by the grid sensor nodes of a WSN application such as an aided traffic monitoring system [12]. Finding the optimal way from a location to a destination is one of the main functions of vehicle routing, or the direction of a vehicle, in the aided traffic monitoring system. Grid sensor nodes comprising a number of sensors are used to obtain the traffic information in WSN applications for traffic navigation or monitoring in urban areas [13, 14]. Figure 1 shows the grid sensor nodes on the

Fig. 1 Grid sensor nodes on the road of the urban traffic network


road of the urban traffic network. All the local-area intersection nodes are marked for navigation, and a possible solution is a specific permutation sequence of these nodes that forms a candidate navigation route. The Euclidean distance between two points p_i and p_{i+1} is determined as follows:

d(p_i, p_{i+1}) = √[(x_{p_i} − x_{p_{i+1}})² + (y_{p_i} − y_{p_{i+1}})²]    (5)

where (x, y) are the coordinates of a point and d is the distance between locations. The path from a starting point to a target point is divided into segments. Let L be the length of a path consisting of segments {l_1, l_2, ..., l_n}, where n is the number of segments. The length of the path traveled by a vehicle is approximated as follows:

L(p) = Σ_{i=0}^{n−1} d(p_i, p_{i+1})    (6)

where p denotes the points along the vehicle's path. Additionally, the local-area intersection nodes are labeled for navigation, and each permutation of the node series yields a candidate navigation route, i.e., a possible optimization solution. The objectives of the optimization problem are presented in the next section.
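Equations (5) and (6) amount to summing Euclidean segment lengths along a candidate route; a small Python illustration:

```python
import math

def path_length(points):
    """Eqs. (5)-(6): total Euclidean length of a route given the (x, y)
    coordinates of its successive intersection nodes."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

print(path_length([(0, 0), (3, 4), (3, 10)]))  # 5.0 + 6.0 = 11.0
```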

3 MTLBO for the Vehicle Fuel Consumption

In some instances of practical problems, single-objective optimization might not be sufficient to describe systems on complex grid networks related to vehicle fuel consumption. To deal with this limitation of modeling resolution, multi-objective optimization is suggested, and it can achieve remarkable results. A new multi-objective TLBO (namely MTLBO) algorithm based on the Pareto front is therefore proposed in this section for vehicle fuel consumption on the WSN's complex grid of sensor nodes.

3.1 Pareto-Optimal Solution

The multi-objective approach typically decomposes a problem into many single-objective subproblems, often called neighborhood-relationship subproblems. The subproblems are optimized simultaneously using a population-based algorithm, and their solutions are ranked using the Pareto ranking of the optimized subproblems [15]. Neighborhood relations among these subproblems are defined based on the distances between their weight vectors. Using information primarily from the adjacent subproblems yields a selection of optimal


solutions. A Pareto solution is a set of optimal solutions to the subproblems. Consider the domination of a solution vector X = (X_1, X_2, ..., X_n)^T over a vector Y = (Y_1, Y_2, ..., Y_n)^T in optimization (minimization or maximization problems). For a minimization problem, X dominates Y if and only if X_i ≤ Y_i for ∀i ∈ {1, ..., n} and ∃i ∈ {1, ..., n} : X_i < Y_i; that is, no component of X is greater than the corresponding component of Y, and at least one component of X is strictly smaller than that of Y. The weak-dominance relationship is expressed by the formula

X ⪯ Y ⇔ X ≺ Y ∨ X = Y    (7)

Dominance can be defined for maximization problems by replacing the symbol '≺' with the symbol '≻'. A point X* is called a non-dominated solution if there is no solution that dominates it. A multi-objective Pareto front PF is defined as the set of non-dominated solutions:

PF = {X ∈ S | ¬∃ X′ ∈ S : X′ ≺ X}    (8)

where S is the solution set, from which an approximation of the Pareto front can be obtained when efficient methods are used to produce a wide variety of solutions [16]. The Pareto-optimal solution can be used to achieve the multi-objective solutions. The TLBO algorithm is extended by transforming the decision space into the objective function space to figure out the specification of the actual functions. The search space of TLBO's individuals covers the Pareto front space with the related decision function. Let x be the d-dimensional decision vector in the decision space, and let F(x) be the objective function to optimize, a set of member objective functions on the real vector, e.g., F_1, F_2, ..., F_u ∈ Y ∈ R^h, where h is the number of objectives. With constraints, the multi-objective optimization problem for d-dimensional vectors is presented as:

Minimize F(x) = (F_1(x), F_2(x), ..., F_h(x))
subject to x ∈ [x_L, x_U]    (9)

where x_L and x_U are the given lower and upper bounds of the corresponding decision range, respectively. The decision x is a vector {x_1, x_2, ..., x_u} ∈ X ∈ R^d, and F(x) is a vector {F_1, F_2, ..., F_u} ∈ Y ∈ R^h.
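The dominance relation and the front of Eq. (8) can be written directly; a minimal sketch for minimization, with objective vectors as tuples:

```python
def dominates(X, Y):
    """Minimization dominance: X is no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(X, Y)) and any(x < y for x, y in zip(X, Y))

def pareto_front(S):
    """Eq. (8): the non-dominated subset of a set S of objective vectors."""
    return [X for X in S if not any(dominates(Y, X) for Y in S if Y != X)]

print(pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]))  # (4, 4) is dominated
```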

3.2 MTLBO for Vehicle Fuel Saving Consumption

Traffic information obtained from urban-area WSNs is processed and used for navigation to achieve the least possible fuel consumption and pollution [17]. The transport


traffic model within our scheme is represented by the grid network of sensor nodes. The vehicle routing model for candidate navigation directions is based on a graph's node-edge chain. The two considered goals, the least gasoline consumption and the shortest path, are expressed below for optimization. The shortness is defined as minimizing the path length of moving a vehicle from the starting point to the destination in each iteration:

Minimize F1(p) = Σ_{i=0}^{n−1} d_i    (10)

The least use of fuel is related to the edge-travel fuel cost and the congestion weight. The congestion weight is a statistic that can be measured at a standard period, collected from the traffic control network according to the traffic situation. The edge-travel fuel cost can be estimated along the navigation direction based on the current traffic situation:

Minimize F2(p) = Σ_{i=0}^{n−1} ω(d_i) × G(d_i)    (11)

where ω(d_i) is a binary coefficient for segment d_i: it is set to 1 if an edge links the nodes of the segment, and to 0 otherwise. G(d_i) is the gasoline consumed by the vehicle in segment d_i. Depending on the current traffic condition, the data are aggregated from the sensor nodes of the grid network, e.g., velocity, amount of gasoline for the vehicle per unit (liters), and traffic congestion (vehicle density on the segment, expressed as delay time):

G(d_i) = Length_traveling × litre + Delay_time × litre    (12)

MTLBO for vehicle fuel-saving consumption is designed by simultaneously optimizing the objective functions with TLBO to find the Pareto-optimal solution. By sampling from uniform distributions, the individuals are spread as evenly as possible throughout the search space, covering both the decision space and the objective space. The two factors described above are combined into a single objective function for minimization:

Minimize F(x) = v × F1 + (1 − v) × F2   (13)

where F(x) is the objective function for optimizing fuel consumption in urban-area vehicle navigation, and v is the balance weight. The best multi-objective solutions are saved to S as the archive of Pareto-optimal solutions.
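A minimal sketch of how the cost model of Eqs. (10)–(13) could be evaluated for one candidate path; the segment data structure and its field names are illustrative assumptions, not the authors' implementation:

    def fuel_cost(segment):
        # Eq. (12): G(d_i) from traveled length, per-unit fuel rate, and delay
        litre = segment["litre_per_unit"]
        return segment["length"] * litre + segment["delay_time"] * litre

    def evaluate(path, v=0.5):
        # path: list of segments d_0 .. d_{n-1}; omega is 1 only when an
        # edge links the segment's nodes (Eq. (11))
        f1 = sum(seg["length"] for seg in path)                  # Eq. (10)
        f2 = sum(seg["omega"] * fuel_cost(seg) for seg in path)  # Eq. (11)
        return v * f1 + (1 - v) * f2                             # Eq. (13)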


4 Experimental Results

For the simulation, the traffic road system map is set to the grid network of a given city. Vehicle routing can be modeled as a directed graph G = (V, E), where V is a set of intersection nodes and E is a set of roads forming the edges of the grid network [18]. Several experiments on the objective function of Eq. (13) were performed for various grid networks. Figure 2 displays a randomly selected sample of the segmented grid of sensor nodes for vehicle fuel consumption. The red dots indicate where congestion occurs. Figure 3 depicts the GUI simulation for generating the segments of the grid network of sensor nodes. The red points denote the station towers acting as coordinator nodes. Two given moving objects are the source and destination points, which produce the estimated paths to be optimized for vehicle fuel consumption. The data retrieved and processed are shown by comparing Figs. 4, 5 and 6 and Table 1. The gasoline cost is computed according to Eq. (12); the higher the density of red points, the longer the time delay. The parameters of the algorithm are set by initializing the population of individuals as N_p generated segments, with TF (teacher factor), Mean, and Teacher as in [3]. The parameter setting uses the same conditions: the initial population N_p is set to 40 to start the graph, and the maximum number of iterations is set to 300. To evaluate the proposed approach, the experimental results of running the algorithm on the objective function of Eq. (11) are compared with other algorithms such as

Fig. 2 A randomly generated sample of the segmented grid network of sensor nodes for vehicle fuel consumption


Fig. 3 GUI simulation for generating the grid vehicle network and estimation paths of two moving points

Fig. 4 Comparison of the mean of results of the proposed approach with the A* algorithm and Dijkstra for the vehicle gasoline saving consumption

the A* algorithm [19] and Dijkstra [20]. Figure 4 shows the comparison of the average of the best values of the proposed fuel consumption approach with the Dijkstra and A* algorithms. The proposed method also delivers better performance in terms of convergence speed than the Dijkstra and A* algorithm methods.

Fig. 5 Visualization of the grid space (Coordinate X versus Coordinate Y) and the objective space of the two objectives, the least gasoline consumption (1st objective) and the shortest path (2nd objective)

Fig. 6 MTLBO's multi-objective Pareto optimization result for the least fuel consumption and the shortest paths, showing the feasible area, the optimization points, and the Pareto front

Table 1 Comparison of the proposed solution with the Dijkstra and the A* algorithm approaches for time and fuel efficiency evaluation with the single consumption goal

Approaches                Ave. time cost (h)    Ave. fuel cost
Dijkstra                  1.293                 1.31 × 10⁻² L
A* algorithm              1.259                 1.30 × 10⁻² L
The proposed approach     1.228                 1.23 × 10⁻² L


Figure 5 visualizes the space of the grid network and the multiple objectives of the least gasoline consumption and the shortest-path functions. Figure 6 shows the results of the suggested MTLBO for the vehicle fuel-saving issue with the Pareto-optimal solution. Table 1 displays the average least-fuel-consumption paths obtained from the suggested approach compared with the Dijkstra [20] and A* algorithm [19] methods. The proposed approach for the gasoline consumption objective function performs better than the Dijkstra and A* algorithm methods.

5 Conclusion

A new multi-objective teaching–learning-based optimization (MTLBO) was presented in this paper for the problem of vehicle fuel-saving consumption. Because of the complexity of the setting, the problem involves not a single constraint but multiple criteria. The vehicle's fuel consumption is related to the cost of traveling on gasoline and the current traffic conditions, e.g., congestion and road state. MTLBO addresses equally well the two perceived targets, the least fuel usage and the shortest roads. The simulation results were compared with other methods in the literature, such as Dijkstra and the A* algorithm, and show that the proposed approach is capable of providing increased quality in terms of precision.

References

1. Deb, K.: Multi-objective optimization using evolutionary algorithms: an introduction. Multi-objective Evol. Optim. Prod. Des. Manuf. (2011)
2. Nguyen, T.T., Pan, J.S., Dao, T.K.: An improved flower pollination algorithm for optimizing layouts of nodes in wireless sensor network. IEEE Access 7, 75985–75998 (2019). https://doi.org/10.1109/ACCESS.2019.2921721
3. Dao, T.K., Pan, T.S., Nguyen, T.T., Pan, J.S.: Parallel bat algorithm for optimizing makespan in job shop scheduling problems. J. Intell. Manuf. 29, 451–462 (2018). https://doi.org/10.1007/s10845-015-1121-x
4. Van Veller, M.G.P., Kornet, D.J., Zandee, M.: A posteriori and a priori methodologies for testing hypotheses of causal processes in vicariance biogeography. Cladistics (2002). https://doi.org/10.1006/clad.2001.0190
5. Miguel Antonio, L., Coello, C.A.: Coevolutionary multiobjective evolutionary algorithms: survey of the state-of-the-art. IEEE Trans. Evol. Comput. (2018). https://doi.org/10.1109/TEVC.2017.2767023
6. Pillay, N., Qu, R.: Vehicle routing problems. In: Natural Computing Series (2018). https://doi.org/10.1007/978-3-319-96514-7_7
7. Pan, J.S., Kong, L., Sung, T.W., Tsai, P.W., Snášel, V.: α-fraction first strategy for hierarchical model in wireless sensor networks. J. Internet Technol. 19, 1717–1726 (2018). https://doi.org/10.3966/160792642018111906009


8. Pan, J.-S., Nguyen, T.-T., Chu, S.-C., Dao, T.-K., Ngo, T.-G.: Diversity enhanced ion motion optimization for localization in wireless sensor network. J. Inf. Hiding Multimed. Signal Process. 10, 221–229 (2019)
9. Pan, J.S., Kong, L., Sung, T.W., Tsai, P.W., Snášel, V.: A clustering scheme for wireless sensor networks based on genetic algorithm and dominating set. J. Internet Technol. 19, 1111–1118 (2018). https://doi.org/10.3966/160792642018081904014
10. Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Des. (2011). https://doi.org/10.1016/j.cad.2010.12.015
11. Sarzaeim, P., Bozorg-Haddad, O., Chu, X.: Teaching-learning-based optimization (TLBO) algorithm. In: Studies in Computational Intelligence (2018). https://doi.org/10.1007/978-981-10-5221-7_6
12. Choudhary, A., Gokhale, S.: Urban real-world driving traffic emissions during interruption and congestion. Transp. Res. Part D Transp. Environ. 43, 59–70 (2016). https://doi.org/10.1016/j.trd.2015.12.006
13. Dao, T.-K., Pan, T.-S., Nguyen, T.-T., Chu, S.-C.: A compact artificial bee colony optimization for topology control scheme in wireless sensor networks. J. Inf. Hiding Multimed. Signal Process. 06, 297–310
14. Guo, L., Fang, W., Wang, G., Zheng, L.: Intelligent traffic management system based on WSN and RFID. In: CCTAE 2010—2010 International Conference on Computer and Communication Technologies in Agriculture Engineering, pp. 227–230 (2010). https://doi.org/10.1109/CCTAE.2010.5544797
15. Ngatchou, P., Zarei, A., El-Sharkawi, A.: Pareto multi objective optimization. In: Proceedings of 13th International Conference on Intelligent Systems Application to Power Systems (2005)
16. Zavala, G.R., Nebro, A.J., Luna, F., Coello, C.A.: A survey of multi-objective metaheuristics applied to structural optimization (2014). https://doi.org/10.1007/s00158-013-0996-4
17. Dao, T., Yu, J., Nguyen, T., Ngo, T.: A hybrid improved MVO and FNN for identifying collected data failure in cluster heads in WSN. IEEE Access 8, 124311–124322 (2020). https://doi.org/10.1109/ACCESS.2020.3005247
18. Nguyen, T.-T., Pan, J.-S., Chu, S.-C., Roddick, J.F., Dao, T.-K.: Optimization localization in wireless sensor network based on multi-objective firefly algorithm. J. Netw. Intell. 1, 130–138 (2016)
19. Lamiraux, F., Laumond, J.P.: Smooth motion planning for car-like vehicles. IEEE Trans. Robot. Autom. 17, 498–502 (2001). https://doi.org/10.1109/70.954762
20. Chen, Y.Z., Shen, S.F., Chen, T., Yang, R.: Path optimization study for vehicles evacuation based on Dijkstra algorithm. In: Procedia Engineering, pp. 159–165 (2014). https://doi.org/10.1016/j.proeng.2014.04.023

A Tripod-Type Walking Assistance for the Stroke Patient P. Kishore, Anjan Kumar Dash, Akhil Pragallapati, Dheerkesh Mugunthan, Aniirudh Ramesh, and K. Dilip Kumar

Abstract This paper presents a tripod-type balancing support which assists stroke patients during standing and walking. Stroke is the blockage or damage of blood vessels that supply blood to the brain. It results in difficulty in speaking and understanding and in the loss of one-sided mobility. Various walking aids like canes, crutches, and walkers are available on the market, but these walking aids fail to provide proper stability for stroke survivors in their basic mobility. Such patients also always depend on an assisting person for basic mobility, which leads to an unhappy situation for the patient as well as the family. Commercially available exoskeletons for stroke patients are very costly, beyond what an ordinary person can afford. This work aims to provide the necessary stability during walking at a low cost for the stroke patient. Hence, a tripod type of balancing support is fabricated and tested with a stroke patient. From the experiment, it is seen that the tripod walking support provides the necessary assistance required for the stroke patient and motivates the patient positively towards walking. Keywords Tripod · Walking aid · Stroke · Ground reaction force

1 Introduction

Canes, crutches, and walkers are some commonly used walking assistive devices for mobility by disabled and paralytic patients. From the sixth dynasty of the Egyptian civilization, around 2830 BCE, there is a carving at the Hirkouf tomb of a man using a crutch for standing, signifying that such walking aids have been used since ancient times. There are four major types of crutches: armpit, triceps, forearm, and platform. Canes are used to assist forward motion during walking, provide proper balance on the weak side, and reduce the load acting on the weak side. For patients who need bilateral support, walkers are commonly preferred [1].

P. Kishore (B) · A. K. Dash · A. Pragallapati · D. Mugunthan · A. Ramesh SASTRA Deemed To Be University, Thanjavur, Tamil Nadu, India
K. D. Kumar Sanjay Physiotherapy Centre, Thanjavur, Tamil Nadu, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_13


Walkers are of four types: the four-legged walker, the two-wheeled walker, the three-wheeled walker, and the four-wheeled walker. In a legged walker, the patient lifts the walker, places it in the path or direction of walking, and advances the steps. In a two-wheeled walker, there are two wheels at the front and two legs at the back. In three- and four-wheeled walkers, there is a seat where the user can rest while walking. These commonly used assistive devices should be designed ergonomically; otherwise, they can injure the user. For example, the top of a crutch should never be used as a support, because the supported load may act on the nerve fibres and blood vessels and damage them. Research concludes that the use of a crutch increases the user's energy consumption during walking and also reduces walking speed. Crutches alter the user's gait, and there are various types of crutches which serve different types of users; hence, a patient needs to use the appropriate crutches suggested by the doctor to reduce the overall energy consumption during walking. For the rehabilitation of lower-limb patients, proper guidance should be given for walking based on the upper-body strength [2]. Y. G. Jeong et al. studied the performance of three different walking aids (single-point cane, quad cane, and Hemi walker) based on the oxygen consumption during walking of hemiplegic patients [3]. Based on the results from 20 subjects, they conclude that the single-point cane increases walking speed and reduces overall oxygen consumption during walking more than the quad cane and the Hemi walker. Further, this research needs to be extended to the stability of these canes during walking. In the past, experiments were also performed on stroke patients under three different conditions during the rehabilitation period [4]: walking without an aid, walking with a cane, and walking with a quad stick. Electromyography (EMG) sensors were placed on the spinal muscles (erector spinae), the quadriceps and hamstring muscles (gluteus maximus, gluteus medius, vastus lateralis, and semitendinosus), and the calf muscles (gastrocnemius and tibialis anterior). From the experiment, it was seen that the use of a cane enhances muscle activation during walking, especially during the post-stroke rehabilitation period. The use of a cane after stroke also improves the step length and stride length and reduces the walking base and cadence [4]. Further, the patient can move with a circumduction gait by using the cane. Among three different canes, the Nordic stick, the four-point cane, and a simple cane with an ergonomic handgrip, it was found that the simple cane with an ergonomic hand grip improves the walking velocity and distance, and it is suggested during the rehabilitation period for faster recovery [5]. Research on robotic rehabilitation of stroke survivors is ongoing. Initially, exoskeletons were used for strength augmentation of military personnel for lifting weights; nowadays, these exoskeletons are used for rehabilitation of stroke and spinal cord-injured patients. Exoskeletons are anthropometric in design, and the user can wear the robotic suit, which helps augment strength and provides the necessary assistance for walking. The ReWalk is one of the commercially available exoskeletons used for regaining locomotion of SCI patients.
Initially, research was carried out before commercializing the product with 12 SCI patients, and it was seen that all subjects were able to walk without any assistance for 5–10 min continuously, covering up to 5–10 m at a walking speed of 0.25 m/min [6]. Another exoskeleton, named CHU-EXO, is used to


help paralytic patients sit, stand, and walk. It has three actuators, at the hip, knee, and ankle, which are used to produce flexion and extension of the respective joints [7]. Likewise, there are various exoskeletons which give mobility to paralytic patients: the Hybrid Assistive Limb, the Asian Institute of Technology—I (ALEX-I), the ReWalk, the Walking Support Exoskeleton, the Honda Exoskeleton, the Active Leg Exoskeleton, the Sogang University Biomedical Assistive Robot, Rex Rehab, Cyberdyne, the Berkeley Lower Extremity Exoskeleton, and MINDWALKER [8]. Among these exoskeletons, some have been commercialized, and many are in the research stage. These commercially available exoskeletons cost between Rs. 200,000 and Rs. 750,000. Hence, only people who can afford this huge amount are able to buy an exoskeleton; the majority cannot afford one due to the huge cost. Sarah F. Tyson and Louise Rogerson studied the effectiveness of canes, ankle-foot orthoses, and slide walkers, and the results show that walking with a cane improves functional mobility after stroke [9]. For stroke patients with higher severity, the use of canes, crutches, and walkers is not advisable; due to the absence of one-sided mobility, they cannot balance using the hand and leg of their affected side. They have a higher risk of falling. In the case of a walker, they cannot lift the walker with both hands; hence, they need an assistive device that balances their weight, moves along with the patient with minimal physical effort, and is also affordable. In this work, we have made a tripod-type assistive device for stroke patients for their basic mobility. It rests on the armpit of the patient and provides the necessary assistance during walking. This paper addresses the design and functional analysis of the tripod-type assistive device for stroke patients.

2 Normal Gait and Abnormal Gait

Gait is one of the recurring events which happen during walking. It is the period from the initial contact of the left/right foot on the ground until the same foot touches the ground again. Gait consists of two different phases in walking: one is the swing phase, and the other is the stance phase. When the foot touches the ground, it is called the stance phase, and when the foot leaves the ground and advances, it is called the swing phase. The stance phase consists of four events, loading response, mid-stance, terminal stance, and pre-swing, and the swing phase has three events, initial swing, mid-swing, and terminal swing. These seven phases constitute the gait, and they occur repetitively during walking. The leg which advances during walking is called the leading leg, and the leg which is behind is called the trailing leg. The weight of the person is shared equally by both legs and is shifted during walking. The distance between two consecutive placements of the same foot is called the stride length, which comprises two step lengths. In the case of pathological gait, one of the step lengths is zero. For example, if the left foot is placed at the front to make a step and the right leg is placed behind the left leg rather than in front of it, this is called pathological gait, and it is due to various causes like nervous disorders,


muscle weakness, spinal cord injury (SCI), and cerebral palsy. Reduction of step length characteristically reduces the stride length and reduces the overall distance covered per unit time.

3 Force Plate Analysis

The amount of leg usage during walking and standing can be determined easily with the help of a force plate. Force plates are used to measure the static and dynamic ground reaction forces (GRF) during walking. For a normal person, the GRF is distributed equally among both legs. In the case of stroke patients, the distribution of GRF varies between the legs. From the varying GRF, one can conclude the level of leg usage during mobility. We identified a stroke patient, a 65-year-old male affected by stroke in the year 2018. He lost his left-side mobility after the stroke; later, with rigorous physiotherapy training, he is now in the hemiparesis stage. His weight is 58 kg. The force plate used in this research work is a Vernier force plate, which can record forces up to 3500 N. Three force plates are placed in a row, and the subject is made to walk on them with the help of two assisting persons on both sides. The ground reaction force readings are recorded in the LabQuest software. Then, the subject is made to stand on the force plates such that each leg lands on a separate force plate, and the readings are recorded. From the graph shown in Fig. 1a, it is seen that the leg usage on the affected side (left side) is lower than on the unaffected side. The subject is allowed to walk for a period of 120 s on the three force plates, as shown in Fig. 1b. Initially, the subject transfers his entire weight onto the unaffected side (right side), and a maximum value of 590 N is recorded. During walking, the subject transfers an average weight of 384 N onto the unaffected side and an average weight


Fig. 1 a GRF during walking, b walking on the force plate


of 175 N onto the affected side. This plot shows that the subject relies on his right side for stability more than on the left side.
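As an illustration of this analysis, the average GRF per leg over the walk can be computed from the logged force-plate samples; the helper below is a sketch, and the sample array is a placeholder rather than the actual LabQuest export:

    import numpy as np

    def average_grf(samples):
        # samples: 1-D array of force readings (N) from one force plate;
        # keep only stance samples, where the foot actually loads the plate
        loaded = samples[samples > 0]
        return loaded.mean() if loaded.size else 0.0

    right_leg = np.array([590.0, 401.2, 0.0, 377.5])  # placeholder readings (N)
    print(average_grf(right_leg))  # the paper reports ~384 N (right), ~175 N (left)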

4 Tripod Balancing System

4.1 Design of Tripod Balancing Support

The commonly used assistive devices like canes, crutches, and walkers are not suitable for the stroke patient, since stroke survivors need continuous support on the weak side for their basic mobility. Due to their dependency on another person for basic mobility, they cannot even choose their best dress to wear, walk outside to sit in the sunlight in winter, or listen to their favourite music; their plight is deeply distressing. Exoskeletons are very costly, and ordinary people cannot afford them. Hence, there is a need for an assistive device that assists their basic mobility. We therefore made a tripod-type balancing support which rests on the armpit of the stroke patient. It has three wheels and moves along with the patient's intention of moving. This tripod is inspired by the balancing wheels of a child's bicycle. The tripod rests on the armpit and is fixed across the chest of the patient with the help of a flexible strap, such that if the patient moves, the tripod also moves along with the patient. The fabricated tripod is height adjustable from 4 to 6 ft. The design of the tripod support, drawn in ADAMS software, is shown in Fig. 2a, and Fig. 2b shows the fabricated tripod balancing support. Aluminium 6061 alloy is used for the fabrication of the tripod. Various slots have been made in the aluminium hollow pipe so that the height of the tripod can be adjusted according to the user's needs. The


Fig. 2 a Tripod design, b fabricated tripod model


flexible strap shown in Fig. 2b is tied around the patient's chest so that the tripod moves along with the patient during walking. The mass of the fabricated tripod is 1.98 kg.

4.2 Static Structural Analysis of the Tripod

A static structural analysis was carried out on the tripod model, as shown in Fig. 3. The model of the tripod was designed using SolidWorks, in which the structural analysis was also carried out. The material is aluminium, with a density of 2700 kg/m³ and a mass of 1.98 kg. Standard meshing was used, with 18,811 nodes and 9578 elements. A load of 68.67 N, that is, 7 kg, is applied on the top surface area of the tripod, and the output responses, stress and displacement, are analysed. From Fig. 3a, it is seen that the minimum and maximum stress values are 1.333 × 10³ N/m² and 1.017 × 10⁶ N/m², and from Fig. 3b, the displacement values range from about 1 to 1.12 mm.


Fig. 3 a Stress analysis, b displacement analysis



From the result, for the given load of 7 kg, that is, 68.67 N, the obtained stress and displacement values are found to be minimal.

4.3 Testing of Tripod with Patient

To determine the effectiveness of the tripod, it is necessary to determine how much load is transferred by the patient to the tripod. For that, a load cell is placed below the cushioned surface of the tripod support. A load cell is a type of force sensor which converts the load acting on it into an electrical signal proportional to the applied load. The load cell used in this study can measure a maximum load of 5 kg (50 N). The circled region in Fig. 4a shows the top wooden plank, along with cushioning, fastened at one side to the load cell, which has been fixed at its base by fastening it to the other wooden plank on the bottom. The thickness of the planks is adjusted to allow for the bending of the load cell under the applied load. The entire set-up is shown in Fig. 4b. The Arduino IDE software is used to decode the signals from the load cell. The signals from the load cell are weak and need to be amplified. An HX711 load amplifier is used to amplify the signals from the load cell and transmit them to the Arduino. Finally, the amount of load acting on the tripod is measured by the load cell, and the readings are displayed in the serial monitor. Figure 5 shows the connections of the load cell, load amplifier, and Arduino.
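A minimal Python sketch of how the readings printed to the serial monitor could be logged on a PC for later plotting; the port name, baud rate, sampling rate, and one-value-per-line output format are assumptions about the Arduino firmware, not details given in the paper:

    import serial  # pyserial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # assumed port/baud
        readings = []
        for _ in range(1200):  # ~120 s at an assumed 10 samples/s
            line = port.readline().decode(errors="ignore").strip()
            if line:
                readings.append(float(line))  # load value as printed by the firmware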

Fig. 4 a Load cell fixed on the tripod, b tripod-type balancing support with load cell


Fig. 5 Load cell with amplifier and Arduino

along with the patient. The reading from the load cell is plotted as a graph as shown in Fig. 6. The subject is allowed to walk for 120 s, and the readings are recorded from the load cell. During the usage of tripod support, it is seen that the person motivated positively to walk and improves stability in walking. From Fig. 6, it is seen that a

Fig. 6 Values from the load cell


maximum load of 4 kg, that is, a force of 39.24 N, is shared with the tripod balancing support. This force of 39.24 N is shared among the three legs. The stress developed is analysed in Ansys software, and the mathematical model for the stress developed is shown in Eq. (1):

Stress developed S = (F/3)/A = 4F/(3πD)   (1)

where F is the force transferred to the tripod, A is the cross-sectional area of the hollow pipe, and D = D1² − D2² is the difference between the squares of the outer and inner diameters of the aluminium pipe.
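Equation (1) can be evaluated numerically as below; the pipe diameters are placeholders, since the paper does not state them:

    import math

    def tripod_leg_stress(F, D1, D2):
        # Eq. (1): each of the three legs carries F/3 over the hollow pipe's
        # cross-sectional area A = (pi/4) * (D1**2 - D2**2)
        D = D1**2 - D2**2
        return 4 * F / (3 * math.pi * D)

    # e.g., for the measured 39.24 N load with assumed diameters (m):
    print(tripod_leg_stress(39.24, 0.025, 0.021))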

5 Conclusion

From the experiment, we conclude that the readings from the force plate represent the distribution of GRF between the legs. The tripod balancing support was fabricated and tested with the patient, and it is observed that the tripod improves walking stability for stroke patients. It eliminates the assisting person needed for their basic mobility and improves the confidence level of the patient during walking. It costs less than commercially available walking aids. The subject transfers a force of 39.24 N onto the tripod, which signifies that the subject is well supported by the tripod balancing support.

Acknowledgements This work is supported by the Department of Science and Technology (DST) of India (Project No. TDP/BDTD/14/2018) under Biomedical Device and Technology Development (BDTD). We thank DST for providing the support and required funds for successfully carrying out the research.

References

1. Edelstein, J.: 36–Canes, Crutches, and Walkers. Elsevier Inc., Fifth Edit (2019)
2. Rasouli, F., Reed, K.B.: Walking assistance using crutches: a state of the art review. J. Biomech. 98, 109489 (2020). (Elsevier Ltd.)
3. Jeong, Y.G., Jeong, Y.J., Myong, J.P., Koo, J.W.: Which type of cane is the most efficient, based on oxygen consumption and balance capacity, in chronic stroke patients? Gait Posture 41(2), 493–498 (2015)
4. Buurke, J.H., Hermens, H.J., Erren-Wolters, C.V., Nene, A.V.: The effect of walking aids on muscle activation patterns during walking in stroke patients. Gait Posture 22(2), 164–170 (2005)
5. Allet, L., et al.: Effect of different walking aids on walking capacity of patients with poststroke hemiparesis. Arch. Phys. Med. Rehabil. 90(8), 1408–1413 (2009)
6. Esquenazi, A., Talaty, M., Packel, A., Saulino, M.: The ReWalk powered exoskeleton to restore ambulatory function to individuals with thoracic-level motor-complete spinal cord injury. Am. J. Phys. Med. Rehabil. 91(11), 911–921 (2012)


7. Chen, B., et al.: A wearable exoskeleton suit for motion assistance to paralysed patients. J. Orthop. Transl. 11(March), 7–18 (2017)
8. Aliman, N., Ramli, R., Haris, S.M.M.: Design and development of lower limb exoskeletons: a survey. Robot. Auton. Syst. 95, 102–116 (2017). (Elsevier B.V.)
9. Tyson, S.F., Rogerson, L.: Assistive walking devices in nonambulant patients undergoing rehabilitation after stroke: the effects on functional mobility, walking impairments, and patients' opinion. Arch. Phys. Med. Rehabil. 90(3), 475–479 (2009)

Data Routing with Load Balancing Using Ant Colony Optimization Algorithm S. Manjula, P. Manikandan, G. Philip Mathew, and V. Prabanchan

Abstract Wireless sensor networks (WSNs) are a growing field in recent days. Routing is a very important consideration for maximizing the lifetime of the network, and it becomes more complex as the number of nodes in the network increases. Sensor nodes in WSNs are constrained in processing power and battery capacity. Our proposed method addresses these constraints by solving the routing problem with data balancing (load balancing), which reduces data path congestion and decreases the energy consumption of every node. However, as on the Internet, load balancing alone is not sufficient to avoid congestion; an algorithm is needed to provide an optimal solution. The ant colony optimization (ACO) technique is used to find the shortest path, and it is easy to combine other techniques with it, such as data balancing using ACO. The proposed method combines ACO and the load balancing technique into an optimal approach for efficient data routing. The NS2 simulator is used to implement the proposed algorithm. From the analysis of the results, the proposed method produces better results than other existing algorithms. Keywords Routing · ACO · Data balancing

1 Introduction

A WSN consists of a group of sensors for recording, observing, and collecting data for various applications. Communication between sensor nodes plays a primary role in a WSN. Congestion is an important issue in WSN networking, leading to poor performance and low battery lifetime. Load balancing, or data balancing, is a technique to avoid congestion and is used to balance the traffic load [1, 2]. The data balancing technique [3] is also used to allocate work between nodes or other resources to achieve effective resource utilization and to increase network lifetime and throughput. It can be applied when two or multiple paths set up for the Internet connection can be used at the same time. Load balancing alone is not sufficient for maximizing the lifetime of the network. For finding the shortest route path, there is a need for an optimization technique.

S. Manjula (B) · P. Manikandan · G. P. Mathew · V. Prabanchan Department of ECE, Rajalakshmi Institute of Technology, Chennai, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_14


Various optimization algorithms are available for finding the shortest path in various applications [4]. Among them, bio-inspired stochastic algorithms (BIAs) are efficient algorithms based on biological populations that help with network computation. The optimization complexities found in natural phenomena are mimicked by these algorithms to solve computationally similar problems. They basically involve the steps of growth, survival, and reproduction, and these processes can be used in routing domains [5]. Some bio-inspired algorithms are reviewed here to examine how they work in real time. Various optimization algorithms, such as the chaotic bat algorithm (CBA), GWO, ESA, and algorithms based on the behaviour of bacteria, have been studied for solving many problems [6]. There are various algorithms used in WSNs, including distance vector routing, the genetic algorithm, artificial immune systems, ACO, the shuffled frog leaping algorithm, artificial bee colony, the fish swarm algorithm, and particle swarm optimization. The genetic algorithm is mostly applied to power system optimization problems; the ant colony algorithm is used in robotics, load balancing, shortest-path finding, and data mining; artificial bee colony is used for load balancing and scheduling problems; the shuffled frog leaping algorithm is used in image sectors; the fish swarm algorithm is used in geotechnical engineering problems; the firefly algorithm is used for random route paths; and particle swarm optimization is used for Web services [7]. These algorithms effectively find the optimal shortest path. The ACO algorithm is used to find an optimal solution to the shortest-path-tree problem and has produced clear improvements in finding the shortest path. From the analysis, ACO is an efficient technique to apply to shortest-path problems. This paper deals with ACO as an optimal solution for shortest-path problems. ACO provides the best path under different traffic load conditions. The main objective of this work is to reduce the path length. In this paper, the ACO technique and the data balancing technique are combined and applied to find the optimal shortest path under congestion.

2 Ant Colony Optimization and Its Features

Real ants secrete a kind of chemical known as a pheromone, which is used for communication; the receiving ants follow the pheromone [8]. The algorithm proceeds as follows. Initially, the ants all originate in colonies, and they need to find food and bring it back home. In the beginning, the ants move randomly and lay down trails; a trail attracts other ants to move towards its position, and after some time, the better-performing path is selected based on the strength of the pheromone. The pheromone on poorly performing paths evaporates, and eventually the ants follow the optimal path. The algorithm follows the steps of construction, pheromone update, daemon actions, and termination, repeated until termination is complete. Figure 1 shows the process of finding the shortest path.


Fig. 1 Concept of ACO

ACO works well in solving discrete combinatorial optimization problems. It has supported the development of WSNs, enabling minimal paths in the network. Direction selection: now consider the ant's moving method with and without pheromone. There are many direction selection schemes; for example, consider the simple 4-direction selection scheme (Fig. 2).

Fig. 2 Path of the ant sense


When there is no pheromone, the ant just performs a random walk; thus, the chance of moving in any direction is equal. When there are traces of pheromone, the chance that the ant takes the path of the pheromone trail is higher, for example, 40%, and the chance of taking the other directions is correspondingly lower.
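This pheromone-biased choice can be sketched as a weighted random selection; the base weight and bias used below are illustrative, not parameters from the paper:

    import random

    def choose_direction(pheromone):
        # pheromone: dict mapping each of the four directions to its trail
        # strength; with no pheromone anywhere this reduces to a random walk
        dirs = ["north", "south", "east", "west"]
        weights = [1.0 + pheromone.get(d, 0.0) for d in dirs]  # base chance + bias
        return random.choices(dirs, weights=weights, k=1)[0]

    # e.g., choose_direction({"east": 2.0}) picks "east" with probability
    # 1/2 and each of the other directions with probability 1/6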

3 Proposed Work

In ACO, data are transmitted along only one primary shortest path, which increases data congestion and leads to data loss due to the stagnation problem in ACO. Stagnation means data gathering at a particular location. If one particular path is used for a long time, the energy consumption of all nodes on that path increases. To avoid congestion, reduce energy consumption, and find the optimal shortest paths, the ACO algorithm and the data balancing technique are combined into an optimal approach in this work. The advantages of ACO and data balancing are combined specifically to obtain more than one shortest path, among which the data are transmitted to the destination. ACO creates the routing table for the data transmission, and data balancing is applied on top of it. The routing table is updated with two paths, primary and secondary: the primary path is the shortest path, and the secondary path is the next nearly shortest path. When data are passed between nodes, the data traverse these two paths and reach the destination node; this is referred to as data balancing using ACO. In this approach, if one path (the primary path) is lost due to traffic, the routing table is updated again with the next nearest shortest path, so that data can traverse the secondary and tertiary paths. Hence, as long as there is a path to reach the destination node, the data are continuously routed over the same or different balanced paths, so that the congestion and stagnation problems are avoided. By balancing the data between two paths, the energy consumption is also reduced.
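The dual-path routing table described above can be sketched as follows; the class and its alternation policy are an illustration of the idea, not code from the NS2 implementation:

    class BalancedRoute:
        """Keep the two best ACO paths and alternate packets between them."""

        def __init__(self, paths_by_cost):
            # paths_by_cost: paths sorted by ACO cost; keep primary + secondary
            self.paths = paths_by_cost[:2]
            self.turn = 0

        def next_path(self):
            # round-robin between the two shortest paths to balance the load
            path = self.paths[self.turn % len(self.paths)]
            self.turn += 1
            return path

        def drop_path(self, lost, backups):
            # if a path is lost to traffic, promote the next nearest shortest path
            self.paths.remove(lost)
            if backups:
                self.paths.append(backups.pop(0))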

4 Simulation Results

To implement the proposed work, the NS2 simulation software is used. In the wireless node creation, nodes are shown as rounded circles, as in Fig. 3. Figures 4, 5, 6 and 7 show data transmission between the wireless nodes using ACO: the pink-colored node is the source node, the yellow circles depict the neighbor nodes of the source node, and the large circle indicates the range of the wireless data transmission. There are many algorithms used in wireless networks for specific purposes, such as artificial bee colony (ABC), the shuffled frog leaping algorithm, artificial immune systems, the firefly algorithm, and the fish swarm algorithm. The performance in terms of throughput obtained using ACO is compared with other algorithms, namely Destination-Sequenced Distance Vector routing (DSDV), Ad hoc On-demand Distance Vector (AODV), and Dynamic Source Routing (DSR).


Fig. 3 Wireless node creation

Fig. 4 Routing using ACO


Fig. 5 Routing through all the neighboring nodes

Fig. 6 Shortest path


Fig. 7 Balancing data between two nearly shortest paths

Table 1 Performance comparison of ACO

Simulation time      Throughput (Mbps)
                     DSDV      AODV      DSR       ACO
100                  6.24      8.21      8.02      9.58
200                  7.05      8.14      9.13      10.26
300                  9.23      9.68      10.36     11.24
400                  9.1       10.25     11.23     12.33
500                  9.87      10.15     10.37     11.7
600                  9.88      9.76      10.02     11

Table 1 shows the average throughput comparison of ACO with the other algorithms. From the result analysis, ACO performs better than the other algorithms. Figure 8 shows that the energy consumed per node (in Joules) for simple ACO is more than that of data balancing ACO. This proves that data balancing increases the nodes' lifetime by reducing the energy consumption. It also reduces the effect of congestion.

5 Conclusion

ACO with data balancing is proposed for avoiding congestion. The congestion is reduced by balancing the data between the primary and secondary paths. The performance of the algorithm has been studied: the proposed technique produces better throughput and low energy consumption. The results are compared with other algorithms, showing that ACO is the best algorithm for increasing performance. ACO with load balancing


Fig. 8 ACO versus load balancing ACO (LACO)

consumes less energy than plain ACO. The proposed work is thus well suited to maximizing network lifetime.

References

1. Chughtai, O., Badruddin, N., Awang, A., Rehan, M.: Congestion-aware and traffic load balancing scheme for routing in WSNs. Telecommun. Syst. 63(4), 481–504 (2016)
2. Choi, M., Kim, J., Yang, S., Ha, N., Han, K.: Load balancing for efficient routing in wireless sensor networks. In: International Multi-Symposiums on Computer and Computational Sciences, pp. 62–68. Shanghai (2008)
3. Wang, J., Ma, T., Cho, J., Lee, S.: An efficient and load balancing routing algorithm for wireless sensor networks. Comput. Sci. Inf. Syst. 8(4), 991–1007 (2011)
4. Sapundzhi, F.I., Popstoilov, M.S.: Optimization algorithms for finding the shortest paths. Bul. Chem. Commun. 50(B), 115–120 (2018)
5. Dorigo, M., Stützle, T.: Ant colony optimization: overview and recent advances. In: Gendreau, M., Potvin, J.Y. (eds.) Handbook of Metaheuristics. International Series in Operations Research & Management Science, vol. 146. Springer, Boston, MA (2010)
6. Pazhaniraja, N., Paul, P.V., Roja, G., Shanmugapriya, K., Sonali, B.: A study on recent bio-inspired optimization algorithms. In: Fourth International Conference on Signal Processing, Communication and Networking (ICSCN) 2017, Chennai, pp. 1–6
7. Głąbowski, M., Musznicki, B., Nowak, P., Zwierzykowski, P.: An algorithm for finding shortest path tree using ant colony optimization metaheuristic. Image Process. Commun. Challenges 5, 317–326 (2014)
8. Mohajerani, A., Gharavian, D.: An ant colony optimization based routing algorithm for extending network lifetime in wireless sensor networks. Wirel. Netw. 22, 2637–2647 (2016)

Speech Emotion Recognition Using Machine Learning Techniques Sreeja Sasidharan Rajeswari, G. Gopakumar, and Manjusha Nair

Abstract A speech emotion recognition system is a discipline which helps machines to hear our emotions end-to-end. It automatically recognizes human emotions and perceptual states from speech. This work presents a detailed study and analysis of different machine learning algorithms in a speech emotion recognition (SER) system. In prior studies, single databases were experimented on with sequential classifiers to obtain good accuracy, but studies have shown that the strength of an SER system can be further improved by integrating different deep learning classifiers and by combining the databases. Model generalization is difficult with a language-dependent and speaker-dependent database. In this study, in order to generalize the model and enhance the robustness of the SER system, three databases, namely Berlin, SAVEE, and TESS, were combined and used. Different machine learning paradigms like SVM, decision tree, and random forest, and deep learning models like RNN/LSTM, BLSTM (bi-directional LSTM), and CNN/LSTM have been used to demonstrate the classification. The experimental results show that the integration of CNN and LSTM gives higher accuracy (94%) than the other classifiers. The model performs well on all the emotional speech databases used. Keywords Speech emotion recognition · Machine learning · MFCC · Deep learning

1 Introduction

Emotions build human relationships and develop social interactions. Research has proved the significance of emotions in molding human interactions. This led to the introduction of the research field of speech emotion recognition, where emotions are automatically extracted from speech. In previous studies [1–3], other modalities like visual, physiological, and linguistic signals, facial expressions, gestures, and body poses were considered for extracting the emotions.

S. Sasidharan Rajeswari · G. Gopakumar · M. Nair (B) Department of Computer Science and Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_15


Emotions can be extracted from physiological signals with the help of sensors [1]. It is also possible to analyze the cognitive state of an individual from facial expressions; computer systems can recognize this by capturing distinct images of facial expressions. Nowadays, people use more textual tools for communication, so emotions can also be extracted from text and written material [2]. Biological signals like EEG and ECG are other alternatives used for extracting emotions; the spectral features are extracted by splitting these signals [3]. Automated emotion recognition can also be performed by monitoring the change in parameters or electrical pulses in the nervous system. Compared to these biological signals, speech signals are easily available and more economical, so speech is considered a flexible modality for emotion recognition. Machine learning is a branch of AI that helps a machine take decisions with minimal human intervention. It focuses on making predictions using computers and has a wide range of applications [4] in different areas, such as image recognition, speech recognition, virtual personal assistants, online fraud detection, and Web search engines. A speech emotion recognition system is a discipline in which emotions are automatically extracted from the underlying speech signals. The field has matured to the extent where it can contribute much to robotics, spoken language processing, and an abundance of other applications. The revival of neural networks has revolutionized the field of automatic speech recognition. Some of the traditional machine learning classifiers used to design SER systems are the Gaussian mixture model (GMM) [5], support vector machines [6], and the hidden Markov model (HMM) [7]. With the introduction of deep learning, SER systems began learning continuously from the speech input. The first deep learning architecture used for implementing a speech emotion recognition system was RNN/LSTM. Several other deep neural nets, like restricted Boltzmann machines [8] and the Inception model [9], were also used for emotion classification. Feature representations were also successfully learned with convolutional neural networks (CNNs) [10]. Speech is considered the modality for recognizing emotions in this study. In this work, two deep learning models were integrated to improve the classification accuracy. The demonstrated results show that CNN/LSTM has an accuracy of 94% with comparatively low model loss. This result has been compared with other machine learning classifiers, including the sequential models.

2 Methods

The proposed speech emotion recognition (SER) system has been implemented in three major steps. The block diagram of the SER system is shown in Fig. 1. In the first step, to generalize the performance of the model, three databases, namely Berlin, SAVEE, and TESS, were combined and selected. In the second step, appropriate features were extracted from the raw audio input. Finally, reliable classifiers were


Fig. 1 Block diagram of the SER system

designed using suitable machine learning algorithms. The proposed system recognizes five different emotions like neutral, angry, sad, happy, and fear. The categorical approach has been used to classify these emotions.

2.1 Speech Emotional Database

For any SER system, an organized database is important for the performance of the model. Speech emotion databases can be categorized as spontaneous speech, acted speech, and elicited speech. In this work, acted speech databases were used to recognize the emotions from speech. The audio inputs in each database were sampled at different frequencies, and they may be either speaker dependent or language dependent. So, in order to generalize the performance of the models, three different databases were combined and used. The details of these databases are given below. The Berlin database [11] is a German emotional speech database and is widely used in studying SER systems. It contains about 500 utterances spoken by five females and five males acting in different emotions. The Surrey Audio-Visual Expressed Emotion (SAVEE) database [12] contains about 480 British English utterances; high-quality audio-visual equipment was used for recording the data. The Toronto Emotional Speech Set (TESS) database [13] was developed by researchers from the Department of Psychology, Toronto, in 2010; actors from two age groups, young and old, participated in this recording.

2.2 Feature Extraction from Speech

Extracting the features from a speech input signal has been a real challenge for any SER system. The raw audio input was subjected to preprocessing before extracting the features. For identifying the emotions, features like the mel frequency cepstral coefficients (MFCC) were extracted from the speech. MFCC is the most commonly


Fig. 2 MFCC feature extraction

used representation for voice signals because it has the capability to map human perception sensitivity onto the corresponding frequencies on the mel frequency scale [14]. Here, the audio duration was set to 3 s. MFCC feature extraction is shown in Fig. 2. In this study, the first 54 MFCC coefficients were extracted to obtain more features, and the corresponding speech signals were sampled at 44.1 kHz to generalize the audio signals. To build the MFCC features, the minimum, maximum, and average values of the MFCC coefficients were also computed. Apart from MFCC, features like pitch, flatness, chroma, and contrast were also extracted. Eventually, a total of 324 features were extracted and given to the respective models for classification.
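One way the described extraction could be sketched with librosa; the exact composition of the 324 features is the authors', so the aggregation below (mean/min/max of 54 MFCCs plus means of the other descriptors) is an approximation of the pipeline, not its exact code:

    import numpy as np
    import librosa

    def extract_features(path):
        y, sr = librosa.load(path, sr=44100, duration=3.0)   # 3 s at 44.1 kHz
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=54)   # first 54 coefficients
        feats = np.concatenate([
            mfcc.mean(axis=1), mfcc.min(axis=1), mfcc.max(axis=1),  # 162 values
            librosa.feature.spectral_flatness(y=y).mean(axis=1),
            librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
            librosa.feature.spectral_contrast(y=y, sr=sr).mean(axis=1),
        ])
        return feats  # one fixed-length feature vector per utterance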

2.3 Machine Learning Algorithms

In this work, the SER system is implemented and analyzed with traditional machine learning algorithms like SVM, decision tree, and random forest and with neural networks like RNN/LSTM, BLSTM, and CNN/LSTM. Random forest performs both classification and regression tasks by combining multiple decision trees. Related works using random forest include vocal-based emotion recognition [15] using the SAVEE database, which proved better than linear discriminant analysis, and the extraction of effective acoustic features for emotion recognition from a speaker-independent database [16]. The decision tree is a simple machine learning classifier built by recursive partitioning; a divide-and-conquer approach is used to split the data into subsets. SVM is well known as a margin classifier and works well for high-dimensional data; it is recognized as an optimal machine learning classifier for audio input signals. In a related work [17], emotion classification was experimented on three speech emotion databases in different languages, namely Berlin, Japanese, and Thai. SVM gave good accuracy on the Japanese and Thai databases because both record utterances of 1–7 syllables, whereas Berlin records whole sentences; from this study, it was observed that SVM has difficulty recognizing lengthy speech. LSTM is a type of RNN designed for sequential data. Due to the vanishing gradient problem, basic RNNs cannot learn long-term dependencies in data, but LSTM has a memory cell which helps it work on time-series data; the forget gate, update gate, and output gate make it more powerful. Thus, LSTM works better than basic RNN architectures [18]. In a related work [19], an experiment and analysis were performed on the Berlin database. From the results, it was noted that RNN worked well


with a large amount of data but still suffered from very long training times. Bi-directional LSTM is an extension of the LSTM model used for sequential classification: predictions can be taken anywhere in the middle of the sequence, taking information from the entire data, and for a full speech, BLSTM trains on the data in both the forward and backward directions. Another type of deep neural network, the CNN, can be used to recognize emotions from the corresponding speech signals. In a CNN, the connectivity between neurons closely resembles that in the animal visual cortex. Studies have proved that CNNs can learn high-level features of high complexity. They have a pooling layer that turns the joint feature representation into precise information; the pooling layer can efficiently handle small frequency changes in voice signals, which helps CNNs make remarkable contributions in the field of SER systems. In a related study, a CNN was used to identify emotions from the RAVDESS, Berlin, and IEMOCAP databases [20]. In this study, CNN (1D) and LSTM models have been integrated to magnify the performance of the SER system. The input to the 1D CNN is a 324-dimensional vector. The architecture includes three convolutional layers on the front end, followed by an LSTM layer with a dense layer on the output. The CNN model helps in multilevel feature abstraction, and the LSTM model helps in interpretation; in the end, the classification task is done by a dense layer. From the results, it is seen that the combined model gives a better classification result when compared to traditional machine learning and sequential classifiers.
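A sketch of the described CNN(1D)+LSTM arrangement in Keras; the filter counts, kernel sizes, pooling, and LSTM width are assumptions, since the paper specifies only three convolutional layers, an LSTM layer, and a dense output over the five emotions:

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(324, 1)),             # 324 features as a 1-D sequence
        layers.Conv1D(64, 5, activation="relu"),  # three convolutional layers
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, 5, activation="relu"),
        layers.LSTM(64),                          # temporal interpretation
        layers.Dense(5, activation="softmax"),    # five emotion classes
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])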

3 Results and Discussions

The classification results are analyzed and discussed in this section. The proposed SER system used a categorical approach for emotion classification. Figure 3 shows the learning trend of a model trained with the Berlin database and tested with SAVEE and TESS. Due to the low accuracy (43.94%) and poor generalization of this model, there arose a need to combine the databases, and further studies were conducted on the combination of databases to ensure generalization. The raw audio inputs were scaled to the range (−1, 1) before trying the classifiers; the network can learn quickly if the input is normalized to a standard scale. Tenfold cross-validation is the technique used for evaluating the different models. Here, 80% of the input data was taken for training and 20% for testing. The experimental studies are presented in the form of tables, graphs, and confusion matrices. Figure 5 shows the learning trend of the CNN/LSTM model for emotion recognition from audio inputs, comparing the training accuracy and testing accuracy. The demonstrated results show that the CNN/LSTM model yields a good accuracy of 94%. From the confusion matrix given in Table 1, it seems that the model has the least difficulty in classifying the emotion sad, and there is a notable
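The preprocessing and evaluation protocol described above might look like the following with scikit-learn; the data here are random placeholders standing in for the 324-feature matrix:

    import numpy as np
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    X = np.random.randn(100, 324)          # placeholder feature matrix
    y = np.random.randint(0, 5, 100)       # placeholder labels for 5 emotions

    X_scaled = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.2)  # 80/20
    scores = cross_val_score(SVC(), X_tr, y_tr, cv=10)  # tenfold cross-validation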


Fig. 3 Training with Berlin database and testing with SAVEE and TESS

Table 1 Confusion matrix for recognizing emotions with CNN/LSTM

Emotions        Angry   Fear   Happy   Neutral   Sad   Rate (%)
Angry           125     1      6       2         0     94
Fear            4       90     2       1         0     92
Happy           2       3      95      2         0     93
Neutral         1       2      1       101       2     95
Sad             0       1      0       2         111   95
Precision (%)   95      93     91      94        98    97

confusion pair between fear/angry and fear/happy. Figure 4 shows the model accuracy of bi-directional LSTM and RNN/LSTM. The precision measure helps to find the number of samples correctly classified for a particular class. Table 2 gives an analysis of the precision, recall, and F1 measures of the CNN/LSTM model against the other machine learning algorithms used. The demonstrated results show that CNN/LSTM gives an accuracy of 94%. The misclassification rate was also analyzed, and it was found that the CNN/LSTM model has a low misclassification rate, thereby improving the classification accuracy of the model. The above model was compared with the bi-directional long short-term memory model (BLSTM) and LSTM; the accuracy is the same for both of those models (92%). The study was also conducted on machine learning models like the support vector machine (SVM), which exhibits an accuracy of 90%; this was further compared with the decision tree classifier, with an accuracy of 84%, and random forest, with 79%. A comparison

Table 2 Comparison of precision, recall, and F1 measures (in %) of CNN/LSTM with the other models (BLSTM, LSTM, SVM, decision tree, and random forest) for the emotions angry, fear, happy, neutral, and sad



Fig. 4 Model accuracy of bi-directional LSTM and RNN/LSTM

Fig. 5 Learning trend of CNN/LSTM model with an accuracy of 94%

of the classification accuracy is shown in Fig. 6. The model is more generalized with the combination of different databases. It works well in all emotional speech databases irrespective of language and speaker.

4 Conclusion and Future Work

A speech emotion recognition (SER) system has been implemented and experimented with different machine learning techniques like decision tree, random forest, SVM, RNN/LSTM, BLSTM, and CNN/LSTM. It was observed that the combination of CNN and LSTM gives a good classification accuracy (94%) when compared to the other


Fig. 6 Accuracy comparison of models

classifiers. This shows that the strength of a speech emotion recognition system can be enhanced by integrating classifiers and combining databases. The robustness of the SER system can be improved further by using classifiers such as the autoencoder LSTM (encoder–decoder LSTM architecture) and by incorporating regional-language emotional speech databases.


Feature-Based AD Assessment Using ML Siddheshwari Dutt Mishra and Maitreyee Dutta

Abstract Neuroimaging has revolutionized the world of neuroscience. More specifically, the role of functional imaging in the diagnosis of metabolic diseases like Alzheimer's is very significant. In this era of smart intelligence, a machine can efficiently and intelligently analyze a given input and produce valuable output. The objective of this paper is therefore to classify brain images into two broad classes, namely Alzheimer's and normal control, using machine learning classifiers. Furthermore, the study compares the accuracy of classifiers across medical imaging modalities. The dataset consists of MRI, PET, and DTI scans of a subset of participants from the publicly available ADNI dataset. We summarize our results with the findings of the most suitable classifier for each modality.

Keywords Machine learning (ML) · Alzheimer's disease (AD) · Normal control (NC) · MRI · PET · DTI

1 Introduction

Dementia is a syndrome affecting mental cognitive thinking, and a large number of people suffering from dementia are primarily victims of Alzheimer's disease. AD is a slowly progressing ailment that affects memory and mental health; its common symptoms include forgetfulness, memory loss, and behavioral changes [6]. In some cases, AD can prove fatal, so timely diagnosis of this disease is very important [18]. AD can be identified by examining the brain with neuroimaging techniques such as MRI, PET, DTI, and CT scans [29]. These modalities penetrate deep inside the brain and produce anatomical images of brain tissues and their orientation. Proper and timely examination of the disease paves the way for correct medication; hence, in medical science, getting computational results within a fraction of a second matters greatly. Given the established reputation of machine learning, it is viable to intelligently train a machine to correctly predict and classify AD from NC [17, 19].


Ben Ahmed et al. used a visual indexing framework and pattern recognition analysis based on structural MRI data to differentiate AD from NC [1]. The combined information from multiple data sources, such as PET and neuropsychological measure (NM)-based features, can also be used to classify AD. Ding et al. proposed a novel classification framework that jointly selects features extracted from VBM analysis and texture analysis to distinguish between AD and NC [10]. Instead of choosing a specific area, considering the entire MRI image followed by a CNN also gives better results [3]. Payan and Montana used deep learning methods and 3D CNNs to design an algorithm that can predict the AD progression level within the human brain from an MRI scan [22]. Hu et al. extracted gray-matter features of the hippocampus region and applied a support vector machine for classification [16]. Zhu et al. selected features based on optimal weight coefficients and trained two support vector regression models to predict the clinical score of AD assessment, achieving accuracies of 93.7% for MRI and 91.8% for PET images [30].

2 Related Work

A systematic literature review presents an overview of the extensive research being performed on the assessment of Alzheimer's disease using intelligently trained models. Biju et al. extracted decidable features from different modalities and used gray matter (GM) and white matter (WM) as parameters to classify AD and NC; based on the variation in the size of GM and WM, they concluded the stage of AD [5]. Fan et al. also used GM and WM as parameters for AD assessment on the basis of voxel-based intensity and achieved an accuracy of 93% [13]. Using the LDS classifier on the MRI biomarker from AD to MCI with 309 voxel values and a tenfold validation technique, an accuracy of 75.6% was achieved [41]. Affonso et al. used two methods to compare performance: one using a texture descriptor as feature extractor mapped with KNN, NN, SVM, and DT classification techniques, and the other directly with a CNN without preprocessing; the best accuracies were 82.11% by KNN and 93.56% by MLP, respectively [21]. Cheng and Liu proposed a novel classification technique with an accuracy of 91.19% based on a combination of CNN and BGRU to extract features of PET images for AD classification [8]. Dyrba et al. proposed a combination of DTI and MRI for the detection of AD, considering GM, WM, WMD, and EMF as parameters. They applied the WEKA classifiers for evaluating performance and concluded their work with accuracies of 88.5% for the multilayer perceptron and 88.6% for multiple-kernel SVM [12]. The motivation for this work is to achieve an accuracy of more than 93% for AD diagnosis using decidable feature extraction and to present a comparative performance overview of ML classifiers for varied imaging modalities.


3 Methodology

In this study, we have laid emphasis on feature selection and extraction, since proper feature extraction paves the way for a highly efficient and computationally effective model. A training set is formulated using the extracted features, which is then fed as input to build the model and classified using ML classifiers. Each classifier outputs a target variable Y ∈ {0, 1}, where 1 indicates AD and 0 indicates NC. The predicted output is then cross-validated against the testing set using fivefold validation to evaluate classifier performance. The following section describes the dataset preparation and feature extraction.
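As an illustration of this train-and-validate loop, the following scikit-learn sketch uses stand-in random data in place of the extracted texture features, and reads LR as a logistic-regression classifier; the names and parameters here are assumptions for demonstration, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression

# X: (n_samples, n_features) texture-feature matrix; y: 1 = AD, 0 = NC.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # stand-in for the extracted features
y = rng.integers(0, 2, size=200)

# Min-max scaling mirrors the unit-range normalization described in Sect. 3.2.
model = make_pipeline(MinMaxScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print("fivefold accuracy:", scores.mean())
```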

3.1 Dataset Preparation

Data selection is the foremost step in the preparation of a dataset: a humongous amount of data is available nowadays, not all of which is appropriate and valuable, so it is important to select viable data pertaining to the problem in order to build an efficient model. Hence, images featuring the hippocampus area of the brain are chosen without segmentation. MRI, PET, and DTI scans of a subset of participants from the publicly available ADNI dataset are considered. The dataset consists of 3060 images, with 1020 images each from MRI, PET, and DTI, and is split in the ratio of 80:20 to form the training and testing sets, respectively. The training and testing datasets are divided into the two labeled classes AD and NC, and comprise 816 and 204 images per modality, respectively.

3.1.1 MRI Scan

Magnetic resonance imaging is a brain imaging technique that uses powerful radio waves and a strong magnetic field to produce anatomical images of internal human organs and tissues. The MRI scan finds wide applicability in the field of neuroimaging owing to its high contrast, excellent spatial resolution, and high availability [26] (Fig. 1).

3.1.2 PET Scan

Positron emission tomography (PET) is another significant neuroimaging technique. PET uses data gathered by sensors that visualize the radioactivity of compounds in the brain, indicating the proper functioning and balance of brain tissues. The PET scan finds wide applicability in AD diagnosis owing to its capability to easily detect changes in brain metabolism [8, 9] (Fig. 2).


Fig. 1 MRI scans

3.1.3 DTI Scans

DT-MRI, also known as DTI, is the acronym for diffusion tensor magnetic resonance imaging. DTI is analogous to standard MRI but differs in its peculiar ability to characterize the orientational properties of the diffusion process of water molecules [2]. It measures the restricted diffusion of water molecules in tissues, which enables radiologists to isolate regions of improper functioning within the brain [24] (Fig. 3).

3.2 Feature Extraction

Following dataset preparation and preprocessing, the next significant task is feature extraction and selection [15]. Decidable feature selection is crucial in ML for fast computation. Our feature vector consists of ten texture features, namely mean, median, mode, standard deviation, GLCM features, LBP energy, and LBP entropy. Since we consider the entire image without segmentation, a Gabor filter is applied to the texture features for better spatial localization. The feature vector has been normalized to the unit range for better performance measures.
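A hedged sketch of this kind of texture-feature computation, using scikit-image, is shown below; the Gabor frequency, GLCM properties, and LBP settings are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.filters import gabor

def texture_features(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 grayscale slice; returns a small texture-feature vector."""
    gabor_real, _ = gabor(img, frequency=0.6)     # Gabor filtering step
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
    lbp = local_binary_pattern(img, P=8, R=1.0, method="uniform")
    counts, _ = np.histogram(lbp, bins=np.arange(11))
    p = counts / counts.sum()                     # normalized LBP histogram
    lbp_energy = np.sum(p ** 2)
    lbp_entropy = -np.sum(p * np.log2(p + 1e-12))
    return np.array([img.mean(), np.median(img), img.std(), gabor_real.mean(),
                     contrast, homogeneity, lbp_energy, lbp_entropy])

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in slice
print(texture_features(img))
```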


Fig. 2 PET scans

4 Results

Performance measures such as precision, recall, and accuracy of the machine learning classifiers across the imaging modalities MRI, PET, and DTI are given in Tables 1, 2, and 3, respectively.

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

where TP = true positive, FP = false positive, and FN = false negative. The following ML classifiers have been considered for evaluation: linear regression (LR), K-nearest neighbor (KNN), CART, naive Bayes (NB), support vector machine (SVM), and multilayer perceptron (MLP). From Table 1, we infer that LR and NB give satisfactory results, each achieving an accuracy of 98.03%. From Table 2, we deduce that the LR classifier gives an accuracy of 98.00%. Similarly, from Table 3, we conclude that MLP performs well with an accuracy of 99.01%. Unlike past research, where SVM has proved to be the most efficient classifier, our approach presents a different insight.


Fig. 3 DTI scans

Table 1 Performance measure of MRI dataset

Classifier   Precision   Recall   Accuracy (%)
LR           0.99        0.97     98.03
KNN          0.77        0.84     79.41
CART         0.98        0.97     97.54
NB           0.98        0.99     98.03
SVM          0.96        0.83     89.70
MLP          0.50        1.00     50.00

Plant et al. used feature selection algorithms to distinguish between AD and NC using a linear SVM classifier and achieved an accuracy of up to 92% [23]. Xu et al. applied a structural least squares twin support vector machine (S-LSTSVM) to classify AD and NC, and the achieved accuracy was 92.10% [28]. Sheng et al. classified images by applying SVM and obtained nearly 91.7% accuracy [25]. Using our approach, the classifiers LR and NB give better performance in comparison with SVM, and switching to an alternate classifier such as NB or MLP can also give satisfactory performance.

Table 2 Performance measure of PET dataset

Classifier   Precision   Recall   Accuracy (%)
LR           0.96        1.00     98.00
KNN          0.70        0.76     72.05
CART         0.38        0.08     47.3
NB           0.89        0.98     92.89
SVM          0.79        0.78     78.92
MLP          0.97        0.99     97.79

Table 3 Performance measure of DTI dataset

Classifier   Precision   Recall   Accuracy (%)
LR           0.51        1.00     51.00
KNN          0.62        0.74     64.21
CART         1.00        0.99     99.26
NB           0.99        0.86     92.40
SVM          0.62        0.90     67.15
MLP          0.98        1.00     99.01

5 Conclusion

From Tables 1, 2, and 3, we conclude that linear regression (LR) gives an excellent performance on MRI and PET scans, whereas on DTI scans, the multilayer perceptron (MLP) and CART give better results. In terms of inter-modality comparison, MRI outperformed the other two for all classifiers except MLP. We also conclude that naive Bayes (NB) gives promising results across all three modalities, with an accuracy above 92% in each case.

References

1. Ahmed, O.B., et al.: Alzheimer's disease diagnosis on structural MR images using circular harmonic functions descriptors on hippocampus and posterior cingulate cortex. Comput. Med. Imag. Graph. 44, 13–25 (2015)
2. Alexander, A.L., et al.: Diffusion tensor imaging of the brain. Neurotherapeutics 4(3), 316–329 (2007)
3. Awate, G., et al.: Detection of Alzheimer's disease from MRI using convolutional neural network with TensorFlow (2018). arXiv preprint arXiv:1806.10170
4. Bhatkoti, P., Paul, M.: Early diagnosis of Alzheimer's disease: a multi-class deep learning framework with modified k-sparse autoencoder classification. In: 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), pp. 1–5. IEEE (2016)
5. Biju, K.S., et al.: Alzheimer's detection based on segmentation of MRI image. Procedia Comput. Sci. 115, 474–481 (2017)


6. Bondi, M.W., Edmonds, E.C., Salmon, D.P.: Alzheimer's disease: past, present, and future. J. Int. Neuropsychol. Soc. 23(9–10), 818 (2017)
7. Cheng, D., Liu, M.: CNNs based multi-modality classification for AD diagnosis. In: 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1–5. IEEE (2017)
8. Cheng, D., Liu, M.: Combining convolutional and recurrent neural networks for Alzheimer's disease diagnosis using PET images. In: 2017 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–5. IEEE (2017)
9. Ding, Y., et al.: A deep learning model to predict a diagnosis of Alzheimer disease by using 18F-FDG PET of the brain. Radiology 290(2), 456–464 (2019)
10. Yi, D., et al.: Classification of Alzheimer's disease based on the combination of morphometric feature and texture feature. In: 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 409–412. IEEE (2015)
11. Dolph, C.V., et al.: Deep learning of texture and structural features for multiclass Alzheimer's disease classification. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2259–2266. IEEE (2017)
12. Dyrba, M., et al.: Combining DTI and MRI for the automated detection of Alzheimer's disease using a large European multicenter dataset. In: International Workshop on Multimodal Brain Image Analysis, pp. 18–28. Springer (2012)
13. Fan, Y., et al.: Structural and functional biomarkers of prodromal Alzheimer's disease: a high-dimensional pattern classification study. Neuroimage 41(2), 277–285 (2008)
14. Farooq, A., et al.: A deep CNN based multi-class classification of Alzheimer's disease using MRI. In: 2017 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–6. IEEE (2017)
15. Guyon, I., Elisseeff, A.: An introduction to feature extraction. In: Feature Extraction, pp. 1–25. Springer (2006)
16. Hu, K., et al.: Multi-scale features extraction from baseline structure MRI for MCI patient classification and AD early diagnosis. Neurocomputing 175, 132–145 (2016)
17. Islam, J., Zhang, Y.: A novel deep learning based multi-class classification method for Alzheimer's disease detection using brain MRI data. In: International Conference on Brain Informatics, pp. 213–222. Springer (2017)
18. Islam, J., Zhang, Y.: Early diagnosis of Alzheimer's disease: a neuroimaging study with deep learning architectures. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1881–1883 (2018)
19. Khajehnejad, M., Saatlou, F.H., Mohammadzade, H.: Alzheimer's disease early diagnosis using manifold-based semi-supervised learning. Brain Sci. 7(8), 109 (2017)
20. Li, F., Cheng, D., Liu, M.: Alzheimer's disease classification based on combination of multi-model convolutional networks. In: 2017 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–5. IEEE (2017)
21. Moradi, E., et al.: Machine learning framework for early MRI-based Alzheimer's conversion prediction in MCI subjects. Neuroimage 104, 398–412 (2015)
22. Payan, A., Montana, G.: Predicting Alzheimer's disease: a neuroimaging study with 3D convolutional neural networks (2015). arXiv preprint arXiv:1502.02506
23. Plant, C., et al.: Automated detection of brain atrophy patterns based on MRI for the prediction of Alzheimer's disease. Neuroimage 50(1), 162–174 (2010)
24. Shaikh, S., Kumar, A., Bansal, A., et al.: Diffusion tensor imaging: an overview. Neurol. India 66(6), 1603 (2018)
25. Sheng, J., et al.: A novel joint HCPMMP method for automatically classifying Alzheimer's and different stage MCI patients. Behav. Brain Res. 365, 210–221 (2019)
26. Suk, H.-I., et al.: Deep sparse multi-task learning for feature selection in Alzheimer's disease diagnosis. Brain Struct. Funct. 221(5), 2569–2587 (2016)
27. Wang, S.-H., et al.: Classification of Alzheimer's disease based on eight-layer convolutional neural network with leaky rectified linear unit and max pooling. J. Med. Syst. 42(5), 85 (2018)


28. Xu, Y., et al.: Structural least square twin support vector machine for classification. Appl. Intell. 42(3), 527–536 (2015)
29. Zheng, C., et al.: Automated identification of dementia using medical imaging: a survey from a pattern classification perspective. Brain Inf. 3(1), 17–27 (2016)
30. Zhu, X., Suk, H.-I., Shen, D.: A novel multi-relation regularization method for regression and classification in AD diagnosis. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 401–408. Springer (2014)

Show-Based Logical Profound Learning Demonstrates Utilizing ECM Fuzzy Deduction Rules in DDoS Assaults for WLAN 802.11 D. Sudaroli Vijayakumar and Sannasi Ganapathy

Abstract One term that is making the rounds in every sector is "data". Massive data generation has contributed to technological advancement and intelligent decision-making, and every organization depends on data for making crucial decisions about its business. If data is stolen, an organization or a person can be made to act according to the attacker's wishes. Therefore, securing the computing infrastructure of an organization gains a great deal of importance. Several levels of security measures exist for organizations, but compromises in the form of intrusions still prevail, and if the communication medium is wireless, the rate of intrusion faults is high regardless of the layered approach to security. Every organization requires an intrusion detection system that can fully secure its computing environment. Such an intrusion detection system becomes possible when the system can learn like a human and make decisions like people do; hence, deep learning and machine learning approaches can greatly help in constructing a stronger intrusion detection system. This paper presents a novel framework, LASSO-ECM, using DENFIS for predicting DoS attacks more precisely. The approach is verified using AWID (the Aegean Wi-Fi intrusion dataset) with a prediction accuracy of 99%, serving as a productive predictive model for recognizing DoS attacks in WLAN. Keywords Denial of service attacks (DoS) · Least absolute shrinkage and selection operator (LASSO) · Artificial neural network (ANN) · Wireless intrusion detection system (WIDS)


1 Introduction

The advancement of networking is at its peak, aiming to interconnect heterogeneous systems. This era of interconnection provides easy accessibility, mobility, transparency, and adaptability. With its numerous benefits, the wireless network WLAN 802.11 has become fundamental to upcoming technologies. This interconnection of diverse devices undoubtedly produces not only huge data but also more sensitive data, and constant attempts, termed cyberattacks, exist from numerous sources to get hold of that sensitive data. Advancements in technology contribute to the success of cyberattacks [1], demanding a far more dependable intrusion detection system. Reliability in wireless intrusion detection is possible only if the system can take intelligent decisions on the increasing number of unknown attacks. Network analysis data has grown massively owing to increased connectivity among devices, distributed computing, and so on. If network analysis is to identify threats effectively, the system should have complete knowledge of the different types of attacks and be in a position to recognize hidden patterns within huge data. The process of shielding an organization's computing infrastructure holds ever greater value, and this begins with the inclusion of intelligent intrusion detection systems that can protect integrity and confidentiality. Intelligence in intrusion detection becomes possible when the system correctly predicts anomalies, and the real battle of the modern-day intrusion detection system is the response against adversarial attacks. The WIDS often fails to address two cases of attacks:

(i) False positives: traffic that is not an attack is mistaken for an attack and never enters the network.
(ii) False negatives: a real attack is missed and permitted inside the network.

Among the two cases, fooling the WIDS in the false negative sense should never happen if the system claims to be intelligent. This work attempts to keep false negatives as low as possible, so that the intelligence within the wireless intrusion detection system is good enough. Wireless networks are undermined either by denying services or in terms of granting permissions; the major classes of attack are the denial of service (DoS) attack, the remote to local (R2L) attack, and the user to root (U2R) attack. The current intrusion detection system compares previously recorded attack patterns with a new pattern and makes a decision if it recognizes any deviation. This process can make better decisions if the detection system is modeled to achieve higher accuracy for all classes of attacks. As a start toward building an intelligent intrusion detection system, the first major category of attacks, DoS and DDoS attacks, is addressed employing a combination of regression and deep learning approaches; this framework continuously reduces features while retaining good prediction accuracy.


DoS attacks are an attempt to overburden the system with unwanted traffic. Any kind of attack that tries to slow down the network, so that the efficiency of the network drops to a minimum, comes under the umbrella of denial of service [2]. This attack is common, straightforward, and serves as a starting point for many further attacks [3]. A DoS attack on a WLAN can be mounted very easily, as the transmission happens over open frequencies, and the impact in terms of financial losses for an organization is huge [4, 5] with DoS-related attacks. The model must therefore make better predictions for adversarial attacks. The adoption of the Aegean Wi-Fi intrusion dataset (AWID), which consists of realistic traces of both normal and intrusive 802.11 traffic, makes the approach more practical. The AWID dataset has yielded noteworthy results in various research efforts on wireless technologies, as the data is extracted from real WEP-protected 802.11 networks; since the traces are real, the machine learning model can make better predictions if adopted in a real network. The contribution spans several dimensions, starting with an understanding of the features essential to term traffic an attack, which will work the same way in the real world. The remainder of this work is organized as follows: the next section examines the 802.11 security design and the areas of vulnerability; the following section performs a detailed analysis of various related works on building an intelligent intrusion detection system; Sect. 4 accounts for the features essential for classifying traffic as a DoS attack with the proposed LASSO-ECM framework; in Sect. 5, the study results on the proposed framework are given; conclusions and future work are drawn in the last section.

2 WIDS Design—An Attacker's View

The fundamental requirement of any security system is to supply services that achieve integrity, confidentiality, and availability. A system can preserve security only if it has mechanisms and methods to enact the CIA triad, and this preservation is common irrespective of the type of network, wired or wireless. The primary compromise of any security system comes in the form of attacks aiming to disturb confidentiality, integrity, or availability. Therefore, the network architecture incorporates an additional security component that can identify threats accurately and take responsive actions appropriately; the device performing this functionality is generally termed an intrusion detection system. The functionality of the intrusion detection system is again a layered approach, which clarifies its use along with the points at which its functionality can be compromised. Although the functionality of an IDS is the same in wired and wireless networks, security and integrity demand greater significance in wireless networks, as the transmission medium is open air. The essential function of a WIDS is the truthful identification of threats. The job looks very straightforward; however, classifying traffic as an attack requires a complete understanding of the system, so understanding the layered functions of the WIDS is crucial.


A typical wireless network comprises two components:

(i) Station (STA):

Any mobile device having WLAN 802.11 capability that can be part of the WLAN; this may be a laptop, mobile phone, PDA, etc.

(ii) Access points (AP):

These are controlling units for all the STAs. The generic job of APs is to connect the STAs to other types of networks, such as wired networks and the Web. These two components are essential for communication; however, communication in wireless networks is also possible without the use of access points. That form of communication is termed peer-to-peer communication, in which the STAs communicate directly with each other without a controller. The wireless intrusion detection system can be deployed centrally or in a distributed way, depending on factors such as size, cost, etc. The generic form of a wireless intrusion detection system essentially comprises sensors that capture all kinds of wireless network traffic and group the traffic based on two conditions. The first condition imposed on the collected traffic is checking it over a duration: the traffic is observed for 2 s to identify any changes in network behavior. The second condition is to analyze the source from which the traffic was generated. After these conformances, the traffic collected by the sensor is transmitted to the detection module, which helps find any intrusive behavior within the collected traffic; our work essentially operates at this module of the intrusion architecture. If the detection module recognizes any intrusive conduct, the information is handed over to the countermeasure module, which takes suitable actions against these kinds of attacks. This architecture is presented in the diagram below (Fig. 1).
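As a toy illustration of the two grouping conditions applied by the sensor, assuming captured frames are represented as (timestamp, source, frame) tuples; the representation and window length handling here are assumptions for demonstration:

```python
from collections import defaultdict

def group_traffic(packets, window_s: float = 2.0):
    """Bucket captured frames by (source, 2-second window): the two
    conditions the sensor applies before forwarding traffic for detection."""
    buckets = defaultdict(list)
    for ts, src, frame in packets:           # (timestamp, source MAC, payload)
        buckets[(src, int(ts // window_s))].append(frame)
    return buckets

capture = [(0.1, "aa:bb", "beacon"), (1.9, "aa:bb", "probe-req"),
           (2.5, "cc:dd", "deauth")]
for key, frames in group_traffic(capture).items():
    print(key, frames)
```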

3 DoS Attacks in WLAN 802.11

Denial of service attacks and distributed denial of service attacks are the most common form of security attack in wired networks. Wireless networks are also vulnerable to DoS attacks, and the kind of impact that can be caused in wireless networks is the complete loss of the network itself. In wireless networks we cannot physically secure the network, so the attack may originate inside the network or from outside. Wireless networks are generally secured using a WPA variant [6] that has known issues and is susceptible to vulnerabilities. Every wireless transmission comprises data frames, control frames, and management frames; the protection offered by WEP and TKIP exists only for the data and control frames. Consequently, the management frame information is transmitted unencrypted, paving the way for attacks.


Fig. 1 WIDS architecture

In wireless networks, it is thus the management frames that are vulnerable to DoS attacks. Achieving better quality of service inside the network demands effective ways to recognize and recover from these attacks, and this must be managed effectively by the intrusion detection system. To equip an intrusion detection system with knowledge of the various kinds of DoS attacks, a crisp understanding of their different forms is helpful. In general, denial of service means an attacker picks a target and overwhelms it with numerous requests, making the target unavailable to serve legitimate requests, which in turn brings the network down. The various forms of denial of service attacks are as follows:

3.1 Application Layer DoS Attacks

This variant of DoS attack is commonly found in both wired and wireless networks. The most common terminology used to describe this kind of attack is the HTTP flood. As the name suggests, the attacker tries to flood an application with numerous HTTP requests, which in turn exhausts the server's capacity. This can be done by exploiting the three-way handshake mechanism: the attacker sends a SYN packet, which is acknowledged by the target system with a SYN-ACK; with the handshake complete, the attacker issues HTTP GET requests for a common application on the target system. This kind of attack is very harmful in wireless networks because it imposes a high computational load, resulting in complete unavailability of the application.


3.2 Transport Layer DoS Attacks

These kinds of attacks are considerably harder to recognize, as IP spoofing is used and the attacker's aim is to take control of the operating system of the victim machine. Despite the three-way handshake mechanism of TCP, half-open connections exist, and these consume all the resources allocated for setting up connections. This allows attackers to send large volumes of SYN packets to the victim; if these packets originate from a spoofed source address, no acknowledgment or reset for those connections will ever be sent, and the victim is thus flooded with TCP SYN packets.

3.3 Network Layer DoS Attacks

This kind of attack begins with the addition of clients to the wireless network. Network layer DoS mostly aims to attack the system by sending a huge amount of data into the wireless network. The ICMP flood is the most common form of DoS attack at the network layer; it can consume the entire bandwidth of the network, denying access to legitimate clients.

3.4 Media Access Control (MAC) Attacks

The attacker spoofs the MAC address of the access point and transmits packets to the entire network. Wireless networks are prone to the spoofing of management frames, as they are transmitted in unencrypted form. The common forms of MAC layer attacks are the authentication/association flood attacks.

3.5 Physical Layer DoS Attacks

The strategy of physically jamming the wireless device is termed a physical layer attack. This is highly effective in wireless networks, as the transmission medium is open air, and an attacker with a high-gain antenna and a wireless client card can bend the physical medium to his favor. The AWID reduced dataset comprises 48,484 instances of flooding attack traffic, and the following attacks within it are classified as DoS attacks (Table 1).

Table 1 Instances of DoS in AWID dataset

S. No.   Attack name                               Frame
1.       Deauthentication attack                   Management frame
2.       Disassociation attack                     Management frame
3.       Deauthentication broadcast attack         Management frame
4.       Disassociation broadcast attack           Management frame
5.       Block ACK flood                           Management frame
6.       Authentication request flooding attack    Management frame
7.       Fake power saving attack                  Null data frame
8.       RTS flooding attack                       Management frame
9.       Beacon flooding attack                    Management frame
10.      Probe request flooding attack             Management frame
11.      Probe response flooding attack            Management frame

These kinds of DoS attacks are prevalent in wireless networks, and we require a robust mechanism through which the wireless intrusion detection system can learn the above attacks and, using that knowledge, take better decisions should such attacks recur. This may yield a better intrusion detection system, one that can decide like a human.

4 Related Work

The objective of making a self-learning wireless intrusion detection system is achievable only if the system can grasp ever newer kinds of attacks. Supervised learning, semi-supervised learning, and deep learning strategies are required to form such a system. Even though these strategies have made critical progress in self-driven systems employing natural language processing, speech recognition, etc., commercial use of these techniques to better anticipate attacks is still at a preliminary stage [7]. Another critical factor that must be considered is that much of the work on attack classification depends on the KDD Cup dataset, a benchmark dataset for works related to intrusion detection systems. An outstanding survey of the usability of machine learning-based approaches on the KDD dataset is presented in [8]. Through this study, some of the commonly utilized machine learning algorithms on this dataset are identified as decision trees and support vector machines. This survey reported the accuracy rates achieved using these algorithms and can be considered a baseline for the direction of work. The accuracy obtained from the designed models heavily depends on the features considered, and the work in [9] emphasizes the significance of feature analysis on the KDD Cup dataset. The significance of features for classification is analyzed there and expressed as information gain.


The higher the information gain value, the more significance a feature shows in classification. Deep learning-based traffic classification holds good in terms of classification accuracy, and the work in [10] proposed a framework called Deep Packet that included deep learning algorithms to obtain the features by itself. This framework utilizes an SAE and a one-dimensional CNN for handling application identification and network analysis. The work was carried out using the ISCX VPN–non-VPN traffic dataset and holds significant results demonstrating that deep learning suits traffic classification well; this suggests that deep learning, properly applied, can give better results for attack classification too. The combined use of fuzzy and deep neural networks can enhance attack classification accuracy, as proposed in [11]. However, combining fuzzy and deep neural approaches may increase the computational complexity, and one way to reduce such complexity is to use a clustering technique: the clustering procedure serves as a preprocessing step that has a direct effect on computational complexity. For real-time operation, computational time is very important, and this brought the processing time down from 1.8 to 2 s, with a classification accuracy of 87%. On KDD Cup 99, another striking work using deep learning is presented in [12]. This work proposes the DoS_WGAN architecture, which employs gradient penalty values and can automatically synthesize the eigenvector; the trained GAN is evaluated using information entropy and Euclidean distance. Five different kinds of attacks are considered, and it has been demonstrated that deep learning outperforms signature-based detection without the use of payload information [13]. CNN, RNN, and SRNN models are applied on the MAWI dataset, and their performance is measured against Snort: Snort needed very little processing time, while the RNN yielded better performance accuracy. One of the major issues with this technique is the absence of validation of the model in real time. Deep learning models are commonly used for recognizing attacks; however, the suitability of available deep learning frameworks for classification is explored in [13], where three well-known frameworks, Theano, TensorFlow, and fast.ai, are applied to the dataset and their performance in validating the attacks is examined (Table 2). Efficient intrusion detection not only aims for better accuracy values; the false alarm rate should also be negligible. The work in [15] proposes a DeeRaI network that addresses the issues related to convergence, making sure that the constructed neural network never gets stuck in local minima. Most works concentrate on deep learning-based approaches and achieve a good accuracy rate; however, open issues prevail in terms of false alarm rates and usability in real networks.


Table 2 Recent related works

References   Dataset          Approach           Problem addressed        Algorithm
[11]         KDD Cup 99       Machine learning   Attack classification    SVM, random forest
[12]         KDD Cup 99       Machine learning   Feature reduction        SVM
[16]         ISCX VPN         Deep learning      Traffic classification   CNN
[17]         CICIDS2017       Deep fuzzy         Attack classification    C-means
[18]         KDD Cup 99       GAN                Attack classification    CNN
[19]         MAWI             GAN                Attack classification    CNN, RNN, SRNN
[20]         CE-CIC-IDS2018   Deep learning      Attack classification    Tensor, Theano (framework)
[21]         NSL-KDD          Deep learning      Attack classification    CUI

5 Proposed System Using LASSO, ECM, and DENFIS

Today's intrusion detection systems demand smarter decision-making with a very low false alarm rate. The proposed adaptive neuro-fuzzy inference framework incorporates a LASSO-based strategy for reducing the features of the AWID dataset. LASSO is flexible enough to accept any learning algorithm and perform the classification. On top of LASSO, a deep neural network is built, and the algorithm adopted for clustering the DoS attack knowledge is ECM. The learning algorithm learns from the training set as well as from the fuzzy knowledge base; the classification of the DoS attacks happens based on the training set and the inference rules, making the neural network take smart decisions. Since the classification depends on fuzzy rules, it behaves more like a human, with a considerable reduction in the number of false alarms produced.

5.1 LASSO-Based Feature Reduction

The automatic learning of a machine can be made simple if the number of features considered for learning is small. The raw AWID reduced training dataset, comprising 155 features, can yield better efficiency if the number of features is decreased. In this setting of feature reduction, the considered dataset has been preprocessed to identify zero-variance features, missing values, and the Pearson coefficient [16]. The least absolute shrinkage and selection operator (LASSO) uses L1 regularization, which can serve as an evolving model. LASSO-based reduction is particularly adopted in our strategy because LASSO has the potential of holding any learning algorithm on top of it, and the feature reduction can happen depending upon the type of traffic (Fig. 2).


Fig. 2 Proposed framework

The strategy of building an intelligent intrusion detection system does not end with the classification of one attack; this feature reduction should be usable in classifying the other attacks as well, so LASSO-based reduction can serve as a standard reduction strategy for any type of attack taken into consideration. The process of feature reduction was carried out through a sequence of steps in which all features with zero variance were eliminated by a simple search-and-replace procedure. Highly correlated values within the dataset were identified using the Pearson correlation coefficient, and LASSO performance was assessed using the Gini score.


All the features above the Gini threshold value were discarded. This resulted in 68 features when coupled with a Bayes classifier; some of the results obtained after this feature reduction follow in the results section. To construct an intelligent intrusion detection system, the isolation of important features plays a major role, and this careful choice of features in our framework is carried out through LASSO, since the set may be further reduced based on the learning model used for classification. A detailed analysis of this model is presented in the work at [16].
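A minimal sketch of such a reduction pipeline in scikit-learn follows. The correlation cutoff, the LASSO alpha, and the use of non-zero L1 coefficients as a stand-in for the Gini-score thresholding are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import Lasso

# X: AWID feature frame (155 columns in the paper), y: 0 = normal, 1 = DoS.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 20)),
                 columns=[f"f{i}" for i in range(20)])   # stand-in data
y = rng.integers(0, 2, size=500)

# 1) Drop zero-variance features.
vt = VarianceThreshold(threshold=0.0)
X_v = pd.DataFrame(vt.fit_transform(X), columns=X.columns[vt.get_support()])

# 2) Drop one feature of every highly correlated pair (Pearson |r| > 0.9).
corr = X_v.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X_p = X_v.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# 3) Keep features with non-zero LASSO (L1) coefficients.
lasso = Lasso(alpha=0.01).fit(X_p, y)
selected = X_p.columns[lasso.coef_ != 0]
print(len(selected), "features retained")
```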

5.2 Explainable Deep Learning Model

Once the reduced features are available as input, the next consideration is to classify the DoS attacks intelligently. Classical machine learning-based solutions are not considered for this classification, since we observed comparatively higher false alarm rates while using support vector machines and other supervised learning algorithms; thus, the approach chosen for classification is deep learning. Various deep learning-based approaches have given outstanding results in terms of accuracy while lagging in reducing the false alarm rates for DoS attacks. This strategy is an attempt to make the neural network gain more knowledge for the classification using fuzzy rules, so that the classification accuracy is improved and a greater reduction in the number of false alarms is achieved.

5.2.1 Deep Neural Network

The computational model chosen for classifying the DoS attacks in wireless networks is a feedforward neural network combined with fuzzy inference rules. Feedforward neural networks consist of an input layer, hidden layers, and an output layer. The input layer is constructed using the 68 LASSO-reduced features, with 68 perceptrons presented the corresponding eigenvector. The input vector is characterized as [x_1, x_2, x_3, ..., x_68], and the number of hidden layers is chosen based on the number of DoS-based attacks in the dataset: eleven hidden layers are considered, and the output function identifies the probability that the considered traffic is an attack or not. The mathematical notation for the feedforward neural network is as follows:

Input: {x_i} = traffic features
Output: y_i = attack class (e.g., y_i = ACK flood)

The output value may be any one of the 11 DoS classifications, such as beacon flooding, a disassociation attack, etc.


The layer just before the output layer comprises 11 units; if any one of these perceptrons returns 1, the final output layer reports the traffic as an attack, otherwise it is marked as not an attack.

Approximation

The success of the constructed feedforward neural network in giving positive results depends on the definition of the approximation function. Approximation means that both the input and the output are known, and we want to construct a model that learns the relationship between the input and output as closely as possible. Generally, the approximation function is defined by approximating this relationship and is represented as ŷ_i. Each hidden layer includes a pre-activation function: the pre-activation at each layer is the weighted sum of the inputs from the previous layer plus a bias. The equations for layer i are

a_i(x) = W_i h_{i−1}(x) + b_i
h_i(x) = g(a_i(x))

where g is called the activation function. Associating parameters with the input vector values makes the neural network learn the approximation function: the associated parameters are initially set to some value, and the network learns the parameters that drive the error function toward zero, so that there is very little difference between the real model and the approximated model.

Parameters

The approximated model is guided by the associated parameters, which learn their weights depending on the input values and the learning algorithm. Our model associates a weight with each of its features, and convergence is reached through the gradient descent algorithm. One more associated parameter is the output obtained from a fuzzy set, which is also fed in as a learning parameter that can instill intelligence into the classification.

Loss Function and Learning Algorithm

The problem type is classification, and the classification is done based on the learning parameters. One of the learning parameters is the set of fuzzy inference rules, which can be understood as the probabilities of occurrence of certain types of events; the occurrence of such events has very low probability. With these points, the loss function is defined in terms of entropy; mathematically, entropy is given as −Σ_i P_i log(P_i). These entropy values are calculated until a convergence point is reached, the point at which the model has learned the training dataset effectively, so that the approximated model looks almost the same as the real model. The prediction accuracy was then evaluated, taking the number of false positives into account.
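Putting the pieces above together, the described network can be sketched in Keras as below. The 68 inputs, 11-way softmax, cross-entropy loss, and gradient descent with learning rate 0.1 follow the text (the 204 hidden units come from Sect. 6), while the ReLU choice for g is an assumption, since the paper leaves the activation unspecified:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_model(n_features: int = 68, n_hidden_layers: int = 11,
                units: int = 204, n_classes: int = 11) -> tf.keras.Model:
    """Feedforward classifier: 68 LASSO-reduced inputs, stacked dense
    hidden layers, and an 11-way softmax over the DoS sub-classes."""
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(n_features,)))
    for _ in range(n_hidden_layers):
        # Pre-activation a_i(x) = W_i h_{i-1}(x) + b_i, then h_i = g(a_i).
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=optimizers.SGD(learning_rate=0.1),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```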


The blueprint for the construction of the deep learning model is thus defined by input, output, and associated parameters, one in the form of weights and the other from the fuzzy set; the loss function is entropy, the output function is measured as softmax, and the learning algorithm is gradient descent. The setup of the deep learning model for our problem is thereby characterized.

Fuzzy Inference Knowledge Base

Having characterized the deep learning structure, the next important input to our learning model is the fuzzy parameters [17–19]. The strategy used to derive the fuzzy rule set, or fuzzy knowledge base, for our deep learning model is the DENFIS framework. DENFIS stands for dynamic evolving neural fuzzy inference system, a fuzzy inference system that can form clusters in an unlabelled dataset; it forms the clusters based on the evolving clustering method (ECM).

ECM Algorithm

Intuition: The algorithm creates clusters dynamically as new data is presented.
Input: The set of all features as an input vector.
Output: Evolving clusters.
Procedure:
• The first cluster is created using the first example in the input vector. If the input vector is denoted I, the first cluster is built as I_0.
• For each value in the input vector:

(a) From each cluster center, find the minimum distance to the input vector, denoted d_min.
(b) This newly computed distance value is used to decide whether the considered input vector should fall inside this cluster or not.
(c) If the computed d_min value is less than the cluster radius, then this newer vector value is absorbed into the cluster.
(d) If the value is greater, a new cluster must be built, considering the minimum value with the next input vector.
(e) Using these computation conditions, the clusters are created.

End.

This algorithm guarantees that, as clusters are created, the position and size of both the existing and newly joined clusters change. The clusters obtained from the AWID dataset are shown in Fig. 3. After cluster creation, rules are generated for each of the clusters; the rules are produced using the Takagi–Sugeno fuzzy rule. A Sugeno-based fuzzy inference system is built because its parameters can be adjusted. A simplified sketch of the clustering step follows.
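The sketch below is one single-pass reading of ECM under stated assumptions: Euclidean distance, a fixed distance threshold dthr, and a common form of the center/radius update (exact update rules vary across ECM descriptions):

```python
import numpy as np

def ecm(samples: np.ndarray, dthr: float):
    """One-pass evolving clustering: each cluster is a (center, radius) pair."""
    centers = [samples[0].astype(float).copy()]
    radii = [0.0]
    for x in samples[1:]:
        x = x.astype(float)
        dists = np.array([np.linalg.norm(x - c) for c in centers])
        j = int(dists.argmin())
        if dists[j] <= radii[j]:
            continue                           # sample already inside a cluster
        if dists[j] + radii[j] <= 2 * dthr:    # absorb: enlarge nearest cluster
            new_r = (dists[j] + radii[j]) / 2.0
            direction = (x - centers[j]) / max(dists[j], 1e-12)
            centers[j] = x - direction * new_r  # center slides toward the sample
            radii[j] = new_r
        else:                                  # too far from everything
            centers.append(x.copy())
            radii.append(0.0)
    return centers, radii

pts = np.random.default_rng(2).normal(size=(200, 2))
centers, radii = ecm(pts, dthr=1.2)
print(len(centers), "clusters formed")
```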


Fig. 3 ECM clusters

Some of the features identified in the AWID dataset for building the inference rules are tcp.flags.syn, tcp.flags.ack, http.usr_agent, and http.request.uri. The rules characterized for these kinds of attacks hold for other comparable attributes as well. 48,484 instances are learned using the ECM algorithm, and 11 clusters are formed. Among the 11 clusters, 4445 instances of deauthentication attacks are identified. The deauthentication attack targets a client associated with an access point by sending a message that this access point is unavailable. This is one form of DoS attack in wireless networks, for which the fuzzy rules are built as shown below:

If radiotap.present.db_antsignal = 0 and wlan.mgt.rsn.capabilities.ptska_replay_counter = NIL and wlan.mgt.rsn.capabilities.gtska_replay_counter = NIL
then
output = radiotap.present.db_antsignal * x + wlan.mgt.rsn.capabilities.ptska_replay_counter * y + wlan.mgt.rsn.capabilities.gtska_replay_counter * z

where x, y, and z are the learning parameters, and the output is the summation of the linear combination, which gives a result in fuzzy values. The values generated are used as training parameter values for the feedforward neural network. The predicted knowledge base to train the feedforward network is thereby built with greater precision in its general form and used as a training parameter within the deep neural network.
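Evaluating such a Takagi–Sugeno rule amounts to a weighted linear combination of the inputs when the antecedents hold. The toy sketch below uses hypothetical values for the learning parameters x, y, and z and for the observed inputs; none of these numbers come from the paper:

```python
def sugeno_rule_output(antecedents_hold: bool, inputs: dict,
                       weights: dict) -> float:
    """One Takagi-Sugeno rule: IF <antecedents> THEN
    output = sum(weight_k * input_k) as a crisp linear consequent."""
    if not antecedents_hold:
        return 0.0
    return sum(weights[name] * value for name, value in inputs.items())

# Hypothetical learning parameters x, y, z for the rule above.
w = {"radiotap.present.db_antsignal": 0.4,
     "wlan.mgt.rsn.capabilities.ptska_replay_counter": 0.35,
     "wlan.mgt.rsn.capabilities.gtska_replay_counter": 0.25}
v = {k: 1.0 for k in w}            # stand-in (already fuzzified) inputs
print(sugeno_rule_output(True, v, w))
```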


Statistical Measures

The quality of the built model is measured using various statistical measures. The classification model that recognizes traffic as an attack or not is measured using the following:

• True positive and false positive: instances correctly and wrongly classified under the normal class.
• True negative and false negative: instances correctly and wrongly classified under the attack class.

Based on these four values, the efficiency of the model is accurately captured using the confusion matrix, which is basically a 2 × 2 matrix represented as

TP   FP
FN   TN

Some of the essential metrics necessary to deem a model successful are as follows:

• Recall: When the trained model is tested with the testing set, the recall value should always be high, as this measure denotes the set of all correctly classified values among all positive values. Mathematically, recall is expressed as

Recall = TP / (TP + FN)

This measure helps us identify the true positive rate.

• Accuracy: All the trained models were tested with the testing set, aiming for higher accuracy values; the higher the accuracy, the lower the number of misclassifications. Mathematically, accuracy is expressed as

Accuracy = (TP + TN) / (TP + TN + FP + FN)

• Precision: During testing, from the set of all positively classified values, precision finds the subset that holds correct positive values.

Precision = TP / (TP + FP)


The method requires the false positive rate to be very low to claim the model is successful. This measure directly identifies the number of misclassified outputs:

FPR = FP / (FP + TN).

• Receiver operating characteristic (ROC) curve: an important measure plotted between the true positive rate (TPR) and the false positive rate (FPR). FPR is generally plotted on the x-axis, while TPR takes the y-axis. • Area under the ROC curve (AUC): denotes the size of the region under the ROC curve. To claim a particular classification model as successful, the AUC value should be high; it can be obtained using the formula below. A small code sketch of these measures follows the list.

A = ∫₀¹ TP/(TP + FN) d( FP/(TN + FP) )
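As a minimal illustration (ours, not the chapter's code), these measures translate directly from the confusion-matrix counts:

    def classification_measures(tp, fp, fn, tn):
        """Compute the measures defined above from confusion-matrix counts."""
        recall = tp / (tp + fn)              # true positive rate
        precision = tp / (tp + fp)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        fpr = fp / (fp + tn)                 # false positive rate
        return {"recall": recall, "precision": precision,
                "accuracy": accuracy, "fpr": fpr}

    # Example with hypothetical counts:
    print(classification_measures(tp=4300, fp=90, fn=145, tn=5000))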

Composite metrics specific to intrusion detection systems have also been proposed in [20, 21].

6 Experimental Design

Python was used to carry out all the experiments on the Windows platform. Lasso-based feature reduction was implemented using the scikit-learn package. The fuzzy-set knowledge base is modeled using the Theano framework. Deep neural networks were built using the TensorFlow and Keras frameworks. The test cases used to evaluate performance on the dataset are as follows: • Full feature consideration and building of the knowledge base. • Reduced feature set and classification to distinguish normal and anomalous traffic. • For a cluster, testing the false positive rate. The first step in the experimentation required an optimal learning parameter. Identification of the optimal learning parameters of the deep neural network started with the AWID reduced dataset, which included 68 perceptrons at the input layer; the output layer comprised one neuron to indicate either the normal or the anomalous pattern. It contains 11 neurons for recognizing all the attacks related to DoS, and it is a fully connected neural network. The next experiment was to identify ideal values for the hidden units. This identification was done by running for 400 epochs with suitable unit counts, and the layer having 204 units gave better classification of attacks. In this way, the


deep learning network with 204 units was used with multi-class classification for DoS attacks, and the number of classes was identified as 11. When the same was attempted with 158 features, the number of units was 2985, which failed to provide a better detection rate. So, optimal DoS attack classification occurred with a smaller number of units. With the results obtained from the experimentation, the ideal hidden-layer width was set to 204 units. The number of epochs required to understand the performance of the hidden units was not stationary; it varied with the number of features and the complexity of the learning. Another important consideration is the value set for the learning rate. The learning rate is neither constant nor a random variable; it must be chosen for good learning performance. It was decided by repeating the experiment twice for 400 epochs over the range [0.01, 1.0]. Training speed is highly dependent on the learning rate, and training was observed to be fastest when the learning rate was close to 0.1. For classification, the experiment was repeated again with a slightly increased number of epochs; this showed promising results when the learning rate was close to 0.1, so the learning rate was fixed at 0.1. In this way, the learning rate, the number of hidden layers, and the number of units in each layer were set through multiple trials. Next, the derivatives for the entropy must be calculated using the gradient descent algorithm, in both forward and backward propagation. The parameters for initialization included the input vectors and the associated weights and biases; the linear combination of each perceptron resulted in the pre-activation function. At each layer before the fifth layer, the pre-activation value was calculated as a linear combination. Gradient descent learned the parameters in both forward and backward propagation, while the gradient chain rule was calculated for backward propagation with the corresponding equations for the fuzzy rules. The derivatives of the fuzzy rule set were computed, and the computed value was passed as part of the gradient chain rule. This setup ensures the incorporation of the knowledge base in the form of a fuzzy set together with the dataset. For every 100 test samples, the cost was computed and displayed.
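A hedged Keras sketch of the configuration described above (68 input features, 204-unit hidden layers, 11 attack classes, learning rate 0.1); the exact number of hidden layers is not fully specified in the text, so two are shown as an assumption:

    import tensorflow as tf
    from tensorflow import keras

    # Sketch of the described fully connected network: 68 reduced AWID
    # attributes in, 11 DoS-related attack classes out. Learning rate 0.1
    # and 400 epochs follow the text; the layer count is an assumption.
    model = keras.Sequential([
        keras.layers.Input(shape=(68,)),
        keras.layers.Dense(204, activation="relu"),
        keras.layers.Dense(204, activation="relu"),
        keras.layers.Dense(11, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=0.1),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    # model.fit(X_train, y_train, epochs=400, validation_split=0.1)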

7 Results and Discussion

This neural network was built using the data points summarized in the arrangement below (Table 3). Initially, the deep learning model learned 10,447 instances of the deauthentication attack together with 1,73,200 instances of normal data without attack, while the system was tested with 4445 instances of deauthentication, which gave an accuracy of 98% with a very high false positive

Table 3 Neural network construct

Training set        Instances    Testing set         Instances
Deauthentication    10,447       Deauthentication    4445
                                 Dissociation        84

Fig. 4 ROC for false positive rate

Table 4 Observed results

Architecture    Precision    Accuracy    Recall
Layer 1         0.96         0.98        0.95
Layer 2         0.95         0.97        0.96
Layer 3         0.97         0.99        0.97
Layer 4         0.97         0.98        0.96

rate. The false positive rate behavior was depicted using the ROC curve, and the patterns were observed (Fig. 4). After this observation of a higher false positive rate, the fuzzy set was trained along with the input parameters, and the statistically observed output is presented in the arrangement below (Table 4). The false positive rates showed considerable improvement with this approach.

8 Conclusion

A different perspective on integrating fuzzy systems and neural networks is proposed in this work. The benchmark AWID intrusion dataset is used to showcase the results obtained with the integration of the neuro-fuzzy model. The use of a deep learning neural network supports the applicability of the proposed methodology in real-world networks. The adopted method gave better results compared to the classical machine learning algorithms used to reduce false alarms in intrusion detection systems.


References

1. Venkatraman, S., Alazab, M.: Use of data visualisation for zero-day malware detection. Secur. Commun. Netw. 2018(1728303) (2018). https://doi.org/10.1155/2018/1728303
2. Nguyen, H.A., Choi, D.: Application of data mining to network intrusion detection: classifier selection model. In: Asia-Pacific Network Operations and Management Symposium, pp. 399–408. Springer (2008)
3. Zargar, S.T., Joshi, J., Tipper, D.: A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Commun. Surv. Tutorials 15(4), 2046–2069 (2013)
4. Marzano, A., Alexander, D., Fonseca, O., et al.: The evolution of bashlite and mirai IoT botnets. In: Proceedings of the 2018 IEEE Symposium on Computers and Communications (ISCC) (2018)
5. Kottler, S.: February 28th DDoS incident report (2018). https://github.blog/2018-03-01-ddos-incident-report/
6. Sudaroli Vijayakumar, D., Ganapathy, S.: Machine learning approach to combat false alarms in wireless networks. Comput. Inf. Sci. 11(3) (2018)
7. Tang, M., Alazab, M., Luo, Y., Donlon, M.: Disclosure of cyber security vulnerabilities: time series modelling. Int. J. Electron. Secur. Digit. Forensics 10(3), 255–275 (2018)
8. Ozgur, A., Erdem, H.: A review of KDD99 dataset usage in intrusion detection and machine learning between 2010 and 2015. PeerJ PrePrints 4(e1954) (2016)
9. Kayacik, H.G., Zincir-Heywood, A.N., Heywood, M.I.: Selecting features for intrusion detection: a feature relevance analysis on KDD 99 intrusion detection datasets. In: Proceedings of the Third Annual Conference on Privacy, Security and Trust, pp. 12–14 (2005)
10. Lotfollahi, M., Jafari Siavoshani, M., Shirali Hossein Zade, R., Saberian, M.: Deep packet: a novel approach for encrypted traffic classification using deep learning. Soft Comput. (2019). https://doi.org/10.1007/s00500-019-04030-2
11. Amosov, O.S., Ivanov, Y.S., Amosova, S.G.: Recognition of abnormal traffic using deep neural networks and fuzzy logic. In: 2019 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon) (2019). https://doi.org/10.1109/fareastcon.2019.8934327
12. Yan, Q., Wang, M., Huang, W., Luo, X., Yu, F.R.: Automatically synthesizing DoS attack traces using generative adversarial networks. Int. J. Mach. Learn. Cybern. (2019). https://doi.org/10.1007/s13042-019-00925-6
13. Basnet, R.B., Shash, R., Johnson, C., Walgren, L., Doleck, T.: Towards detecting and classifying network intrusion traffic using deep learning frameworks. J. Internet Serv. Inf. Secur. (JISIS) 9(4), 1–17 (2019)
14. Chockwanich, N., Visoottiviseth, V.: Intrusion detection by deep learning with TensorFlow. In: 2019 21st International Conference on Advanced Communication Technology (ICACT) (2019). https://doi.org/10.23919/icact.2019.8701969
15. Bhuvaneswari Amma, N.G., Selvakumar, S.: Deep radial intelligence with cumulative incarnation approach for detecting denial of service attacks. Neurocomputing 340, 294–308 (2019). https://doi.org/10.1016/j.neucom.2019.02.047
16. Sudaroli Vijayakumar, D., Ganapathy, S.: Feature reduction using lasso hybrid algorithm in wireless intrusion detection system. Int. J. Innov. Technol. Exploring Eng. 8, 1476–1482 (2019)
17. Zhang, Y., Ishibuchi, H., Wang, S.: Deep Takagi–Sugeno–Kang fuzzy classifier with shared linguistic fuzzy rules. IEEE Trans. Fuzzy Syst. 26(3), 1535–1549 (2018)
18. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017)
19. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2921–2929. Las Vegas, NV, USA (2016)
20. Gu, G., Fogla, P., Dagon, D., Skori, B.: Measuring intrusion detection capability: an information-theoretic approach. In: ACM Symposium on Information, Computer and Communications Security, pp. 90–101 (2006). https://doi.org/10.1145/1128817.1128834
21. Gaffney, J.E., Ulvila, J.W.: Evaluation of intrusion detectors: a decision theory approach. In: IEEE Symposium on Security and Privacy (S&P), pp. 50–61 (2001). https://doi.org/10.1109/SECPRI.2001.924287
22. Panigrahi, R., Borah, S.: A detailed analysis of CICIDS2017 dataset for designing intrusion detection systems. Int. J. Eng. Technol. 7, 479–482 (2018). https://www.researchgate.net/publication/329045441
23. Hansman, S., Hunt, R.: A taxonomy of network and computer attacks. Comput. Secur. 24, 31–43 (2005)
24. Lough, D.: A taxonomy of computer attacks with applications to wireless networks. Ph.D. Dissertation, Virginia Polytechnic Institute, Blacksburg, Virginia (2001)
25. Kim, S.J., Lee, S., Bae, B.: HAS-analyzer: detecting HTTP-based C&C based on the analysis of HTTP activity sets. KSII Trans. Internet Inf. Syst. 8(5), 1801–1816 (2014)
26. Jacob, G., Hund, R., Kruegel, C., et al.: JACKSTRAWS: picking command and control connections from BOT traffic. In: USENIX Conference on Security, p. 29. USENIX Association (2011)
27. Tegeler, F., Fu, X., Vigna, G., et al.: BotFinder: finding bots in network traffic without deep packet inspection. In: Co-NEXT, pp. 349–360 (2012)

Atmospheric Temperature Prediction Using Ensemble Deep Learning Technique

Ashapurna Marndi and G. K. Patra

Abstract For climatology research, temperature is an indispensable parameter that plays a significant role in measuring climate change. Understanding climate change requires understanding the variability in climate and its impact on various activities such as agriculture, solar energy production, travel, and living conditions in extremely cold or hot places. Thus, time and again, researchers have tried to find methods to predict temperature with increasing accuracy. Hitherto, scientists have used complex dynamical climatology models running on high-powered HPC systems to compute temperature at future time stamps. However, in such models, accuracy degrades adversely with prediction lead time, so that with increasing lead time, as needed by an application, the model may fail to produce acceptable outcomes. With the advancement of artificial intelligence, deep learning techniques, especially long short-term memory (LSTM), now provide better solutions with higher efficiency for time series prediction problems. We have enhanced the base LSTM and introduced certain changes to address the prediction of time series data, especially temperature, in alignment with scientific applications. The experimental outcome of our proposed technique convincingly justifies the logic behind the enhanced technique, and it compares favorably against existing approaches. Keywords Ensemble forecasting · Long short-term memory · Artificial intelligence · Temperature prediction · Weather parameter prediction · Multi-level LSTM · Deep learning

A. Marndi (B) · G. K. Patra Academy of Scientific and Innovative Research, Uttar Pradesh, Ghaziabad 201002, India e-mail: [email protected] Council of Scientific and Industrial Research, Fourth Paradigm Institute, Bengaluru, Karnataka 560037, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_18


1 Introduction

Accurate temperature prediction plays an important role in analyzing climate change. Understanding climate change requires understanding the variability in climate and its impact on societies. Daily temperature prediction is used in several informal sectors such as agriculture [1], travel, and usual outdoor activities, and also in solar grid systems, power service load management [2], commercial heating or freezing systems, etc. Temperature prediction becomes even more significant in extreme climates, such as places where the temperature falls significantly or heat waves rise and make living difficult. Temperature prediction is also one of the most important components of weather forecasting, and proper prediction of temperature brings more accuracy to overall weather prediction. However, since the state of temperature depends on various other parameters such as atmospheric pressure, humidity, and wind speed, apart from geographic location, season of the year, etc., it is not easy to predict temperature with high accuracy at all times for all places. Traditionally, such problems have been solved using statistical models and subsequently using dynamical modeling. However, these need huge computational power and long periods of time to solve, and in spite of that, the prediction accuracy is not always satisfactory. With the recent development of deep learning techniques in the domain of artificial intelligence (AI), such predictive analytics problems can be solved in comparatively less time and with more efficient results. In this paper, we propose an ensemble method, built upon long short-term memory (LSTM), a deep learning approach, to predict temperature seven days ahead.

2 Related Work

Over the last decade, several works have addressed prediction problems for atmospheric parameters using AI techniques. One such approach used artificial neural network (ANN) models to forecast air temperature at 1 to 12 h lead times at one-hour intervals [3]. For long-range weather prediction, researchers have proposed different methodologies, such as using neural networks [4], probability theory [5], and ensemble forecasting [6]. In another study, medium-range temperature was forecast [7]. A maximum and minimum temperature prediction problem for seven days ahead was addressed in [8] using a linear regression model and a variation on a functional regression model with input data from the past two days. There is also work on a hybrid model [9] that combines a dynamical model and a statistical approach for weather prediction, and an approach using a support vector machine (SVM) [10] to predict the next day's maximum temperature based on the daily maximum temperature at a particular location.


A short-term weather forecasting model based on historical data taken at different weather stations was proposed in [11]. A hybrid model approach [12] uses a self-organizing feature map (SOFM) and a multi-layer perceptron (MLP) to predict maximum and minimum temperature based on past observations. There is even a solution using artificial neural networks [13] that forecasts short-term temperature with moderate results. Thus, temperature prediction is an important research area that has attracted several scientists; however, the challenge remains to predict with more accuracy and for longer lead times.

3 Methodology

3.1 Parameters for Model Designing

For designing an efficient model, we need to consider all input parameters carefully after in-depth analysis of their impacts. To solve a prediction problem involving multiple input parameters, it is very important to consider all aspects that affect the output and to draw a fine balance of adjustments among them to derive conclusive results. We describe below the aspects that can influence the outcome of this experiment. • Input Parameters: Prediction problems are usually solved in three ways: using statistical methods, dynamical models, and data-driven methods. In statistical time series prediction, data at a future time stamp is predicted using the data of the same parameter from previous time stamps based on certain hypotheses. In a dynamical model, apart from past values of the same parameter, other influential parameters are also included to derive the solution. In the artificial intelligence approach, input parameters are considered as in a dynamical model; however, which of them to retain and which to discard is determined automatically based on their impact. For an atmospheric problem, it is difficult to determine all factors that affect the state of another atmospheric parameter. The state of atmospheric temperature depends on several factors, such as the states of other parameters like humidity, pressure, wind speed and direction, rainfall, cloud position, and seasons, and on geographical conditions like dry land, sea surface, or snow cover, apart from the series of past temperature values. However, it has been experimentally found that humidity, pressure, and wind speed, along with past values of temperature, are sufficient to predict temperature. The influences of the other parameters are probably already captured as part of these four parameters, and they may not add much additional significance. In our experiment, we have considered humidity, pressure, and wind speed along with past values of temperature to predict the temperature at future time stamps, as depicted in Fig. 1 with a schematic diagram.

Fig. 1 Schematic diagram of inputs and output of proposed model (temperature, pressure, humidity, and wind speed over time stamps t1…tn enter the model, which outputs the temperature at tn+336)

• Range of Prediction: The next aspect to consider is the range of prediction, which usually depends on the requirements of the application. It is well known that prediction accuracy degrades as the prediction lead time increases. There are also use cases of temperature prediction with longer lead times where, accordingly, accuracy is compromised. We have considered a medium-range prediction of one week ahead, which is useful in many cases such as cultivation, travel, solar energy production, etc. • Base Model: The next consideration is to select the most suitable base model for solving the prediction problem. The accuracy of the solution basically depends on choosing a suitable model. Several studies found challenges in time series prediction using statistical as well as dynamical modeling. In contrast, the deep learning approach with the long short-term memory (LSTM) network gives better-suited results for such time series predictions, because it has the capability to ignore insignificant patterns and thus filter out the most suitable patterns, leading to better results.

3.2 Long Short-Term Memory (LSTM)

The long short-term memory (LSTM) [14] mainly deals with two states, the cell state (Cₙ) and the hidden state (hₙ), which are updated at every time stamp and propagated to the next time stamp. These two states are updated through three different mechanisms, named the forget gate (fₙ), input gate (iₙ), and output gate (oₙ), as shown in Fig. 2. The forget gate is responsible for discarding unwanted information from the cell state. The input gate is responsible for deciding what new information is to be stored in the

Fig. 2 Architecture of a LSTM network


cell state, while the tanh layer creates that new information to be added. At every time stamp, the old cell state is updated to the new one by removing the information to be forgotten from the previous state and adding new information to the current state. This model is thus capable of selectively choosing hidden patterns and forgetting insignificant patterns, which leads to better results.

i_n = σ(W_i I_n + U_i h_{n−1} + b_i)    (1)

f_n = σ(W_f I_n + U_f h_{n−1} + b_f)    (2)

o_n = σ(W_o I_n + U_o h_{n−1} + b_o)    (3)

C_n = f_n ∗ C_{n−1} + i_n ∗ tanh(W_c I_n + U_c h_{n−1} + b_c)    (4)

h_n = tanh(C_n) ∗ o_n    (5)

W_f, W_i, W_o, W_c and U_f, U_i, U_o, U_c are the weight matrices applied to the current input and the previous hidden state, respectively. The bias vectors corresponding to the gates f_n, i_n, o_n and the cell candidate are denoted b_f, b_i, b_o, b_c, respectively, and the current cell state is denoted C_n. Also, h_{n−1} is the hidden state of the previous time stamp, and σ and tanh are the sigmoid and hyperbolic tangent activation functions, respectively. The input to the LSTM network is represented by I_n. The success of LSTM for time series prediction depends on the collaborative performance of these gates.
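For concreteness, one step of Eqs. (1) to (5) can be written directly in NumPy; the sizes and random initialization below are illustrative assumptions, not the paper's configuration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(I_n, h_prev, C_prev, W, U, b):
        """One LSTM time step following Eqs. (1)-(5).
        W, U, b are dicts holding the per-gate parameters."""
        i_n = sigmoid(W["i"] @ I_n + U["i"] @ h_prev + b["i"])   # Eq. (1)
        f_n = sigmoid(W["f"] @ I_n + U["f"] @ h_prev + b["f"])   # Eq. (2)
        o_n = sigmoid(W["o"] @ I_n + U["o"] @ h_prev + b["o"])   # Eq. (3)
        C_n = f_n * C_prev + i_n * np.tanh(
            W["c"] @ I_n + U["c"] @ h_prev + b["c"])             # Eq. (4)
        h_n = np.tanh(C_n) * o_n                                 # Eq. (5)
        return h_n, C_n

    # Illustrative sizes: 4 input features (T, P, H, wind), 8 hidden units.
    rng = np.random.default_rng(0)
    W = {g: rng.normal(size=(8, 4)) * 0.1 for g in "ifoc"}
    U = {g: rng.normal(size=(8, 8)) * 0.1 for g in "ifoc"}
    b = {g: np.zeros(8) for g in "ifoc"}
    h, C = lstm_step(rng.normal(size=4), np.zeros(8), np.zeros(8), W, U, b)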

3.3 Proposed Method (LSTMx)

We propose an improved ensemble method based on the LSTM algorithm. The main objective of an ensemble method is to combine multiple hypotheses to produce a possibly better result. Our ensemble method consists of two significant steps: • Train the same or different models with multiple subsets of the training data. • Combine the outputs of the first-level LSTMs using an aggregate function to build a better model. In our approach, we prepared six ensemble datasets consisting of data from different years or different ranges of years as training periods, as shown in Fig. 3. We considered six LSTMs, each optimally trained with one of these ensemble datasets. From the whole dataset covering 2010–2014, we used the data from 2010–2013 as training data and the data from 2014 as testing data. We distributed the whole training data in all possible ways to prepare the different ensemble datasets. The first ensemble dataset consists of four years of data, 2010–2013. The second and third ensemble datasets consist of three years of data, 2010–2012 and 2011–2013, respectively.


(Figure content: inputs T, P, H, W over time stamps t1…tn; six training subsets 2010–13, 2010–12, 2011–13, 2010–11, 2011–12, and 2012–13 feed LSTM1 through LSTM6, whose outputs feed LSTMx to produce the temperature at tn+336.)

Fig. 3 Block diagram of multi-level LSTM

The next three ensemble datasets consist of two years of data each: 2010–2011, 2011–2012, and 2012–2013, respectively. The intention behind distributing the training data in this manner is to let the LSTMs capture hidden patterns from different depths and ranges of data. It should bring out the best hidden patterns from multiple such subsets of data. At the same time, a pattern found to be best for a particular subset of data may not perform similarly on another set of data. The outputs obtained at the first level of LSTMs are fed to the second-level LSTM as inputs with random initial weights. The network gets trained and finally becomes ready for the testing phase. The proposed model is compared with one classical model, the autoregressive (AR) model, and two data-driven models, the extreme learning machine (ELM) and the convolutional neural network (CNN), which are discussed in detail as follows:

Autoregressive (AR) model. An autoregressive (AR) model is a time-varying random process, represented by a linear regression model that uses lagged variables as input [15]. The name of the method itself suggests that it is a regression of the variable against itself. The following equation represents an order-n autoregressive model:

x_t = ∑_{i=1}^{n} a_i x_{t−i} + w_t    (6)

where n represents the number of lagged values used in the model, and a_i and w_t represent the model coefficients and white noise, respectively.
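For illustration, Eq. (6) can be fitted by ordinary least squares on lagged values; a minimal sketch (ours, with a toy series):

    import numpy as np

    def fit_ar(x, n):
        """Least-squares estimate of the AR(n) coefficients a_i in Eq. (6)."""
        # Row t of the design matrix holds x[t-1], ..., x[t-n].
        X = np.column_stack([x[n - i: len(x) - i] for i in range(1, n + 1)])
        y = x[n:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a

    x = np.sin(np.linspace(0, 20, 500))     # toy series
    a = fit_ar(x, n=5)
    x_next = a @ x[-1:-6:-1]                # one-step-ahead prediction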


Extreme Learning Machine (ELM). An extreme learning machine (ELM) consists of a single hidden layer; the weights between the input layer and the hidden layer are chosen randomly, while the output weights are determined analytically without tuning the parameters [16].

Convolutional Neural Network (CNN). Convolutional layer: this layer is used for extracting high-level features from data by computing the convolution between a kernel and the input data matrix. The kernel, basically a matrix of weights, moves across the data to extract relevant features by computing the convolution function with the input data. Filters are not exactly the same as kernels but are formed by concatenating multiple kernels. Stride and padding are two important concepts for calculating the feature matrix. The stride specifies the step size of the kernel's movement. Sometimes the output size is smaller than the input size; in that case, padding is generally used. The process of symmetrically adding zeroes to the input matrix is known as zero-padding; it helps in keeping the output size equal to the input size where required.

Pooling layer: the objective of the pooling layer is to reduce the spatial dimensions of the convolved features by decreasing the number of parameters, combining clusters of neurons at one layer into a single neuron in the next layer. Two types of pooling are available, max pooling and average pooling: max pooling takes the maximum value and average pooling takes the average value of the convolved matrix. The functionalities of all layers are combined to detect features from the data. A one-dimensional convolutional layer is used for time series analysis.
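A hedged Keras sketch of such a 1-D CNN regressor; the layer counts and filter sizes are our assumptions, since the paper does not specify its CNN architecture:

    from tensorflow import keras

    # 1-D CNN for time-series regression: 336 past time stamps of
    # 4 parameters (T, P, H, wind) in, one temperature value out.
    cnn = keras.Sequential([
        keras.layers.Input(shape=(336, 4)),
        keras.layers.Conv1D(filters=32, kernel_size=5, padding="same",
                            activation="relu"),   # convolutional layer
        keras.layers.MaxPooling1D(pool_size=2),   # max pooling layer
        keras.layers.Conv1D(filters=16, kernel_size=3, activation="relu"),
        keras.layers.GlobalAveragePooling1D(),    # average pooling
        keras.layers.Dense(1),                    # predicted temperature
    ])
    cnn.compile(optimizer="adam", loss="mse")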

3.4 Data and Study Area

Data plays an important role in data-intensive prediction methods. The quality, quantity, and accuracy of the data are as important as the methodology. The Council of Scientific and Industrial Research (CSIR), India, has established a number of meteorological towers across the country to collect most of the important meteorological parameters. For our studies, we considered data from a tower located in the city of Bengaluru (earlier called Bangalore) in south India during 2010–2014. Four meteorological parameters, temperature (t), pressure (p), humidity (h), and wind speed (v), available at 30-min averaged intervals, have been used for this study. The observed values are from sensors mounted at 20 m height above ground level.


3.5 Experimental Setup

We have tower data at 30-min intervals for the period 2010–2014, taken from the observation site located at Bengaluru. We divided the whole dataset into a training set (2010–2013) and a testing set (2014). We used six LSTM networks to train the six ensemble datasets at level one; each LSTM is optimally trained with its corresponding ensemble dataset. We initiated our experiment with 10 neurons in the first layer and kept increasing that layer by 10 more neurons until the network gave a satisfactory result. Once the number of neurons was fixed, we optimized the network further by adding additional layers, starting from a second layer, until there was no further improvement. We treat the network as optimal when we find no improvement in performance from adding further neurons or layers. We found that the configured network does not perform better beyond 8 hidden layers with 50 neurons in each layer; hence, the number of hidden layers was fixed at 8 and the number of neurons in each hidden layer at 50. Each LSTM is configured to predict temperature seven days in advance. The next important parameter is the epoch count, which defines the number of passes over the training set. The training of the LSTMs was carried out for different epoch counts, from 10 to 200 with an increment of 10 in each iteration. For all the LSTMs, we observed that the network is optimized at epoch 180. This finding was evident from the graph shown in Fig. 4, which depicts the relationship between the number of epochs and

Fig. 4 Training and validation losses of LSTMx


validation loss. However, to give the best result for all possible datasets, we set the epochs to 200, keeping a small safety margin above the optimized value obtained. After fixing the epochs with optimal values of the loss function, we continued our experiment and trained the model optimally with the training data. The training was performed with an input sequence of the last 7 days, each day having 48 time stamps, i.e., 336 time stamps in total, to predict temperature 7 days ahead. Then, we tested the trained model on the testing data, again predicting 7 days ahead. We compared the capability of the proposed model with three other models: the classical statistical autoregressive (AR) model, the extreme learning machine (ELM), and the convolutional neural network (CNN). In AR, the seven-days-ahead temperature is predicted using past values of temperature. ELM is a single-hidden-layer feed-forward network (SLFN) in which neurons in the hidden layer are randomly initiated without further tuning [16]. The experiment was carried out on an Intel(R) Xeon(R) CPU E3-1203 v3 @ 3.30 GHz with 8 cores and 32 GB RAM. The models were implemented in Python using the Keras module of the TensorFlow platform.
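Based on the setup described above, the two-level arrangement could be sketched as follows; this is an approximation under stated assumptions (the paper's code is not published), with shapes following the text: 336 input time stamps of 4 parameters and 8 stacked layers of 50 units each:

    from tensorflow import keras

    def make_lstm(n_layers=8, units=50):
        """First-level LSTM as described: stacked LSTM layers, one output."""
        layers = [keras.layers.Input(shape=(336, 4))]
        for i in range(n_layers):
            layers.append(keras.layers.LSTM(
                units, return_sequences=(i < n_layers - 1)))
        layers.append(keras.layers.Dense(1))   # temperature 7 days ahead
        return keras.Sequential(layers)

    # Six first-level LSTMs, one per ensemble dataset (2010-13, 2010-12, ...).
    level1 = [make_lstm() for _ in range(6)]
    # for m, (X, y) in zip(level1, ensemble_datasets):   # hypothetical data
    #     m.compile(optimizer="adam", loss="mse")
    #     m.fit(X, y, epochs=200)

    # Second level (LSTMx): consumes the six first-level predictions.
    lstmx = keras.Sequential([
        keras.layers.Input(shape=(6, 1)),   # one prediction per level-1 LSTM
        keras.layers.LSTM(50),
        keras.layers.Dense(1),
    ])
    lstmx.compile(optimizer="adam", loss="mse")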

4 Results and Analysis

In our experiment, we implemented the proposed model in three different modes, as discussed earlier. The first is the outcome of the LSTM with the best result at the first level, which we call Normal-LSTM. The second is the average of the results obtained from all the LSTMs at the first level, named Avg-LSTM. The third is the result obtained from the second-level LSTM in our multi-level LSTM approach, named LSTMx, which is also referred to interchangeably as multi-level LSTM in this paper. We analyzed the results in the various ways described below.

4.1 Comparison of Predicted Values with Observed

The meteorologically observed temperature at Bengaluru for the years 2010–2013 was used for training the models, and predictions were made for the year 2014. Seven-days-ahead predictions were carried out for all the modes of the LSTM models and compared with the observed values. Figure 5 depicts a sample predicted output from April 1, 2014, to April 14, 2014. All the models could capture the intra-daily variability of temperature. However, the Normal-LSTM shows a time shift in the prediction, which is not acceptable for practical purposes, despite a correlation coefficient close to 0.6. While the Avg-LSTM does better than the Normal-LSTM, it fails to pick the peaks properly. However, LSTMx could very well capture the intra-daily peaks with


Fig. 5 Comparison of one week ahead predicted temperature among normal-LSTM, Avg-LSTM, and LSTMx

a correlation coefficient of 0.86. One of the striking features of these results is that for a multi-parameter model, a correlation coefficient of close to 0.9 is considered very good.

4.2 Comparison Using Evaluation Metrics

To understand the improved prediction capability, we compared the multi-level LSTM results with some standard classical methods as well as some well-known data-driven methods: the autoregressive model, ELM, and CNN. The accuracy of the proposed model is measured in terms of several evaluation metrics, namely mean absolute scaled error (MASE), root mean square error (RMSE), mean absolute error (MAE), correlation coefficient (CC), and mean absolute percentage error (MAPE). A summary of all these statistical accuracy measures for the proposed model along with AR, ELM, and CNN is presented in Table 1. From Table 1, it is observed that in terms of MASE, RMSE, MAE, CC, and MAPE, our proposed model, LSTMx, based on multi-level LSTM, performs better than the other three models, i.e., AR, ELM, and CNN.

Table 1 Summary of MASE, RMSE, MAE, CC, and MAPE between predicted and observed temperature of different models

         MASE    RMSE    MAE     CC      MAPE
AR       1.47    3.56    2.82    0.37    27.01
ELM      0.9     2.94    2.41    0.72    11.81
CNN      0.94    2.65    2.35    0.80    10.79
LSTMx    0.51    2.47    1.95    0.86    8.56
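The metrics in Table 1 can be computed with a small helper like the following (our own sketch; the MASE scaling by the naive in-sample forecast error is a conventional choice, not stated in the paper):

    import numpy as np

    def evaluation_metrics(obs, pred, train):
        err = obs - pred
        mae = np.mean(np.abs(err))
        rmse = np.sqrt(np.mean(err ** 2))
        mape = 100.0 * np.mean(np.abs(err / obs))
        cc = np.corrcoef(obs, pred)[0, 1]
        # MASE: MAE scaled by the naive one-step forecast error on training data.
        naive_mae = np.mean(np.abs(np.diff(train)))
        mase = mae / naive_mae
        return {"MASE": mase, "RMSE": rmse, "MAE": mae, "CC": cc, "MAPE": mape}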


Fig. 6 Daily highest temperature predicted by three modes of LSTMs and compared with observed data for three months

4.3 Comparison of Daily Maximum Temperature

To better understand the performance difference of LSTMx in comparison with the other modes of LSTM, we compared the daily maximum temperatures. Figure 6 depicts the comparison of the daily highest temperature over three months, predicted using all three modes of LSTM. Though none could capture the variation in the highest temperature exactly, LSTMx provides better prediction than the normal LSTM and the averaged LSTM. Thus, we confirmed our hypothesis that a second-level LSTM can be used to fine-tune the results from the first-level LSTMs to finally give a better prediction result. The main innovation in our approach is that it is a multi-level LSTM with distribution of the input data at different granularities: the first LSTM receives the complete range of data, the second and third receive three-year ranges, and the rest receive two years each. With this variance in data range, the models may weigh interesting patterns with different priorities. Essentially, the frequency of occurrence of a particular pattern in the whole input sequence of an LSTM decides the importance of that pattern for that LSTM. Since the sizes of the input sequences vary across LSTMs, even though the same set of patterns may be used to train them, the outputs of those LSTMs differ for the same input patterns. These outputs are finally consolidated by LSTMx, which assigns appropriate weights to build a better-fitting outcome.

5 Conclusion

Ensemble forecasting is a popular methodology in atmospheric studies. However, the use of ensemble forecasting with artificial intelligence (AI) techniques is relatively new


and has rarely been used in solving complex problems. The use of ensemble forecasting by distributing the input dataset at different granularities and feeding the outputs of first-level LSTMs to the next level is novel. As demonstrated through the experiment, it is clear that the quality and quantity of data, and the way data are provided to the AI models, play an important role. Since the multi-level LSTM deals with more uncertainty as the number of levels increases, it is important to fine-tune the network hyperparameters effectively to carve out a better outcome. On the other hand, with clever use of this technique, various uncertainties in AI-based learning can be minimized by considering different ensembles, as done in this study. In the coming days, with the advancement of such techniques, this may add more power to AI for solving new types of applications with more complexity.

References

1. Hudson, D., Marshall, A.G., Alves, O.: Intra seasonal forecasting of the 2009 summer and winter Australian heat waves using POAMA. Weather Forecast. 26, 257–279 (2011)
2. Robinson, P.J.: Modeling utility load and temperature relationships for use with long-lead forecasts. J. Appl. Meteor. 36, 591–598 (1997)
3. Smith, B.A., McClendon, R.W., Hoogenboom, G.: Improving air temperature prediction with artificial neural networks. Int. J. Comput. Intell. 3(3), 179–186 (2006)
4. Shrivastava, G., Karmakar, S., Kowar, M., Guhathakurta, P.: Application of artificial neural networks in weather forecasting: a comprehensive literature review. Int. J. Comput. Appl. 51, 17–29 (2012)
5. Sadokov, V.P., Kozelcova, V.F., Kuznecova, N.N.: Probabilistic forecast of warm and cold weather in Belorussia. Proc. Russ. Hydrometcentre 345, 144–154 (2011)
6. Astahova, E.D., Alferov, Y.V.: High performance version of the atmosphere spectral model for deterministic and ensemble weather forecast design using multiprocessor systems. Proc. Russ. Hydrometcentre 342, 118–133 (2008)
7. McCollor, D., Stull, R.: Evaluation of probabilistic medium-range temperature forecasts from the North American ensemble forecast system. Weather Forecast. 24, 3–17 (2009)
8. Holmstromm, M., Liu, D., Vo, C.: Machine learning applied to weather forecasting (2016). http://cs229.stanford.edu/proj2016/report/HolmstromLiuVo-MachineLearningAppliedToWeatherForecasting-report.pdf
9. Krasnopolsky, V.M., Fox Rabinovitz, M.S.: Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction. Neural Netw. 19(2), 122–134 (2006)
10. Radhika, Y., Shashi, M.: Atmospheric temperature prediction using support vector machines. Int. J. Comput. Theory Eng. 1(1), 55 (2009)
11. Jakaria, A.H.M., Hossain, M.M., Rahman, M.A.: Smart weather forecasting using machine learning: a case study in Tennessee. In: ACM Mid-Southeast Conference, Gatlinburg, TN (2018)
12. Pal, N.R., Pal, S., Das, J., Majumdar, K.: SOFM-MLP: a hybrid neural network for atmospheric temperature prediction. IEEE Trans. Geosci. Remote Sens. 41(12), 2783–2791 (2003)
13. Hayati, M., Mohebi, Z.: Application of artificial neural networks for temperature forecasting. World Acad. Sci. Eng. Technol. 28(2), 275–279 (2007)
14. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997). https://doi.org/10.1162/neco.1997.9.8.1735
15. Walker, G.: On periodicity in series of related terms. Proc. R. Soc. Lond. 131, 518–532 (1931)
16. Lazarevska, E.: Wind speed prediction with extreme learning machine. In: 2016 IEEE International Conference on Intelligent Systems (IS) (2016). https://doi.org/10.1109/IS.2016.7737415

Reliability Evaluation of Distribution System Based on Interval Type-2 Fuzzy System

Galiveeti Hemakumar Reddy, Akanksha Kedia, Shruti Ramakrishna Gunaga, Raju More, Sadhan Gope, Arup Kumar Goswami, and Nalin B. Dev Choudhury

Abstract The accurate estimation of distribution system reliability depends on the availability of requisite equipment failure data. The equipment failure rate is not deterministic because of the lack of data availability and the uncertainty of system failures. The failure rate is considered as a fuzzy number to incorporate the uncertainty of equipment failures. In this paper, interval type-2 fuzzy systems (IT2FS) are proposed to handle the twofold uncertainty of equipment failure, i.e., objective uncertainty and subjective uncertainty. Monte Carlo simulation (MCS) is employed to estimate the equipment failure probability, and a sampling approach is used to evaluate the failure possibility. The membership functions for the equipment failure rate are approximated using this failure possibility. A fuzzy importance index (FII) is used to rank the impact of a component failure on reliability. The proposed assessment method is tested and validated on the RBTS bus 2 reliability test system. Keywords Monte Carlo simulation · IT2FS · Type-1 fuzzy system · Sampling · Reliability

1 Introduction

In recent days, customers are demanding more reliable power, and system operators are under pressure to maintain proper reliability levels. System reliability depends on the failures of the equipment connected in the system. Equipment failures are unpredictable and random in nature. Hence, the use of average failure rates for reliability

G. Hemakumar Reddy (B) · S. Ramakrishna Gunaga MVJ College of Engineering, Bangalore, Karnataka 560067, India
A. Kedia · A. Kumar Goswami · N. B. Dev Choudhury National Institute of Technology Silchar, Silchar, Assam 780010, India
R. More Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh 462003, India
S. Gope Mizoram University, Aizawl, Mizoram 796004, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_19


evaluation will not be sufficient. The uncertainty in system failures increases due to the lack of data and data acquisition. Analytical methods are used for the prediction of system reliability, but without requisite data availability it is difficult to predict the failure rate properly. To alleviate the calculation error, stochastic methods are implemented to calculate the fault rate. MCS is widely used for distribution system reliability assessment [1]. MCS contemplates the randomness of system failures and directly captures objective uncertainty, but fails to consider subjective uncertainty. Distribution system reliability evaluation includes (1) integration of distributed generators (DGs) [2], (2) time-varying loads [3], (3) time-varying failure rates considering equipment aging [4], (4) integration of electric vehicles [5], (5) uncertain but unavoidable rare weather events [6], and (6) consideration of planned outages [7]. Consideration of all these points makes the distribution system and its reliability problem much more complex. Among these, DG output, EV charging, time-varying load, and weather impacts are mostly uncertain in nature. Predictive and stochastic approaches to handle these uncertainties are not worthwhile when the system complexity is very high, and a precise evaluation of the reliability problem is then not possible. According to Prof. L. A. Zadeh [8], system complexity is inversely related to the precision of the reliability data, as precision and significance are mutually exclusive by nature. L. A. Zadeh introduced the fuzzy method, which is more suitable for incorporating the uncertainties of distribution system equipment in reliability studies. Fuzzy theory has been applied to different power system problems such as prioritization of loads [9], load restoration [10], repair crew dispatch systems [11], and reliability evaluation [12]. It has been observed from the literature that T1FS is mainly used for reliability analysis. T1FS deals with precise and crisp membership values; it can only handle objective uncertainty, not subjective uncertainty, so when complexity is high it fails to handle that uncertainty. To overcome this limitation, Prof. L. A. Zadeh introduced type-2 fuzzy sets [13] to handle subjective and objective uncertainty together. The authors in [14] used IT2FS to evaluate the failure probability of power transformers. In the present work, IT2FS is used for the assessment of distribution system reliability. It is very effective in overcoming the limitations of T1FS, and uncertainties are handled more thoroughly. In IT2FS, the membership function is also fuzzified to handle subjective uncertainty; subjective uncertainty incorporates randomness in the membership grade, i.e., the membership grade value is itself a fuzzy number. To reduce the calculation error, MCS and a sampling method are used to obtain the membership function of the failure rate instead of values approximated from expert knowledge. The FII is determined to find the sensitivity of the equipment on system failure/outage duration.


2 Preliminaries on Fuzzy Sets

2.1 Type 1 Fuzzy Sets

Fuzzy, the word, literally means vagueness and ambiguity. A fuzzy set includes all the elements with varying membership in the set; using the function-theoretic form, the fuzzy set is mapped to a universe of membership values. (1) When the universe of discourse X is discrete and finite:

Ã = μ_Ã(x₁)/x₁ + μ_Ã(x₂)/x₂ + ⋯ = ∑_i μ_Ã(x_i)/x_i,    (1)

where μ_Ã(x_i) is the membership degree of the number x_i. (2) When the universe of discourse X is continuous and infinite:

Ã = ∫ μ_Ã(x)/x.    (2)

The triangular and trapezoidal membership functions are represented by three points (a, b, c) and four points (a, b, c, d), respectively. The varying membership value for different numbers makes it possible to assess the uncertainty of a random number. In T1FS, the membership values are crisp; this is the major limitation of T1FS. Due to this, T1FS is only able to model objective uncertainty.

2.2 Interval Type 2 Fuzzy Sets

IT2FS is a special case of type-2 fuzzy systems. An IT2FS is a combination of at least two T1FS, named the lower and upper boundary membership functions (LMF and UMF). The area between the UMF and LMF is called the footprint of uncertainty (FOU). The membership degree of any number lies between the LMF and UMF values, which shows that the membership grade value is itself a fuzzy number. This unique feature of IT2FS overcomes the limitations of T1FS: IT2FS is useful to model both objective and subjective uncertainty. An IT2FS membership function is represented by nine points (a, b, c, d, e, f, g, i, h), where h is the height of the LMF. The representation of IT2FS is as follows. If the universe of discourse X is discrete and finite:

Ã = ∑_i [μ̲_Ã(x_i), μ̄_Ã(x_i)]/x_i,    (3)


where i = 1, 2, 3, …, N, and μ̲_Ã(x_i) and μ̄_Ã(x_i) are the membership degrees of the number x_i for the LMF and UMF, respectively. If the universe of discourse X is continuous and infinite:

Ã = ∫ [μ̲_Ã(x), μ̄_Ã(x)]/x.    (4)

The complete determination of an IT2FS is done using its FOU. The FOU is determined in terms of the LMF and UMF as follows:

FOU(Ã) = ∪_{x∈X} [μ̲_Ã(x), μ̄_Ã(x)].    (5)

2.3 Fuzzy Arithmetic Operations

Arithmetic operations on fuzzy numbers are formulated using the interval of confidence (α-level sets or α-cuts). The α-level sets are recalled from [15]. The interval operators are applied to T1FS using α-planes [15], and the same are extended to IT2FS. Let FOU(Ã₁) = ∪_∀α α · ([a₁α, ā₁α], [b₁α, b̄₁α]) and FOU(Ã₂) = ∪_∀α α · ([c₁α, c̄₁α], [d₁α, d̄₁α]) be two perfectly normal IT2FS; then the arithmetic operations are as follows:

FOU(Ã₁) ∗ FOU(Ã₂) = ∪_∀α α · ([a₁α, ā₁α] ∗ [c₁α, c̄₁α], [b₁α, b̄₁α] ∗ [d₁α, d̄₁α]).    (6)

The interval operators are applied on the fuzzy numbers Ã₁, Ã₂ to calculate the fuzzy number Ã₃ as follows:

Ã₃ = Ã₁ ∗ Ã₂ = ∪_∀α α · (FOU(Ã₁) ∗ FOU(Ã₂)).    (7)
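A minimal sketch (ours) of the interval arithmetic of Eqs. (6) and (7) on a single α-plane, using addition as the example operator:

    def interval_op(A, B, op):
        """Apply an interval operation to two alpha-cut intervals [lo, hi]."""
        candidates = [op(a, b) for a in A for b in B]
        return [min(candidates), max(candidates)]

    # One alpha-plane of two IT2FS numbers: (LMF interval, UMF interval).
    A1 = ([2.0, 3.0], [1.5, 3.5])
    A2 = ([4.0, 5.0], [3.5, 5.5])

    add = lambda x, y: x + y
    A3 = (interval_op(A1[0], A2[0], add),   # LMF part, as in Eq. (6)
          interval_op(A1[1], A2[1], add))   # UMF part
    print(A3)   # ([6.0, 8.0], [5.0, 9.0])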

3 Membership Function Approximation

3.1 Monte Carlo Simulation

The Monte Carlo simulation method is a computational algorithm based on random sampling. It is applied to simulate the uncertainty of system behavior without going to the actual working scenario. In the distribution system, the occurrence of failure is unpredictable and uncertain, and the lack of failure data leads to more uncertainty in reliability analysis. MCS is adopted


in this work to deal with this uncertainty. The failure probability of different equipment is evaluated considering average failure rates using the MCS method. The mean time to failure (MTTF) of each component is simulated using the exponential distribution, with the mathematical formulation as follows:

MTTF = −(1/λ) × T × log(rand),    (8)

here, λ = failure rate of the component, T = simulation period, and rand = a random number generated using the uniform distribution function. A component is considered failed when its MTTF falls within the simulation period. A large number of samples is required to achieve accurate results. Mathematically, the failure probability of a component is calculated using Eq. (9):

p = n/N,    (9)

here, p = component failure probability. n = number of times component is failed within the simulation period. N = number of iterations.
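Eqs. (8) and (9) translate directly into code; a minimal sketch (ours, with illustrative parameter values):

    import math
    import random

    def failure_probability(lam, T, iterations=100_000):
        """Estimate the component failure probability p = n/N via MCS."""
        n = 0
        for _ in range(iterations):
            # Eq. (8); 1 - random() avoids log(0).
            mttf = -(1.0 / lam) * T * math.log(1.0 - random.random())
            if mttf <= T:      # component fails within the simulation period
                n += 1
        return n / iterations  # Eq. (9)

    print(failure_probability(lam=0.2, T=1.0))   # e.g. 0.2 failures/yr, 1 yr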

3.2 Sampling Approach

The membership grade function (μ) of each component in the system is determined using a sampling approach [16]. Each probability sample is determined by the MCS method, as explained in Sect. 3.1. The MCS method gives a different probability value each time, and these variable probability values are distributed as a possibility distribution. The possibility distribution is approximated as a fuzzy membership function. The accuracy of the fuzzy number approximation depends on the number of samples. The summary of the proposed method for fuzzy membership function approximation is shown as a flowchart in Fig. 1. The samples of failure probability are shown in Fig. 2. The proposed method is validated by considering the failure rate as a random number. Failure probabilities with higher frequency carry higher membership values when converted into the fuzzy membership function; the frequency is directly proportional to the membership value. The failure possibility and the fuzzy membership approximation for different membership functions are shown in Fig. 3.


Fig. 1 MCS and sampling approach for membership function approximation

(Flowchart: start; input failure data; initialize the number of samples K and MCS iterations N; define the probability distribution of failure and the analysis period T; determine the MTTF in each iteration, counting failures whose MTTF lies within T.)

∂h(t)/∂t^β = lim_{t→t₁} (h(t) − h(t₁)) / (t^β − t₁^β),  β > 0.    (1)

A Fractional Model to Study the Diffusion of Cytosolic Calcium


If the fractal derivative with respect to the space variable is also taken, i.e., (f^α, t^β), where α is the fractal dimension for the space variable, then this gives a more generalized form of the fractal derivative:

∂h^α(t)/∂t^β = lim_{t→t₁} (h^α(t) − h^α(t₁)) / (t^β − t₁^β),  (α > 0, β > 0).    (2)

If h(t) is the fractal derivative of f, i.e.,

df(t)/dt^β = h(t).    (3)

On expanding the above,

(f(t) − f(t₁)) / (t^β − t₁^β) = h(t).    (4)

On simplifying,

f′(t) = β t^{β−1} h(t).    (5)

Definition 2.1 Let h(t) be a function which is differentiable in an open interval I. Then, for this function, the fractal integral F_0 I_t^β of order β is defined as

F_0 I_t^β h(t) = β ∫₀^t s^{β−1} h(s) ds,  β > 0,    (6)

and the above satisfies (d/dt^β)[F_0 I_t^β h(t)] = h(t).

Definition 2.2 Let h and its ith-order derivatives (i = 1, 2, 3, …, n) be continuous in the interval (0, ∞); then the fractional derivative of order η given by Caputo [11, Eq. 5, p. 530] is defined as

^C D_a^η h(t) = (1/Γ(n − η)) ∫_a^t h^(n)(τ) / (t − τ)^{η+1−n} dτ,  if n − 1 < η ≤ n, (ℜ(η) > 0, n ∈ ℕ),
^C D_a^η h(t) = ∂^n h(t)/∂t^n,  if η = n,    (7)

where Γ(·) is the Gamma function.


3 Mathematical Formulation of the Model

For the anomalous subdiffusion of calcium in the cytoplasm, the following mathematical model is developed to characterize the process of cytosolic calcium diffusion:

∂C(x, t)/∂t^β = D (∂/∂x^β)(∂C(x, t)/∂x^β),  (0 < β ≤ 1, 0 ≤ x ≤ 100, t > 0).    (8)

The corresponding initial and boundary conditions are:

Initial condition: C(x, 0) = x(1 − x) = g(x), 0 ≤ x ≤ 100 µm.    (9)

Boundary condition: C(0, t) = C(100, t) = 0.    (10)

Earlier, a space–time fractional fractal Boussinesq equation was provided by Yadav and Agarwal [27] in the area of groundwater flow problems. Here we study the model for calcium signaling in cardiac myocytes, which are of cylindrical shape and approximately 100 µm long. In the sequence of increasing importance of fractional operators, in the present work we consider the above model with the integer-order time derivative replaced by the Caputo fractional derivative. The new time-fractional model is

^C D_t^ρ C(x, t) = D (∂/∂x^β)(∂C(x, t)/∂x^β),  (0 < ρ ≤ 1, 0 < β ≤ 1, 0 ≤ x ≤ 100, t > 0),    (11)

where ρ denotes the order of the fractional derivative with respect to t, and the fractal dimension is denoted by β. The diffusion term is taken in the following form:

(∂/∂x^β)(∂C(x, t)/∂x^β) = ((1 − β)/β) x^{1−2β} ∂C(x, t)/∂x + (x^{2−2β}/β²) ∂²C(x, t)/∂x²,  β ∈ (0, 1).    (12)

On using (12) in (11),

^C D_t^ρ C(x, t) = D ((1 − β)/β) x^{1−2β} ∂C(x, t)/∂x + D (x^{2−2β}/β²) ∂²C(x, t)/∂x².    (13)

Substituting D ((1 − β)/β) x^{1−2β} = k and D x^{2−2β}/β² = l, (13) becomes

^C D_t^ρ C(x, t) = k ∂C(x, t)/∂x + l ∂²C(x, t)/∂x²,  0 < β < 1, ρ ∈ (0, 1].    (14)


The above equation is the fractal–fractional formulation of our problem. Let m = 1/M and h = 1/N define the grid sizes in space and time, where M and N are positive integers. The numbers x_u = um and t_v = vh are the grid points for the space and time intervals [0, x] and [0, t], respectively. C_u^v = C(x_u, t_v) and g_u = g(x_u) correspond to C(x, t) and g(x). Murio [23] gave the discretization of the Caputo fractional derivative, i.e.,

∂^ρ C(x_u, t_v)/∂t^ρ
 = (1/Γ(1 − ρ)) ∫₀^{t_v} (∂C(x_u, p)/∂t) (t_v − p)^{−ρ} dp
 = (1/Γ(1 − ρ)) ∑_{i=1}^{v} ∫_{(i−1)h}^{ih} [ (C_u^i − C_u^{i−1})/h + o(h) ] (vh − p)^{−ρ} dp
 = (1/Γ(1 − ρ)) (1/(1 − ρ)) ∑_{i=1}^{v} [ (C_u^i − C_u^{i−1})/h + o(h) ] [ (v − i + 1)^{1−ρ} − (v − i)^{1−ρ} ] h^{1−ρ}
 = (1/Γ(2 − ρ)) (1/h^ρ) ∑_{i=1}^{v} (C_u^i − C_u^{i−1}) [ (v − i + 1)^{1−ρ} − (v − i)^{1−ρ} ]
   + (1/Γ(2 − ρ)) ∑_{i=1}^{v} [ (v − i + 1)^{1−ρ} − (v − i)^{1−ρ} ] o(h^{2−ρ}).    (15)

Taking

τ_{ρ,h} = (1/Γ(2 − ρ)) (1/h^ρ),  (⇒ τ_{ρ,h} > 0), 0 < ρ ≤ 1,    (16)

and

ω_i^{(ρ)} = i^{1−ρ} − (i − 1)^{1−ρ},  i = 1, 2, 3, …, m.    (17)

On using the above substitutions, (15) reduces to

∂^ρ C(x_u, t_v)/∂t^ρ = τ_{ρ,h} ∑_{i=1}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i}) + (1/Γ(2 − ρ)) v^{1−ρ} o(h^{2−ρ})
 = τ_{ρ,h} ∑_{i=1}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i}) + o(h).    (18)

Thus, an approximate expression for the Caputo fractional derivative is given by

∂^ρ C(x_u, t_v)/∂t^ρ ≈ τ_{ρ,h} ∑_{i=1}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i}).    (19)
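The approximation (19) is easy to verify numerically; the sketch below (ours) applies Eqs. (16), (17), and (19) to h(t) = t, whose exact Caputo derivative of order ρ is t^{1−ρ}/Γ(2 − ρ):

    import math
    import numpy as np

    rho, h = 0.7, 0.001
    t = np.arange(0, 1 + h, h)        # grid t_v = v*h
    C = t.copy()                       # test function h(t) = t on the grid
    v = len(t) - 1

    tau = 1.0 / (math.gamma(2 - rho) * h ** rho)         # Eq. (16)
    i = np.arange(1, v + 1)
    w = i ** (1 - rho) - (i - 1) ** (1 - rho)             # Eq. (17)

    # Eq. (19): sum over i of w_i * (C^{v-i+1} - C^{v-i})
    diffs = C[v - i + 1] - C[v - i]
    approx = tau * np.sum(w * diffs)
    exact = t[v] ** (1 - rho) / math.gamma(2 - rho)
    print(approx, exact)               # the two values agree closely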


4 Numerical Approximation of the Fractal–Fractional Equation of the Model

In this section, with the help of [23], the numerical approximation of (13) using the Crank–Nicolson method is presented; corresponding to the grid points centered at (x_u, t_v) = (ud, vh), the approximate solution is obtained as (for u = 1, 2, …, N − 1)

τ_{ρ,h} ∑_{i=1}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i}) + o(h)
 = (k/2d)(C_{u−1}^v − C_u^v + C_{u−1}^{v−1} − C_u^{v−1})
 + (l/2d²)(C_{u−1}^v − 2C_u^v + C_{u+1}^v + C_{u−1}^{v−1} − 2C_u^{v−1} + C_{u+1}^{v−1}) + o(d²).    (20)

The above equation shows the consistency of the Crank–Nicolson method. On simplifying,

τ_{ρ,h} ∑_{i=1}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i})
 = (k/2d)(C_{u−1}^v − C_u^v + C_{u−1}^{v−1} − C_u^{v−1})
 + (l/2d²)(C_{u−1}^v − 2C_u^v + C_{u+1}^v + C_{u−1}^{v−1} − 2C_u^{v−1} + C_{u+1}^{v−1}) + T(x, t),    (21)

where T(x, t) is the truncation term. Solving with the finite-difference-based Crank–Nicolson scheme, the resulting equation is obtained as follows:

τ_{ρ,h} ω_1^{(ρ)} (C_u^v − C_u^{v−1}) = −τ_{ρ,h} ∑_{i=2}^{v} ω_i^{(ρ)} (C_u^{v−i+1} − C_u^{v−i})
 + (k/2d)(C_{u−1}^v − C_u^v + C_{u−1}^{v−1} − C_u^{v−1})
 + (l/2d²)(C_{u−1}^v − 2C_u^v + C_{u+1}^v + C_{u−1}^{v−1} − 2C_u^{v−1} + C_{u+1}^{v−1}).    (22)

Assuming k/2d = ξ, l/2d² = φ, ω_1^{(ρ)} = 1 and v = 1 in (22),

−(ξ + φ)C_{u−1}^1 + (τ_{ρ,h} + ξ + 2φ)C_u^1 − φC_{u+1}^1 = (τ_{ρ,h} − ξ − 2φ)C_u^0 + (ξ + φ)C_{u−1}^0 + φC_{u+1}^0.    (23)

591

For v ≥ 1, v v + (τρ,h + ξ + 2φ)Cuv − φCu+1 − (ξ + φ)Cu−1 v−1 v−1 = (τρ,h − ξ − 2φ)Civ−1 + (ξ + φ)Cu−1 + φCu+1 v (ρ) − τρ,h ωi (Cuv−i+1 − Cuv−i ),

(24)

i=2

with initial conditions Cu0 = g(xu ), u = 1, 2, . . . , N − 1. Now, we will prove the unconditional stability of the fractal–fractional equation of the model by using a method similar to the von Neumann stability analysis which is applied for the stability analysis of the finite difference scheme used to solve linear partial differential equations.

5 Analysis of Stability Theorem 5.1 The fractional Crank–Nicolson scheme for discretization applied on (13) is unconditionally stable for ρ ∈ (0, 1], β ∈ (0, 1]. √ Proof Substituting Cuv = γv eiuωd in (24), here i = −1, γv , ω are real, gives, − (ξ + φ)γv ei(u−1)ωd + (τρ,h + ξ + 2φ)γv eiuωd − φγv ei(u+1)ωd = (τρ,h − ξ − 2φ)γv−1 eiuωd + (−ξ + φ)γv−1 ei(u−1)ωd v ρ + φγv−1 ei(u−1)ωd − τρ,h ωi (γv−i+1 eiuωd − γv−i eiuωd ).

(25)

i=2

On simplifying, −(ξ + φ)γv e−iωd + (τρ,h + ξ + 2φ)γv − φγv eiωd = (τρ,h − ξ − 2φ)γv−1 + (−ξ + φ)γv−1 e−iωd − τρ,h

v ρ ωi (γv−i+1 − γv−i ), i=2

(26)

γv =

[−ξ e−iωd + 2φ cos(ωd ) + τρ,h − ξ − 2φ]γv−1 − τρ,h

v

(ρ) i=2 ωi (γv−i+1 − γv−i )

−ξ e−iωd − 2φ cos(ωd ) + τρ,h + ξ + 2φ

.

(27)

592

Kritika et al.

Since γv is real, (ξ +φ)(τρ,h +2φ) γv−1 + (ξ + φ)[2 − cos(ωd )]) − 2 τρ,h

ρ v )] − 1 + (ξ +2φ)[1−cos(ωd i=2 ωi (γv−i+1 − γv−i ) τρ,h γv = .

2 (ξ +2φ)[1−cos(ωd )] 2 2 1+ + ξ sin (ωd ) τρ,h (28)

2 )] It is observed that 1 + (ξ +2φ)[1−cos(ωd + ξ 2 sin2 (ωd ) ≥ 1 for all the parameters τρ,h ρ, v, ω, d and h. Therefore, it follows that, 1+

2φ cos(ωd ) (2τρ,h 2 τρ,h



 2φ cos(ωd ) (ξ + φ)(τρ,h + 2φ) γ1 ≤ 1 + (2τρ,h + (ξ + φ)[2 − cos(ωd )]) − γ0 , 2 2 τρ,h τρ,h (29) and   (ξ + φ)(τρ,h + 2φ) 2φ cos(ωd ) γv ≤ 1 + (2τρ,h + (ξ + φ)[2 − cos(ωd )]) − γv−1 2 2 τρ,h τρ,h   v (ξ + 2φ)[1 − cos(ωd )] ρ − 1+ ωi (γv−i+1 − γv−i ), (30) τρ,h i=2

For v = 2 we have,   2φ cos(ωd ) (ξ + φ)(τρ,h + 2φ) γ2 ≤ 1 + (2τρ,h + (ξ + φ)[2 − cos(ωd )]) − γ1 2 2 τρ,h τρ,h   (ξ + 2φ)[1 − cos(ωd )] ω2ρ (γ1 − γ0 ). − 1+ (31) τρ,h On repetition of this process for γi , i = 3, 4, . . . , v, 

 (ξ + φ)(τρ,h + 2φ) 2φ cos(ωd ) γv ≤ 1 + (2τρ,h + (ξ + φ)[2 − cos(ωd )]) − γv−1 2 2 τρ,h τρ,h   v (ξ + 2φ)[1 − cos(ωd )] ρ − 1+ ωi (γv−i+1 − γv−i ). (32) τρ,h i=3

The summation part of the above equation is negative; therefore, the inequalities (31) and (32) implies γv ≤ γv−1 ≤ γv−2 · · · ≤ γ0 . Thus, γv = |Cuv | ≤ γ0 = |Cu0 | = ||gu ||, which implies ||Cuv || = ||gu || and hence the unconditional stability of the numerical scheme is proved.


6 Results and Discussion

The Crank–Nicolson method is used to analyze the developed model, which incorporates a fractional derivative with respect to the time variable and a fractal derivative with respect to the space variable; the same discretization provides the required numerical simulations. The graphs are plotted for fixed β = 1 and ρ = 0.6, 0.7, 0.8, 0.9 in Fig. 1, and for ρ = 1 and β = 0.6, 0.7, 0.8, 0.9 in Fig. 2. Figure 1 reveals that calcium remains at a higher concentration level over a larger region after being fired within the cell. Comparing Figs. 1 and 2, which correspond to the integer value of β (β = 1) and fractional values of β, respectively, it is observed that the propagation of the cytosolic calcium concentration is slower and requires a longer time in the fractional model than in the case of classical Fickian diffusion, showing the anomalous character of the diffusive mass transport. The behavior of the newly developed model is depicted in Fig. 3.

Fig. 1 Graphical representation of the concentration profile of cytosolic calcium corresponding to various values of ρ and a fixed value of β = 1 (surface plots of C(x, t) against x and t; panels: a ρ = 0.6, β = 1; b ρ = 0.7, β = 1; c ρ = 0.8, β = 1; d ρ = 0.9, β = 1)


Fig. 2 Graphical representation of the concentration profile of cytosolic calcium corresponding to various values of β and a fixed value of ρ = 1 (surface plots of C(x, t) against x and t; panels: a ρ = 1, β = 0.6; b ρ = 1, β = 0.7; c ρ = 1, β = 0.8; d ρ = 1, β = 0.9)

The simulation of the solution of the fractal–fractional model illustrates how the concentration of cytosolic calcium changes periodically in the positive x-direction. For the fractal derivative, the greater the fractal dimension (β), the faster the diffusion of mass; i.e., as the fractional values of β increase, the passage of calcium concentration within the cell increases. From the simulation, the observed subdiffusion arises in a concentration-dependent manner: it persists for intermediate times and later collapses back to normal diffusion. The observed subdiffusion occurring in the cytoplasm is a transient feature. Similar simulations were obtained by Yadav and Agarwal [27] to study groundwater flow in a fractured aquifer.

Fig. 3 Graphical representation of the concentration profile of cytosolic calcium corresponding to various values of β and ρ (surface plots of C(x, t) against x and t; panels: a ρ = 0.8, β = 0.8; b ρ = 0.6, β = 0.6; c ρ = 0.9, β = 0.8; d ρ = 0.8, β = 0.6)

7 Conclusion

In this paper, a mathematical model is presented to observe the diffusion of cytosolic calcium and, more specifically, its anomalous subdiffusive behavior; hence, a fractal dimension is introduced in the diffusion equation to describe the process. The model is solved by the Crank–Nicolson method, and the required numerical simulations are carried out from the results. A stability analysis is also performed, proving that the Crank–Nicolson scheme is unconditionally stable. The fractional model (fractionalized by using the Caputo derivative with respect to the time variable) developed in this work explains the anomalous subdiffusion of calcium in the cytosol more accurately because of the non-locality of the operator: the next state of the system depends not only upon its current state but also upon all of its preceding states, and in anomalous diffusion, unlike normal diffusion, the movement of molecules depends on the previous states. From the fractional model, it is observed that the subdiffusive behavior is transient.


References

1. Agarwal, R., Jain, S., Agarwal, R.P.: Mathematical modelling and analysis of dynamics of cytosolic calcium ion in astrocytes using fractional calculus. J. Fract. Calc. Appl. 9(2), 1–12 (2018)
2. Agarwal, R., Kritika, Purohit, S.D.: Fractional order mathematical model for the cell cycle of a tumour cell. Fractional Calculus in Medical and Health Science, p. 19. CRC Press, Boca Raton (2019)
3. Agarwal, R., Purohit, S.D., Kritika: A mathematical fractional model with nonsingular kernel for thrombin receptor activation in calcium signalling. Math. Methods Appl. Sci. 42, 7160–7171 (2019)
4. Agarwal, R., Kritika, Purohit, S.D., Kumar, D.: Mathematical modelling of cytosolic calcium concentration distribution using non-local fractional operator. Discrete Contin. Dyn. Syst. S (2020, in press)
5. Analytic solution of fractional advection dispersion equation with decay for contaminant transport in porous media. Mat. Vesn. 71, 5–15 (2019)
6. Asif, N.A., Hammouch, Z., Riaz, M.B., Bulut, H.: Analytical solution of a Maxwell fluid with slip effects in view of the Caputo-Fabrizio derivative. Eur. Phys. J. Plus 133, 272 (2018)
7. Atangana, A., Gómez-Aguilar, J.F.: Fractional derivative with no-index law property: application to chaos and statistic. Chaos Solitons Fractals 114, 516–535 (2018)
8. Backx, P.H., De Tombe, P.P., Van Deen, J.H., Mulder, B.J., Ter Keurs, H.E.: A model of propagating calcium-induced calcium release mediated by calcium diffusion. J. Physiol. 27, 963–977 (1989)
9. Bagley, R.L., Torvik, P.: A theoretical basis for the application of fractional calculus to viscoelasticity. J. Rheol. 27, 201–210 (1983)
10. Baleanu, D., Güvenç, Z.B., Machado, J.T.: New Trends in Nanotechnology and Fractional Calculus Applications. Springer, New York (2010)
11. Caputo, M.: Linear models of dissipation whose Q is almost frequency independent. Geophys. J. Int. 13, 529–539 (1967)
12. Chen, W.: Time-space fabric underlying anomalous diffusion. Chaos Solitons Fractals 28, 923–929 (2006)
13. Girard, S., Lückhoff, A., Lechleiter, J., Clampham, D.: Two-dimensional model of calcium waves reproduces the patterns observed in Xenopus oocytes. Biophys. J. 61, 509–517 (1992)
14. Gomez, F., Saad, K.: Coupled reaction-diffusion waves in a chemical system via fractional derivative in Liouville-Caputo sense. Rev. Mex. Fis. 64, 539–547 (2018)
15. Jarad, F., Abdeljawad, T., Hammouch, Z.: On a class of ordinary differential equations in the frame of Atangana-Baleanu fractional derivative. Chaos Solitons Fractals 117, 16–20 (2018)
16. Kanno, R.: Representation of random walk in fractal space-time. Phys. A: Stat. Mech. Appl. 248, 165–175 (1998)
17. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations, vol. 204. Elsevier, New York (2006)
18. Kumar, D., Singh, J., Baleanu, D.: Numerical computation of a fractional model of differential-difference equation. J. Comput. Nonlinear Dyn. 11 (2016)
19. Liu, F., Anh, V., Turner, I.: Numerical solution of the fractional order advection-dispersion equation. In: Proceedings of BAIL2002, pp. 159–164 (2002)
20. Liu, F., Anh, V., Turner, I.: Numerical solution of the space fractional Fokker-Planck equation. J. Comput. Appl. Math. 166, 209–219 (2004)
21. Lu, L., Xia, L., Ye, X., Cheng, H.: Simulation of the effect of rogue ryanodine receptors on a calcium wave in ventricular myocytes with heart failure. Phys. Biol. 7, 026005 (2010)
22. Carpinteri, A., Mainardi, F.: Fractals and Fractional Calculus in Continuum Mechanics, pp. 291–348. Springer, Wien (1997)
23. Murio, D.A.: Implicit finite difference approximation for time fractional diffusion equations. Comput. Math. Appl. 56, 1138–1145 (2008)
24. Saha Ray, S., Bulut, H., Baskonus, H.M., Belgacem, F.B.M.: The analytical solution of some fractional ordinary differential equations by the Sumudu transform method. Abstr. Appl. Anal. 2013, 203875 (2013)
25. Sokolov, I.M., Klafter, J.: Field-induced dispersion in subdiffusion. Phys. Rev. Lett. 97, 140602 (2006)
26. Tan, W., Fu, C., Xie, W., Cheng, H.: An anomalous subdiffusion model for calcium spark in cardiac myocytes. Appl. Phys. Lett. 91, 183901 (2007)
27. Yadav, M.P., Agarwal, R.: Numerical investigation of fractional-fractal Boussinesq equation. Chaos 29, 013109 (2019)

Signal Processing Techniques for Coherence Analysis Between ECG and EEG Signals with a Case Study

Rajesh Polepogu and Naveen Kumar Vaegae

Abstract Each organ of the human body has some synchronism, affiliation and connection to one another. Typically, the biomedical signal electrocardiography (ECG) and electroencephalography (EEG) are considered as the main signals for analysis and handling of circulatory and central nervous system-related anomalies. In this scenario, non-invasive, cost-effective, minimal effort and precise continuous monitoring of aforementioned systems is needed. Stand-alone ECG and EEG signals utilized in the investigation of corresponding abnormalities may lead to insufficient analysis and inadequate results in many situations where the abnormalities are interrelated to both the systems. Both ECG and EEG signals are utilized simultaneously in neurocardiology for study of heart- and brain-related problems. Thus, signal processing of such signals is of utmost importance. Coherence analysis is an important signal processing technique to correlate ECG and EEG signals. In this paper, magnitude squared coherence (MSC) and phase coherence (PC) are used to determine the coherence between the ECG and EEG signals. Also, various mathematical and experimental techniques used to determine the coherence between circulatory and nervous system are discussed concurrently to signify the necessity of adaptive and intelligent signal processing techniques. A case study is presented with statistical results and graphical illustrations to validate the use of signal processing techniques. The results of proposed techniques can be used to construct an intelligent decision-making system for early prediction and detection of various abnormalities of neuro-cardiac systems. Keywords Electrocardiogram (ECG) · Electroencephalogram (EEG) · Intelligent decision systems · Magnitude squared coherence · Phase coherence · Signal processing

R. Polepogu · N. K. Vaegae (B) School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India e-mail: [email protected] R. Polepogu e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_48


1 Introduction Usually, biomedical signals convey significant information about the functional systems of human beings. Efficient processing of bio-signals provides clinically important analysis of physiological systems. The circulatory system and the nervous system are the two most indispensable correlated systems, associating the human heart and brain [1]. The four major interactions between heart and brain are depicted in Fig. 1 [1]. Furthermore, it is seen that clinically many abnormalities of the heart and the brain cannot be investigated in segregation. So electrocardiography (ECG) and electroencephalography (EEG) signals can be analyzed simultaneously for assessing the functional association between heart and brain. In cardiology, ECG signals are used for examining the functioning of the heart and its variations from standard values. The ECG is a straightforward test for the assessment of basic issues of the heart. The outcome of an ECG evaluation does not preclude coronary illness; as such, extra tests may be suggested, particularly when the ECG affirms the suspicion of an inborn heart defect [2, 3]. In neurology, EEG signals are used for assessing the functioning of the brain; however, these by themselves are not adequate to investigate a wide range of disorders. Cardiovascular and neurological issues frequently influence one another in one way or another; for instance, neurologic considerations are basic in a few kinds of coronary illness. Some other prominent correlated issues are cardioembolic stroke, neurological coronary illness and neuro-cardiac syndromes [4, 5]. In the literature, various mathematical and experimental techniques are reported for the determination of the functional association between cardiac and brain signals. In 1961, Asahina and Matsui identified the relationship between the sleep EEG and cardiac activity using polygraphic tachograph recordings [6]. In 2003, Ako et al. described the relationship between the electric activity of the brain and variations of heart rate during sleep using polysomnography recordings, in which they examined

Fig. 1 Heart–brain interactions [1]


the oscillations of the nervous system [7]. In 2003, Coccagna et al. inspected cardiocirculatory disorders and cardiocirculatory variations during sleep for normal subjects [8]. In 2008, Khandoker et al. observed the interaction between the ECG and sleep EEG signals for obstructive sleep apnea events in the presence and absence of arousals [9]. In 2008, Kokonozi et al. reported on the interaction and complexity between brain and heart rate for sleep-deprived subjects using a correlation method [10]. In 2009, Lin et al. described coherence analysis between PPG and respiration signals using a bi-variate AR model; the outcomes of this technique revealed that PPG and respiration signals are in high coherence [11]. In 2010, Yang et al. reported a single-channel EEG-EMG coherence investigation that revealed muscle fatigue-associated progressive variations in cortico-muscular coupling; the purpose of that examination was to address this issue by measuring single-channel EMG-EEG coherence [12]. In 2010, Hu et al. reported the contribution of the recording reference to EEG activity, phase synchronization and coherence; the experiment demonstrated the considerable effect of the recording reference on the bi-variate measurements [13]. In 2014, Lin et al. reported correlations between the signal complexity of cerebral and cardiac functionalities using multiscale entropy [14]. In epileptic patients, a time-variant coherence analysis between changes in heart rate and the electric activity of brain signals was carried out [15]. In 2015, Chiu et al. examined the complexity of the ECG for anticipating the variations in stress-induced alpha-waves in patients undergoing cardiovascular catheterization [16]. In 2016, Mensen et al. showed that the presence of individual slow waves in sleep can be predicted by heart rate using near-infrared spectroscopy (NIRS) recording [17]. In 2017, Ramasamy and Varadan described heart–brain interactions through ECG signals, EEG signals and emotions using music therapy and emotion-based analysis [18]. In 2018, Nakahata and Hagiwara reported on the relationship between EEG and ECG using subjective evaluation and EEG 1100 experimental equipment [19]. From the methods in the literature, it is noticed that, when investigating the relation between the ECG and EEG signals of healthy subjects, there was no proper analysis of the functional association between the regions of the brain and the corresponding heart signal. In this aspect, the analysis of the functional association between ECG and EEG is investigated here such that the results can be further used to construct an intelligent decision-making system for the early detection of abnormalities. Section 2 discusses prominent techniques in the literature to analyze the relation between heart and brain. In Sect. 3, signal processing-based coherence analysis between heart and brain using MSC/PC is proposed. Section 4 presents a case study to validate the MSC and PC techniques, and Sect. 5 summarizes the work.

2 Coherence Analysis Techniques Biomedical signals are used by various scientists and researchers to study the functioning of the human body and to detect indications of various diseases. ECG signals


help in determining heart functioning, and EEG signals help in determining brain functioning. ECG and EEG signals are used for the measurement of coherence. Coherence is a technique used to study the interrelationship between two signals. The minimum value of coherence is 0, and the maximum value is 1. Values of coherence above 0.5 are considered to be significant. For the coherence study, various signal processing techniques and transforms are available: the fast Fourier transform (FFT), power spectral density estimation methods, the minimum variance distortionless response (MVDR) method and wavelet transforms. The FFT has some limitations related to frequency and time components. The wavelet transform is an effective tool for finding the coherence between two signals and uses the continuous wavelet transform (CWT). A large signal is analyzed at the local level by using wavelets, and the coefficients of the wavelet show the points of discontinuity in the signal. Wavelet analysis scales the signal and then shifts it, and it also helps in compressing the signal without loss of quality.

2.1 Mathematical Techniques In 2008, Khandoker et al. described the relationship between the EEG during sleep and the corresponding cardiac signals under obstructive sleep apnea events using power spectral density estimation techniques. The outcome of the investigation was that the correlation between EEG and ECG signals in rapid eye movement sleep is higher than that in non-rapid eye movement sleep; the effect of the presence and absence of arousals was not analyzed [20]. In 2010, Singh et al. calculated the correlation among EEG and ECG signals using power spectral density estimation methods. The Welch method was used for spectral estimation to analyze the functional relation of EEG and ECG signals at a specified frequency; they also calculated the magnitude squared coherence value, where values above 0.5 are considered to be significant [21]. In 2010, Abdullah et al. depicted the cross-correlation of heart rate variations and EEG frequency bands for the classification of sleep apneas using experiments in a univariant Gaussian method. In this analysis, it was found that the cross-correlation of heart rate variations and EEG frequency bands can be used as an important parameter to differentiate various apneas [22]. In 2011, Golińska introduced the coherence function in neurology, cardiology and gynecology using the Welch method and the minimum variance distortionless response (MVDR) method, covering uses of the coherence function in neurology, cardiology and uterine contraction activity studies [23]. In 2012, Singh et al. assessed the functional association of heart and brain signals at different respiratory rates utilizing power spectral density estimation methods; they investigated the functional association of heart and brain signals at a specified frequency and under various frequency ranges based on the magnitude squared coherence value, where values above 0.5 are considered to be significant [24]. In 2014, Chaudhary et al. used auto-regressive methods for the estimation of coherence between EEG and ECG signals; coherence is analyzed by estimating the magnitude squared coherence value, and values above 0.5 are considered to be significant


[25]. The minimum variance distortionless response (MVDR) method was proposed by Capon in 1969 [26]. This method in many cases gives more precise results than the Welch method [27–29]. It is based on a specific filter designed to minimize the power of the output signal [26–30]. More information about the MVDR method can be found in the papers by Benesty et al. [27, 28] and Capon [26].
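To illustrate the MVDR estimate mentioned above, a small hypothetical Python sketch follows; the filter order, the test signal, and the scaling convention are assumptions on our part (normalizations differ between [26] and [27, 28]).

```python
# Hypothetical sketch of an MVDR (Capon) PSD estimate, S(f) = 1 / (e^H R^-1 e);
# not code from the cited papers. Filter order p and scaling are assumptions.
import numpy as np

def mvdr_psd(x, p=30, nfreq=256, fs=1.0):
    n = len(x)
    # Biased sample autocorrelation and the p x p Toeplitz matrix R
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p)])
    R_inv = np.linalg.pinv(np.array([[r[abs(i - j)] for j in range(p)]
                                     for i in range(p)]))
    freqs = np.linspace(0, fs / 2, nfreq)
    psd = np.empty(nfreq)
    for idx, f in enumerate(freqs):
        e = np.exp(2j * np.pi * f / fs * np.arange(p))   # steering vector e(f)
        psd[idx] = 1.0 / np.real(e.conj() @ R_inv @ e)   # Capon estimate
    return freqs, psd

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)
f, S = mvdr_psd(x, p=40, nfreq=512, fs=fs)
print("spectral peak near", f[np.argmax(S)], "Hz")       # expected near 50 Hz
```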

2.2 Experimental Techniques In 1961, Asahina et al. described a relation between the sleep EEG and circulatory activity using polygraphic tachograph recordings. The functional relation between the EEG during sleep and cardiovascular activities, mainly heart beat, blood pressure (BP) and other important parameters during rest or sleep, was examined; breathing rate, eye movement and body movement were also tracked polygraphically [6]. In 2003, Ako et al. revealed the relationship among EEG and heart rate variability (HRV) during sleep utilizing polysomnography recordings, analyzing the consistent changes of autonomic nervous activity during sleep [7]. In 2008, Khandoker et al. reported the coordination between the sleep EEG and heart activity under sleep apnea events, considering the presence and absence of arousals. The determination depended on clinical symptoms and polysomnographic (PSG) results along with EEG results. They highlighted features of the active communication of the cardiac system and the brain in sleep-disordered breathing events in the presence or absence of arousals; the electric activities of the brain and heart during rapid and non-rapid eye movement sleep were also inspected around the pre-scored apnea events [9]. In 2003, Coccagna and Scaglione detailed cardiocirculatory disorders and sleep; cardiocirculatory changes during sleep in normal subjects have been broadly observed by recording a wide scope of physiological parameters. These recordings showed not only that the influence of the autonomic nervous system varied greatly in sleep compared with wakefulness, but also that it acted differently in NREM sleep and REM sleep [8]. In 2008, Kokonozi et al. presented the complexity of brain and heart rate and their association in sleep-deprived subjects using a correlation method [10]. In 2009, Lin et al. detailed the relationship between respiratory activity and the photo-plethysmogram. The outcomes exhibit that high coherence exists between breathing and the PPG signal, whereas the coherence disappears in breath-holding experiments; these outcomes suggest that the PPG signal reveals respiratory information [11]. In 2010, Yang et al. showed that single-trial electroencephalography (EEG) and electromyography (EMG) coherence investigation reveals muscle fatigue-related dynamic changes in cortico-muscular coupling; the motivation behind that examination was to address this issue by measuring single-trial EEG-EMG coherence and EEG and EMG power based on the wavelet transform [12]. In 2010, Hu et al. reported the recording reference contribution to EEG correlation, phase synchrony, and coherence; the experimental results show the notable effects of the reference on the bivariate measurements [13]. In 2014, Lin et al. revealed correlations between the signal complexities of cerebral


and cardiac electrical activity utilizing multiscale entropy analysis [14]. In 2014, Piper et al. reported on the time-variant coherence between heart rate variability and EEG activity in epileptic patients, using multivariate empirical mode decomposition (MEMD) and tensor decomposition; it can be stated that this analytical concept can be employed for the investigation of tvCOH between HRV-LF and EEG envelopes [15]. In 2017, Ramasamy and Varadan described heart–brain interactions through ECG signals, EEG signals and emotions utilizing music therapy and emotion-based analysis [18]. The functional association between ECG and EEG is investigated here by proposing two simple signal processing techniques, namely MSC and PC, presented in the next section. The statistical outcomes of the proposed methods can be further used to construct an intelligent decision-making system for the early detection of abnormalities.

3 Signal Processing-Based Coherence Analysis—Case Study

In this paper, two signal processing techniques are proposed, mainly magnitude squared coherence (MSC) and phase coherence (PC), to determine the functional association between heart and brain signals, namely ECG and EEG. These techniques involve obtaining the statistical parameters, mainly the maximum and minimum amplitudes, mean, standard deviation and median, of the acquired signals. The block diagram of the proposed signal processing technique is illustrated in Fig. 2. The PhysioBank database is used for the selection of ECG and EEG signals [31]. The cross-power spectral density (CPSD) method is used to obtain the statistical parameters, mainly the maximum and minimum amplitudes and the mean of the acquired signals. The mean of PC/MSC is normalized in the range [0, 1]. The maximum likelihood functional region among the four regions of the brain with respect to the heart signal is determined from the normalized mean of the coherence. The source code is developed using MATLAB R2018b [32]. The MSC between two signals is obtained by

$$C_{xy}(f) = \frac{\left| P_{xy}(f) \right|^2}{P_{xx}(f) \times P_{yy}(f)} \tag{1}$$

Fig. 2 Block diagram of the proposed signal processing technique for coherence analysis between heart and brain (subject → ECG and EEG signals → signal processing-based coherence analysis techniques (MSC, PC) → statistical parameters (mean, standard deviation, median) → final decision to evaluate the functional association between heart and brain)


where Pxx (f ) Pyy (f ) Pxy (f )

Fourier transform of autocorrelation function (ACF) of x, Fourier transform of autocorrelation function (ACF) of y and Fourier transform of cross-correlation function (CCF) of x and y

PC is the measure of the phase induced by the one signal to another signal at a particular frequency. It is measured in radians. The PC estimate between the signals x and y is calculated as  |θ ( f )| = tan

−1

  Im Px y ( f )   Re Px y ( f )

(2)

Mean is given as X=

ΣX n

(3)

Standard deviation is represented as  σ =

 2 Σ X−X n

(4)

where n = number of samples.
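For illustration, a minimal Python sketch of the MSC and PC estimates of (1) and (2) is given below; the paper's own implementation is in MATLAB R2018b [32], and the stand-in signals, segment length and band limits here are illustrative assumptions.

```python
# A minimal sketch of the MSC (Eq. 1) and PC (Eq. 2) estimates using
# Welch-averaged spectral densities; the stand-in signals and the
# segment length are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.signal import csd, welch

fs = 1000                                  # 1000 samples/second, as in the dataset
rng = np.random.default_rng(0)
t = np.arange(5006) / fs                   # 5006 samples per signal, as in the paper
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)

f, Pxy = csd(ecg, eeg, fs=fs, nperseg=1024)    # cross-power spectral density
_, Pxx = welch(ecg, fs=fs, nperseg=1024)       # auto power spectral densities
_, Pyy = welch(eeg, fs=fs, nperseg=1024)

msc = np.abs(Pxy) ** 2 / (Pxx * Pyy)           # Eq. (1), values in [0, 1]
pc = np.arctan2(Pxy.imag, Pxy.real)            # Eq. (2), phase in radians

band = (f >= 0.1) & (f <= 35)                  # the 0-35 Hz band of the case study
print("mean MSC:", msc[band].mean())
print("mean PC :", pc[band].mean())
```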

4 Results Sample data are taken from healthy persons in the age group of 18–35 years, at a sampling rate of 1000 samples/second, from the PhysioBank [31]. For our analysis, five samples of ECG and EEG signals are taken into account for the analysis of coherence. The statistical parameters of the five subjects are given in Table 1. Here, the subject is represented by S, the ECG signal is denoted as S1 and the EEG signal as S2. The maximum and minimum amplitudes of the S1 and S2 signals are used to calculate the mean, standard deviation (SD), and median.

4.1 MSC Analysis The MSC analysis is performed by selecting one subject out of five subjects. The investigation of the MSC between the corresponding EEG and ECG signals of the four regions of the brain, namely frontal, central, parietal, and occipital, is presented


Table 1 Statistical analysis of five healthy subjects under conscious state

S | Signal | Maximum amplitude (V) | Minimum amplitude (V) | Mean (V) | Standard deviation (±V) | Median (V)
1 | S1 | 0.09033 | −0.00549 | 0.01157 | 0.02092 | 0.00183
1 | S2 | 0.00305 | −0.00092 | 0.00090 | 0.00061 | 0.00092
2 | S1 | 0.09064 | −0.00580 | 0.01179 | 0.02142 | 0.00153
2 | S2 | 0.00488 | −0.00183 | 0.00090 | 0.00063 | 0.00092
3 | S1 | 0.09064 | −0.00549 | 0.01219 | 0.02287 | 0.00153
3 | S2 | 0.00397 | −0.00122 | 0.00089 | 0.00065 | 0.00092
4 | S1 | 0.09186 | −0.00702 | 0.01139 | 0.02227 | 0.00153
4 | S2 | 0.00397 | −0.00153 | 0.00087 | 0.00057 | 0.00066
5 | S1 | 0.09186 | −0.00641 | 0.01217 | 0.02276 | 0.00183
5 | S2 | 0.00427 | −0.00183 | 0.00090 | 0.00063 | 0.00034

in the following sections. The ECG and EEG signals of subject-1 are shown in Fig. 3, whereas the parameters for the investigation of MSC for subject-1 are shown in Table 2. In Fig. 4a, it is observed that the mean MSC value is 0.14019 in the 0–35 Hz bandwidth; one maximum MSC value, 0.99601, is at 0.1 Hz, and the other two maximum values of MSC are in the frequency range of 16–21 Hz. In Fig. 4b, it is observed that the mean value of MSC is 0.13861 in the same bandwidth; the maximum MSC value, 0.99281, is at 0.1 Hz, with another maximum at a

Fig. 3 a, b ECG signal and corresponding EEG (Fp1-Fp2) signal (each signal is sampled at the sampling rate 1000 samples/second and number of samples taken for each signal is 5006) of the subject-1. c, d ECG signal and corresponding EEG (C3-C4) signal. e, f ECG signal and corresponding EEG (P3-P4) signal. g, h ECG signal and corresponding EEG (O1-O2) signal


Table 2 Parameters of MSC investigation for subject-1 ECG and EEG signals

Region | Parameter | Mean | SD | Median
Frontal | MSC | 0.14019 | 0.13377 | 0.11068
Central | MSC | 0.13861 | 0.13516 | 0.10629
Parietal | MSC | 0.14399 | 0.14635 | 0.10466
Occipital | MSC | 0.15198 | 0.15084 | 0.11045

Fig. 4 a MSC between ECG and EEG (Fp1-Fp2) in the band width (0–35 Hz) for subject-1. b MSC between ECG and EEG (C3-C4). c MSC between ECG and EEG (P3-P4). d MSC between ECG and EEG (O1-O2)

frequency of 4.9 Hz and the last one at 31 Hz. In Fig. 4c, it is observed that the mean value of MSC is 0.14399 and the maximum value of MSC, 0.99142, is close to the frequency 0.1 Hz; three other maxima are found, one close to 7 Hz, the second close to 26 Hz and the third close to 35 Hz. In Fig. 4d, it is observed that the mean value of MSC is 0.15198 and the maximum value of MSC, 0.99663, is close to the frequency 0.1 Hz; the two coherence peaks are one close to 9.5 Hz and another close to 35 Hz.

4.2 PC Analysis The PC analysis is performed by selecting one subject out of the five subjects. The investigation of the PC between the ECG and the corresponding EEG signals of the four distinguished brain regions, the frontal, central, parietal, and occipital, is presented in the following section. The ECG and EEG signals of subject-1 are shown in Fig. 5, whereas the parameters for the investigation are shown in Table 3.


Fig. 5 a, b ECG signal and analogous EEG (Fp1-Fp2) signal (each signal is sampled at the sampling rate 1000 samples/second and number of samples taken for each signal is 5006) of the subject-1. c, d ECG signal and analogous EEG (C3-C4) signal. e, f ECG signal and analogous EEG (P3-P4) signal. g, h ECG signal and analogous EEG (O1-O2) signal

Table 3 Parameters of PC investigation for subject-1 ECG and EEG signals

Region | Parameter | Maximum | Minimum | Mean | Standard deviation | Median
Frontal | PC | 1.54025 | −1.56089 | 0.09108 | 0.86714 | 0.08964
Central | PC | 1.56003 | −1.47487 | −0.02502 | 0.92418 | 0.00092
Parietal | PC | 1.55448 | −1.55937 | 0.01754 | 0.91504 | 0.00183
Occipital | PC | 1.55119 | −1.56665 | 0.08314 | 0.94767 | −0.06879

In Fig. 6a, it is noticed that the mean of the PC values is 0.09108 in the bandwidth of 0–35 Hz. In Fig. 6b, it can be observed that the mean value of PC is −0.02502; from Fig. 6c, the mean value is 0.01754, and from Fig. 6d, it is 0.08314. The mean of PC is maximum when it is measured between the ECG signal and the corresponding EEG signals collected from the frontal region and the occipital region.

5 Conclusions In this paper, various coherence analysis techniques are presented for finding the correlation between the electrical activity of heart and brain signals. The necessity of signal processing techniques for coherence analysis is illustrated by a case study. The proposed signal processing techniques are magnitude squared coherence and phase


Fig. 6 a PC between ECG and EEG (Fp1-Fp2) in the band width (0–35 Hz) for subject-1. b PC between ECG and EEG (C3-C4). c PC between ECG and EEG (P3-P4). d PC between ECG and EEG (O1-O2)

coherence. The proposed methods are useful for the determination of brain and heart abnormalities. The future scope includes coherence analysis between brain and heart at various respiratory rates.

Acknowledgements The authors are profoundly thankful to Vellore Institute of Technology, Vellore, India, and V. R. Siddhartha Engineering College, Vijayawada, India.

References

1. McCraty, R.: Exploring the role of the heart in human performance. Sci. Heart 2, 70 (2016)
2. Saini, S.K., Gupta, R.: A review on ECG signal analysis for mental stress assessment. In: 6th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, pp. 915–918 (2019)
3. Billones, R.K.C., Bedruz, R.A.R., Caguicla, S.M.D.: Cardiac and brain activity correlation analysis using electrocardiogram and electroencephalogram signals. In: IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Baguio City, Philippines, pp. 1–6 (2018)
4. Schomer, D.L., Da Silva, F.L.: Niedermeyer's Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins (2012)
5. Zou, R., Shi, W., Tao, J., Li, H., Lin, X., Yang, S., Hua, P.: Neurocardiology: cardiovascular changes and specific brain region infarcts. Bio-Med. Res. Int. (2017)
6. Asahina, K., Matsui, R.: Relationship between sleep EEG and circulatory activities. Jpn. J. Physiol. 12(2), 124–128 (1961)
7. Ako, M., Kawara, T., Uchida, S., Miyazaki, S., Nishihara, K., Mukai, J., Hirao, K., Ako, J., Okubo, Y.: Correlation between electroencephalography and heart rate variability during sleep. Psychiatry Clin. Neurosci. 57(1), 59–65 (2003)
8. Coccagna, G., Scaglione, C.: Cardiocirculatory disorders and sleep. In: Sleep, pp. 589–597. Springer, Boston (2003)
9. Khandoker, A.H., Karmakar, C.K., Palaniswami, M.: Interaction between sleep EEG and ECG signals during and after obstructive sleep apnea events with or without arousals. In: Computers in Cardiology, pp. 685–688 (2008)
10. Kokonozi, A.K., Michail, E.M., Chouvarda, I.C., Maglaveras, N.M.: A study of heart rate and brain system complexity and their interaction in sleep-deprived subjects. In: Computers in Cardiology, pp. 969–971 (2008)
11. Lin, Y.D., Liu, W.T., Tsai, C.C., Chen, W.H.: Coherence analysis between respiration and PPG signal by bivariate AR model. World Acad. Sci. Eng. Technol. 53, 847–852 (2009)
12. Yang, Q., Siemionow, V., Yao, W., Sahgal, V., Yue, G.H.: Single-trial EEG-EMG coherence analysis reveals muscle fatigue-related progressive alterations in corticomuscular coupling. IEEE Trans. Neural Syst. Rehabil. Eng. 18(2), 97–106 (2010)
13. Hu, S., Stead, M., Dai, Q., Worrell, G.A.: On the recording reference contribution to EEG correlation, phase synchrony, and coherence. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 40(5), 1294–1304 (2010)
14. Lin, P.F., Lo, M.T., Tsao, J., Chang, Y.C., Lin, C., Ho, Y.L.: Correlations between the signal complexity of cerebral and cardiac electrical activity: a multiscale entropy analysis. PLoS ONE 9(2), e87798 (2014)
15. Piper, D., Schiecke, K., Pester, B., Benninger, F., Feucht, M., Witte, H.: Time-variant coherence between heart rate variability and EEG activity in epileptic patients: an advanced coupling analysis between physiological networks. New J. Phys. 16(11), 115012 (2014)
16. Chiu, H.C., Lin, Y.H., Lo, M.T., Tang, S.C., Wang, T.D., Lu, H.C., Ho, Y.L., Ma, H.P., Peng, C.K.: Complexity of cardiac signals for predicting changes in alpha-waves after stress in patients undergoing cardiac catheterization. Sci. Rep. 5, 13315 (2015)
17. Mensen, A., Zhang, Z., Qi, M., Khatami, R.: The occurrence of individual slow waves in sleep is predicted by heart rate. Sci. Rep. 6(1), 1–8 (2016)
18. Ramasamy, M., Varadan, V.K.: Study of heart-brain interactions through EEG, ECG, and emotions. In: Nanosensors, Biosensors, Info-Tech Sensors and 3D Systems, International Society for Optics and Photonics, vol. 10167, 101670I (2017)
19. Nakahata, Y., Hagiwara, H.: Relationship between EEG and ECG findings at rest and during brain activity. In: International Conference on Applied Human Factors and Ergonomics, pp. 285–294. Springer, Cham (2017)
20. Khandoker, A.H., Karmakar, C.K., Palaniswami, M.: Interaction between sleep EEG and ECG signals during and after obstructive sleep apnea events with or without arousals. In: Computers in Cardiology, The University of Melbourne, Victoria, Australia, pp. 685–688 (2008)
21. Singh, G., Gupta, V., Singh, D.: Coherence analysis between ECG signal and EEG signal. Int. J. Electron. Commun. Technol. 1, 25–28 (2010)
22. Abdullah, H., Maddage, N.C., Cosic, I., Cvetkovic, D.: Cross-correlation of EEG frequency bands and heart rate variability for sleep apnoea classification. Med. Biol. Eng. Comput. 48(12), 1261–1269 (2010)
23. Golińska, A.K.: Coherence function in biomedical signal processing: a short review of applications in neurology, cardiology and gynecology. Stud. Logic Grammar Rhetoric 25(38), 73–82 (2011)
24. Singh, G., Singh, C.: Estimation of coherence between ECG signal and EEG signal at different heart rates and respiratory rates. Int. J. Eng. Innovative Technol. 1(5), 159–163 (2012)
25. Chaudhary, A., Chauhan, B., Singh, G., Jain, M.: Coherence among electrocardiogram and electroencephalogram signals as non-invasive tool of diagnosis. Int. J. Eng. Res. Gen. Sci. 1(2), 3 (2014)
26. Capon, J.: High-resolution frequency-wavenumber spectrum analysis. Proc. IEEE 57(8), 1408–1418 (1969)
27. Benesty, J., Chen, J., Huang, Y.: A generalized MVDR spectrum. IEEE Signal Process. Lett. 12(12), 827–830 (2005)
28. Benesty, J., Chen, J., Huang, Y.: Estimation of the coherence function with the MVDR approach. In: IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, vol. 3, pp. III-III (2006)
29. Zheng, C., Zhou, M., Li, X.: On the relationship of non-parametric methods for coherence function estimation. Sig. Process. 88(11), 2863–2867 (2008)
30. Yapanel, U.H., Hansen, J.H.: A new perceptually motivated MVDR-based acoustic front-end (PMVDR) for robust automatic speech recognition. Speech Commun. 50(2), 142–152 (2008)
31. PhysioNet: the research resource for complex physiologic signals. www.physionet.org
32. MATLAB R2018b. www.mathworks.com

A Review on Dimensionality Reduction in Fuzzy- and SVM-Based Text Classification Strategies

Shalini Puri

Abstract Due to the increased level of the big-sized documents in recent years, dimensionality reduction has become a significant step in the text document processing and classification systems. The high data dimensions in such documents make the classifier learning difficult and also increase the computation time exponentially. Text document classification systems optimally apply various searching and reduction methods on the feature set, thereby reducing the computational overhead, cost and complexity of the system, and also improving the text classification accuracy and execution time. In this direction, this article is designed to analyze several feature reduction and dimensionality reduction parameters of various fuzzy and support vector machine-based text document classification systems. The findings from their comparative studies show that the feature extraction and the support vector machine became the primary choice of the existing systems. Their % usage is observed as 43.3% and 26.08%, respectively, among all other existing methods. Keywords Dimensionality reduction · Feature extraction · Feature selection · Fuzzy analysis · Support vector machine

1 Introduction In today's world, dimensionality reduction (DR) or attribute reduction has become an essential and integral part of every text document processing and classification system. The feature reduction approaches such as feature extraction and feature selection are primarily used to obtain the relevant and important features in optimization-based and real-time-based systems. On the other side, the feature clustering approach has evolved as an ideal alternative to the feature selection approach, which reduces the high-dimension data into lower one greatly. To see the role and significance of these approaches in such systems, this article is designed to review and compare their key parameters of DR and classification results, based on the support vector machine (SVM) and fuzzy logic. On one side, the SVM is used as a binary and S. Puri (B) Poornima College of Engineering, Jaipur, Rajasthan, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_49


multi-class classifier, whereas fuzzy logic works with uncertainty, membership, controllers and rule framing. With this, the analytical results of this review show the scope and the need for advancement. This paper is organized as follows. Section 2 discusses an overview of various existing text document classification systems. Section 3 provides two comparative studies and their observations, based upon the DR and classification results of the existing systems. Section 4 shows the analytical results and findings obtained from both studies. Section 5 includes the conclusion and future scope.

2 An Overview of Existing Text Classification Strategies Over three decades, various text document classification strategies using SVM and fuzzy logic have been introduced, which not only classified the text documents with high accuracy, but also greatly reduced the high dimensions of the data or features into low dimensions. This section describes many such existing algorithms and methods, which achieved effective DR with efficient implementation of such systems. The method in [1] included a statistical language-, context- and frequency-independent approach using the English, German and Portuguese languages. This approach extracted the strong concepts from the relevant single-word and multi-word units by first finding the fixed distances between concepts and then classifying them as either a single-word or a multi-word. The text categorizer in [2] used the combination of distributional clustering and the learning logic Lsquare: the distributional clustering was used for the text document projection on word clusters, and Lsquare was used for text classification. Ref. [3] used fuzzy C-means (FCM)-based clustering to first reduce the large-sized numerical data into C intervals and then produced effective results; it evaluated the fuzziness of granular principals with a parametric quantitative index. Six algorithms, Sammon's algorithm (SAM), principal component analysis (PCA), original neural network (ONN), modified NN (MNN), fuzzy rules extracted by Takagi–Sugeno (FRTS) and fuzzy rules extracted by Mamdani–Assilian (FRMA), were implemented in [4] for text structure preservation. In this, [4] performed the input augmentation with corresponding projected data points; augmented the dataset clustering with FCM; and translated them into fuzzy rules to approximate Sammon's nonlinear projection with steepest descent. The unsupervised dimension reduction and text categorization method of [5] encoded the documents into a large term-document matrix. Here the dimensions were first reduced using PCA, multi-dimensional scaling (MDS), ISOmetric MAPping (ISOMAP), locally linear embedding (LLE), and Lafon's diffusion maps (LDM); then the documents were categorized into different categories. The hybrid Gaussian mixture and k-nearest neighbor (KNN)-based data reduction model [6] selected the non-overlapping slices; constructed the Gaussian model; obtained the kernel matrix; built the generalized


eigen-decomposition; estimated the sub-space; searched the nearest neighbors; and finally applied the mixture model to the classification. The fuzzy logic-based improved keyword extraction framework (FL-IKEF) was designed to reduce the number of keywords by solving synonym, homonym, hyponymy and polysemy (SHHP) issues [7]. It used the steps of pre-processing; word frequency and noun extraction with the Qtag tool; noun comparison; dissimilarity calculation; single-word replacement for similar keywords; hyponym word comparison using WordNet; and finally keyword extraction. Another DR-based algorithm [8] first extracted the attributes and then evaluated their impact on the text classification; it extracted the words from the news bodies of Reuters-21578 and then projected them onto a new hyperplane having dimensions equal to the number of classes. The fast incremental text classification approach with fuzzy self-constructing feature clustering [9] used three weighting feature extraction approaches, namely hard fuzzy feature clustering (H-FFC), soft FFC (S-FFC), and mixed FFC (M-FFC). The worst-case time complexities of the proposed, divisive clustering (DC), information gain (IG), and incremental orthogonal centroid (IOC) methods were found to be O(mkp), O(mkpt), O(mp + m log m) and O(mkpn), respectively, where m, k, t and n represent the original features, the clusters, the number of iterations, and the number of documents, respectively. The weight-based feature extraction method [10] first obtained the synthesized attributes from the original ones using a transformation matrix and then performed the classification on the converted data. Another unsupervised method [11] extracted the most relevant features using singular value decomposition (SVD). The descriptive feature extraction and classification method [12] extracted and classified the non-noisy efficient features by computing each feature's contribution; for this, it used Zipf's law, improved term frequency–inverse document frequency category frequencies (TF/IDF-CF), and statistical analysis of the features' classified information. Next, a fuzzy-rough feature selection method using fuzzy entropy [13] was introduced, which produced smaller feature sub-sets and achieved good overall classification accuracy. This work [13] was further extended in [14] with the use of lower approximation dependency and a distance metric, thereby using the boundary region objects. Its [13] next advancement [15] used DMQUICKREDUCT along with distance and rough-set dependency values, and it [15] achieved (n * n + n)/2 complexity with n dimensions. It also compared three rough set approaches, distance-metric-assisted rough-set attribute reduction (DMRSAR), variable precision rough sets (VPRS) and the tolerance rough set model (TRSM), to use the similarity relation for data minimization. Here DMRSAR and VPRS were used to include fuzziness in rough sets and to utilize the boundary region information. The next method [16] used the fuzzy enhancing support vector decision function (fuzzy ESVDF) to select the features; to evaluate their weights using SVDF; and to apply fuzzy inferencing for ranking the features. By including both feature selection and feature extraction, the fuzzy rough set-based algorithm [17] measured the relevance and significance quantitatively. Then it obtained the maximum relevance and maximum significance (MRMS) of the reduced feature set to finally remove the redundancy. Ref.
[17] was extended in [18] by including Interval Type-2 (IT2) fuzzy rough set-based feature selection for MRMS and by obtaining the lower and


upper fuzzy equivalence partition matrices. The methodology of [18] was further compared with Maximal-Relevance, QUICKREDUCT (Max-Dependency and rough sets), fuzzy-rough QUICKREDUCT (Max-Dependency and fuzzy rough sets), neighborhood QUICKREDUCT (Max-Dependency and neighborhood rough sets), minimal-Redundancy Maximal-Relevance (mRMR), fuzzy rough set-based mRMR, and PCA. The hybrid weighted IT2-based fuzzy rough QUICKREDUCT algorithm [19] reduced the features for interval-valued data with rule extraction; it used weights to reduce the effect of unknown information. The text document categorizer [20] first computed the features' importance and then categorized the documents by using four context similarity (cs)-based feature selection methods, namely Gini Index cs (GIcs), Document Frequency cs (DFcs), Class Discriminating Measure cs (CDMcs) and Accuracy-based context similarity (Acc2cs). A review on semantics-preserving DR [21] discussed five crisp rough set reduction techniques: Rough-Set Attribute Reduction (RSAR using QUICKREDUCT), Entropy-Based Reduction (EBR), Genetic algorithm-based RSAR (GenRSAR), Ant-based RSAR (AntRSAR), and Simulated annealing-based RSAR (SimRSAR). The text classification with nonlinear DR [22] first evaluated the document similarity by using geodesic distances; mapped the high-dimensional text into a low-dimensional ISOMAP embedding; and then classified the reduced data using SVM. The fuzzy similarity-based concept mining model for text classification [23] was designed to classify the text documents into pre-defined classes. It first pre-processed the documents on the sentence, document and integrated corpora levels with threshold-based feature reduction, and then classified them using SVM. Its extension [24] used feature clustering to obtain more feature reduction. Furthermore, a survey on the existing fuzzy similarity-based approaches of [23, 24] was presented in [25], where distinct parameters of the existing approaches, such as performance, accuracy, speed, time and storage, were reviewed; it also discussed many methods such as feature clustering, fuzzy association, fuzzy production rules, fuzzy clustering, FCM and fuzzy signatures. Another text categorization algorithm [26] used the QHull algorithm, FCM and SVM to eliminate and identify the outliers based upon the movement of the convex hull data center. The meaningful features were first selected and reduced to purify the data size, and then these purified cluster centers were used to train the SVM. To classify mixed integer, nominal and continuous attributes using SVM, the agglomeration FCM-based method [27] first reduced the cluster center randomness using an economical validity index for the compactness and separation of the clusters, and then it classified them. The kernel fuzzy-based SVM method [28] tuned the kernel parameters with k-fold cross-validation under various kernel functions. Another text classification method [29] pre-processed the documents; extracted the features; reduced them using PCA; and then classified them using a neural network. An integrated mechanism of feature selection and fuzzy rule extraction [30] first discarded the bad and indifferent features; it considered the nonlinear interactions among the features, and between the features and the tools, and then performed the classification.


It is observed that all these existing systems primarily used feature extraction and feature selection methods to design their fuzzy and SVM-based classification systems. Therefore, it is stated that feature reduction and DR methods are of great significance to effectively design the text document classification systems; a representative sketch of this dominant pipeline is given below.
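To make the dominant pattern concrete, the following hypothetical Python sketch (using scikit-learn, and not taken from any of the surveyed papers) combines TF-IDF feature extraction, SVD-based DR, and an SVM classifier; the dataset and the number of retained dimensions are illustrative assumptions.

```python
# A minimal, hypothetical sketch of the surveyed pipeline: TF-IDF feature
# extraction, SVD-based dimensionality reduction, and SVM classification.
# The dataset (downloaded by scikit-learn) and dimensions are illustrative.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # feature extraction: term weights
    TruncatedSVD(n_components=300),          # DR: project onto 300 latent dimensions
    LinearSVC(),                             # SVM classification
)
clf.fit(train.data, train.target)
print("test accuracy:", clf.score(test.data, test.target))
```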

3 Comparative Studies on DR and Classification Results of Existing Strategies Section 2 discussed many algorithms and strategies related to the text document classification systems along with the use of different DR techniques. This section presents two detailed comparative studies based upon those classification systems. The first study analyzes and compares distinct DR-based aspects of the systems, and the second study includes the comparative analysis of their classification methods and experimental results.

3.1 Study I: A Review Based upon Various DR-Based Key Aspects Study I provides a detailed review of the various DR techniques and related aspects used in the existing systems. Table 1 depicts the DR type, the DR method, the DR % achieved, and the significant aspects of these algorithms. It is seen that they used several types of DR methods, such as feature extraction, feature selection, feature clustering, fuzzy clustering, rule extraction, and Gaussian mixture models. The DR methods used in these systems are FCM clustering, fuzzy rough sets, FCM, PCA, MDS, ISOMAP, LLE, LDM, SVD, Gaussian models, thresholds, and their variations. Table 1 depicts them along with their significant aspects. It is observed from Study I that most of the existing works used feature extraction and feature selection methods as the primary attribute reduction methods. Feature clustering methods were also used in some works. It is also seen that fuzzy-based clustering methods reduced the data dimensions more than the other methods. The significant aspects include the important related concepts and parameters of the algorithms.

3.2 Study II: A Review on Datasets Used and Classification Results Study II discriminates these existing methods on the basis of classifier usage, datasets, experimental results and different parameters. These results are compared and shown

Table 1 Comparing DR types, DR methods, DR % achievement, and significant aspects of existing methods

Ref. No. | DR type | DR method and DR achieved (%) | Significant aspects
[1] | Feature clustering | 90% single-word and 85% multi-word concepts | Word: a concept or a non-concept. No use of morpho-syntactic information. Found consistent words' relative specificity for all languages
[2] | Feature extraction | Information bottleneck for word similarity. High attribute reduction | Minimum use of aggressive feature selection. No information loss. Use of linear SVM-light for decision surface. Used 120 clusters to compute performance and time
[3] | Feature extraction | FCM clustering (granularity). High reduction | Improved noise robustness and efficiency
[4] | DR | FCM. High attribute reduction | Extraction of hyper-spherical clusters. Low-cost DR. More predictable and computationally efficient than Sammon's method. High outlier detection. Obtained from time-invariant probability distribution. Tuning of FRTS parameters with least square error, fixed antecedent membership, and gradient descent. Tuning of FRMA parameters of antecedent and peak membership using gradient descent
[5] | Gaussian model | PCA, MDS, ISOMAP, LLE and LDM. Good with dimensions varying between 700 and 3300 | Reduced noisy impact. Preserved inter-document relationships. Improved training and testing time. Helps in literature-based discovery or sentiment analysis
[6] | Feature extraction | SVM for data augmentation. Good DR | Varying reduction with PCA and factor analysis. Fast method
[7] | — | FCM clustering. Good DR | Word frequency computation and noun extraction. Use of defuzzification


[8] | Feature extraction | Good attribute reduction | Used porter stemmer, bag of words and term weight. No use of eigenvalues, vectors and SVD. Dimension No. = class number
[9] | Feature extraction | Incremental fuzzy feature clustering (FFC). Attribute reduction | Automatic creation of fuzzy similarity-based clusters. Non-trial-and-error method to get total extracted features. Same contribution of all words in a cluster. Used 1-D Gaussian functions with clusters. Found statistical mean and deviation for each cluster. Threshold-based cluster size with number of clusters. No need of variance in underlying cluster. In FFC, 0 is set as 0.25
[10] | Feature extraction | High feature reduction | Non-trial-and-error method to get number of extracted features. Had small computations and fast method. Number of extracted features = Number of document classes
[11] | Feature extraction | PCA-SVD. High attribute reduction | Used SVD uniquely. Ranking the constraints with SVD. Better performance of Euclidean difference-based k-ranked SVD to reduce relative errors. Had minimum reconstruction error. Used surrogate constraint (second norm). Need of 80–90% threshold to select minimum features. Cannot use SVD directly


[12] | Feature extraction | Global and partial extraction. Good attribute reduction | Use of average 56 features. 1% z-low, z-high and document frequency threshold. 90% rate of category-emergence. 0.0005 CF and 0.01 adjustment coefficient in TF/IDF-CF. 500 neighbors and β = 1 for word frequency
[13] | Feature selection | Fuzzy entropy-based rough sets. Good DR | Small subset sizes. Improved prediction performance
[14] | Feature selection | Distance metric-based rough sets. Good DR | Same as of [13]
[15] | Feature selection | Rough sets. Good DR | Used indiscernibility. No exhaustive generation. Rely on data information. TRSM and DMRSAR thresholds: 0.9 and 0.95, respectively
[16] | Feature selection | SVD and forward selection. Good DR | Used feature ranking. Computed feature weights and fuzzy inference using SVDF
[17] | Feature selection and extraction | Fuzzy rough sets. Good DR | Obtained small feature set in less time. Used η and r to control the class overlapping and feature set size
[18] | Feature selection | Same as of [17] | Obtained small feature set in less time. Multiplicative parameter η = 1.5 and weight parameter ω = 0.5
[19] | Feature selection | Same as of [17] | Weight enhanced the prior knowledge and reduced the unknown effects
[20] | Feature selection | GIcs, DFcs, CDMcs, and Acc2cs. High DR | —


[21] | Feature selection | Rough sets and fuzzy rough sets. Good DR | —
[22] | Non-linear DR | ISOMAP. Good DR | —
[23] | Feature extraction | Threshold method. 40% | —
[24] | Feature extraction | Feature clustering. 60% | —
[26] | Fuzzy clustering | FCM. Good reduction | —
[27] | Fuzzy clustering | FCM. Good reduction | —
[28] | Feature extraction | Kernel fuzzy-based SVM. Good DR | Used 10-, 20- and 30-fold cross-validations
[29] | Feature extraction | PCA. High attribute reduction | Used vector space models. Determined hyper-box using minimum and maximum points of the hyper-box
[30] | Feature selection and rule extraction | K-means with Gaussian membership function. Good DR | Created minimum fuzzy rules with genetics. Used feature modulator, error function, and gradient descent for learning. Used threshold for modulators. No parameter tuning of rule base

Table 2 (continued)

Ref. No. | Classifier and data sets | Parameters and classification results
[9] | SVM aggregation. 20 NGs, Reuters Corpus Volume 1 (RCV1) and Cade12 | If attribute range M-FFC > H-FFC > DC. µ-averaged precision (MicroP): H-FFC > S-FFC. µ-averaged recall (MicroR): S-FFC > H-FFC. Good M-FFC results with MicroP, MicroR, and MicroF1. RCV1 processing speed: faster FFC than DC and IOC. High feature extraction made DC and IOC slow. Classification accuracy (%): for 18 features, worst IG (96.95) < S-FFC (98.39) < M-FFC (98.31) < H-FFC (98.13). All 3 FFCs performed well at all times. Total accuracy is 98.83%. Performance: MicroF1: S-FFC > M-FFC > DC > H-FFC. MicroP: 80% to 85% for all methods. Good results of M-FFC with MicroP, MicroR, and MicroF1. Cade12 processing speed: faster FFC than DC and IOC. Classification accuracy (%): worst IG (7.04) for MicroR, S-FFC, and M-FFC. H-FFC was a little bit worse than IG in MicroP, but was good in MicroR. No accuracy for IOC when number of extracted features >22. Performance: S-FFC > M-FFC > H-FFC > DC. All methods were good for micro-accuracy, but no method worked satisfactorily for MicroP, MicroR, and MicroF1. Hard to get feature reduction with Cade12


[10] | 20 NGs. Top 10 datasets of Reuters 21578 | NGs: more accurate and faster than DC and IG. Accuracy (%) and execution time (s) with 20 features: 88.18%. Accuracy < full-feature Naive Bayes (88.4%); > DC (78.54%); and > IG (18.34%). Time: 1300.8 s, which was better than IG (1337.3 s) and DC (1817.3 s). Accuracy (%) and execution time (s) with >5000 features: DC performance was better than the proposed method at the cost of time; IG was better than the proposed method with all features. Reuters 21578: accuracy (%) and execution time (s) with 10 features: achieved 83.75%, which was 2.52% lower than full-feature Naive Bayes (86.27%); > DC (80.2%); and > IG (55.06%). Time: 492.1 s, which was better than IG (513.6 s) and DC (563.3 s). Accuracy (%) and execution time (s) with >200 features: better DC performance than the proposed method at the cost of time (approx. > 1898.9 s) and better IG than the proposed method when using full features
[11] | Two different datasets | Good
[12] | Improved KNN. NGs | Computed precision and recall. Achieved high classification accuracy and was better than KNN
[13] | J48, JRip, partial decision trees. Water 2, water 3, Cleveland, glass, heart, ionosphere, iris, olitos, and wine | Good accuracy
[14] | Fuzzy classifier QSBA. All datasets of [13] except ionosphere | High accuracy


[15] | JRip, J48, and partial decision trees. All datasets of [13] except iris. Data size: 47–2000 objects and 7–57 attributes | Increased classification accuracies by 30% with DR in short run time. Better performance than PCA. Faster DMRSAR than fuzzy rough feature selection (FRFS) for smaller subset sizes. Low classification accuracy of FRFS and DMRSAR for unreduced data. In many cases, TRSM outperformed FRFS and DMRSAR. Got poor TRSM classification accuracy. In Hausdorff metric: got best DMRSAR results in all cases; obtained 566 s runtime with LED; and failed to get useful information at boundaries
[16] | SVM. Text documents | Satisfactory performance with low false-positive rate. Time efficient at training and testing phases
[17] | SVM, C4.5, and KNN. Kent Ridge Bio-medical repository: satimage, segmentation, isolet, and multiple features of UCI machine learning repository. Breast cancer I, colon cancer, lung cancer, leukemia I, breast cancer II, and leukemia II | Achieved best classification accuracy with all three classifiers. Proposed DR was better than PCA, max-relevance, max-dependency and MRMS. Achieved highest accuracy with 10 training–testing sets. Fast method; computationally complex; and based on significant and insignificant features. Used ten-fold cross-validation
[18] | SVM, KNN, and C4.5 decision tree. Satimage, segmentation, isolet, multiple features, breast cancer I, colon cancer, lung cancer, leukemia I & leukemia II | Obtained best classification accuracy with SVM and C4.5 in all cases. Better MRMS performance than max-dependency and max-relevance for all cases. Better IT2 fuzzy rough sets performance than IT1 sets for all cases. Achieved best classification accuracy with MRMS feature selection using IT2 fuzzy-rough sets. Classification accuracy (%): KNN: 86.6/12, 86.0/10, 90.2/36; SVM: 86.2/25, 91.8/11, 90.2/42; and C4.5: 86.9/25, 91.1/16, 88.4/19 for satimage, segmentation, and leukemia II, respectively
[20] | SVM. DataBCII and Reuters-21578 | Outperformance of proposed approach. Better than frequency-based methods with µ- and macro-F1 measures
[21] | Rough sets. 13 text document sets | Execution time-based ordering: RSAR (minimum) < EBR < SimRSAR < AntRSAR < GenRSAR (maximum)


[22] | SVM. Demo dataset from osu_svm3.00 and 20 NGs | 1. Minimum accuracy with minimum dimension: 53%; training and testing time: 0.609 and 0.108 s, respectively. 2. Maximum accuracy with maximum dimension: 88%; training and testing time: 1.47 and 0.391 s, respectively. In 20 NGs: 68.75% minimum accuracy and 74% maximum accuracy. Computation time (Dim = 2): training in 0.529 s and testing in 0.061 s. Computation time (Dim = 100): training in 2.91 s and testing in 0.142 s. Reduced execution efficiency of the classifier. Very time consuming with increased number of dimensions
[23] | SVM. Text documents | Good system performance
[24] | SVM. Text documents | Better system performance than [23]. 70–75% with high quality
[26] | SVM. Balance, four class, SVM guide and banana of UCI database | Good classification accuracy. For 4 datasets, accuracy % (normal SVM, FCM-SVM, proposed): (95.19, 96.15, 99.22), (99.48, 98.78, 99.76), (95.35, 94.69, 95.80), and (94.58, 95.11, 95.28). Training time in s (normal SVM, FCM-SVM, proposed): (1.79, 3.34, 4.08), (2.97, 5.48, 5.7), (113.02, 79.30, 62.47) and (76.4, 40.73, 27.13)
[27] | SVM. Balance scale, servo, lense, cancer, pima Indian, and heart | Better accuracy of nominal (81.2%) than original data (74%). Classifier accuracy (%) in 6 original datasets: 97, 18, 90, 89.3, 75, and 80; and in 6 nominal datasets: 97, 18, 90, 94.3, 96, and 92
[28] | Fuzzy SVM. Good datasets | Significant and good results with nonlinearly separable data. Used kernel-based learning algorithm
[29] | Multi-level fuzzy min-max neural networks. Reuters 21578 and 20 NGs | 94% accuracy in Reuters 21578 and 95% in NGs. Less execution time than other supervised methods
[30] | Fuzzy. Synthetic datasets; iris, Wisconsin diagnostic breast cancer, wine, heart, ionosphere, sonar, and glass from the California Univ. machine learning DB | Good and improved accuracy due to feature elimination. 0% misclassification rate in all datasets with ten-fold cross-validation


Fig. 1 Percentage usage of feature and dimension reduction methods in existing algorithms

Fig. 2 Percentage usage of different classifiers in existing algorithms

each; and 2.17% usage of classifiers such as random forest, Lsquare, linear, quadratic, naive Bayes, concept-based, LDA, and graph-based each.


5 Conclusion and Future Scope

This review provided a description and comparative analysis of the DR-based methods used in many SVM- and fuzzy-based text document classification systems. It was presented as two detailed studies in terms of many key parameters, such as DR, attribute reduction, classification methods, results, and their related concerns. The observations from the studies show that most of the techniques used feature extraction as the primary reduction method, and SVM and fuzzy classifiers as the most preferred classifiers. Although these reduction and classification results were found promising, there is still scope for designing a highly efficient classification system with the addition of strong dimension and feature reduction. On the other hand, the hybridization of fuzzy and SVM techniques can provide very fruitful solutions for the accurate and efficient classification of textual data. In future, this review can be expanded to analyze the features of images and videos in bilingual, trilingual, and multilingual environments.

References

1. Ventura, J., Silva, J.: Mining concepts from texts. Procedia Comput. Sci. 9, 27–36 (2012)
2. Al-Mubaid, H., Umair, S.A.: A new text categorization technique using distributional clustering and learning logic. IEEE Trans. Knowl. Data Eng. 18(9), 1–10 (2006)
3. Zhang, H., Pedrycz, W., Miao, D.: Fuzzy granular principal curves algorithm for large data sets. In: Joint IFSA World Congress and NAFIPS Annual Meeting, pp. 956–961. IEEE Press (2013)
4. Pal, N.R., Eluri, V.K., Mandal, G.K.: Fuzzy logic approaches to structure preserving dimensionality reduction. IEEE Trans. Fuzzy Syst. 10(3), 277–286 (2002)
5. Underhill, D.G., McDowell, L.K., Marchette, D.J., Solka, J.L.: Enhancing text analysis via dimensionality reduction. In: International Conference on Information Reuse and Integration, pp. 348–353. IEEE Press (2007)
6. Uhm, D., Jun, S., Lee, S.-J.: A classification method using data reduction. Int. J. Fuzzy Logic Intelligent Syst. 12(1), 1–5 (2012)
7. Sheeba, J.I., Vivekanandan, K.: A fuzzy logic based improved keyword extraction from meeting transcripts. Int. J. Comput. Sci. Eng. 6, 287–299 (2014)
8. Biricik, G., Diri, B., Sönmez, A.C.: A new method for attribute extraction with application on text classification. In: Fifth International Conference on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control, pp. 1–4. IEEE Press (2009)
9. Jiang, J.-Y., Liou, R.-J., Lee, S.-J.: A fuzzy self-constructing feature clustering algorithm for text classification. IEEE Trans. Knowl. Data Eng. 23(3), 335–349 (2011)
10. Jiang, J.-Y., Lee, S.-J.: A weight-based feature extraction approach for text classification. In: Second International Conference on Innovative Computing, Information and Control, pp. 164–167. IEEE Press (2007)
11. Modarresi, K.: Unsupervised feature extraction using singular value decomposition. Procedia Comput. Sci. 51, 2417–2425 (2015)
12. Li, Y., Sheng, Y., Luan, L., Chen, L.: A text classification method with an effective feature extraction based on category analysis. In: Sixth International Conference on Fuzzy Systems and Knowledge Discovery, pp. 95–99. IEEE Press (2009)


13. Parthalain, N.M., Jensen, R., Shen, Q.: Fuzzy entropy-assisted fuzzy-rough feature selection. In: International Conference on Fuzzy Systems, pp. 423–430. IEEE Press (2006)
14. Parthalain, N.M., Shen, Q., Jensen, R.: Distance measure assisted rough set feature selection. In: International Fuzzy Systems Conference, pp. 1–6. IEEE Press (2007)
15. Parthalain, N.M., Shen, Q., Jensen, R.: A distance measure approach to exploring the rough set boundary region for attribute reduction. IEEE Trans. Knowl. Data Eng. 22(3), 305–317 (2010)
16. Zaman, S., Karray, F.: Features selection using fuzzy ESVDF for data dimensionality reduction. In: International Conference on Computer Engineering and Technology, pp. 81–87. IEEE Press (2009)
17. Maji, P., Garai, P.: Fuzzy–rough simultaneous attribute selection and feature extraction algorithm. IEEE Trans. Cybern. 43(4), 1166–1177 (2013)
18. Maji, P., Garai, P.: IT2 fuzzy-rough sets and max relevance-max significance criterion for attribute selection. IEEE Trans. Cybern. 45(8), 1657–1668 (2015)
19. Hsiao, C., Chuang, C., Jeng, J., Su, S.: A weighted fuzzy rough sets based approach for rule extraction. In: The SICE Annual Conference, pp. 104–109. IEEE Press (2013)
20. Chen, Y., Han, B., Hou, P.: New feature selection methods based on context similarity for text categorization. In: 11th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 598–604. IEEE Press (2014)
21. Jensen, R., Shen, Q.: Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches. IEEE Trans. Knowl. Data Eng. 16(12), 1457–1471 (2004)
22. Shi, L., Zhang, J., Liu, E., He, P.: Text classification based on nonlinear dimensionality reduction techniques and support vector machines. In: Third International Conference on Natural Computation, pp. 674–677. IEEE Press (2007)
23. Puri, S.: A fuzzy similarity based concept mining model for text classification. Int. J. Adv. Comput. Sci. Appl. 2, 115–121 (2011)
24. Puri, S., Kaushik, S.: An enhanced fuzzy similarity based concept mining model for text classification using feature clustering. In: IEEE Students Conference on Engineering and Systems, pp. 1–6. IEEE Press (2012)
25. Puri, S., Kaushik, S.: A technical study and analysis on fuzzy similarity based models for text classification. Int. J. Data Min. Knowl. Manag. Process 2, 1–15 (2012)
26. Almasi, O.N., Rouhani, M.: Fast and de-noise support vector machine training method based on fuzzy clustering method for large real world datasets. Turk. J. Electr. Eng. Comput. Sci. 24, 219–233 (2016)
27. Bano, S., Bandhekar, S.: Fuzzy clustering based data reduction for improvement in classification. Int. J. Innovative Res. Comput. Commun. Eng. 4(5), 1111–1118 (2016)
28. Arumugam, P., Jose, P.: Recent advances on kernel fuzzy support vector machine model for supervised learning. In: International Conference on Circuits, Power and Computing Technologies, pp. 1–5. IEEE Press (2015)
29. Zobeidi, S., Naderan, M., Alavi, S.E.: Effective text classification using multi-level fuzzy neural network. In: 5th Iranian Joint Congress on Fuzzy and Intelligent Systems, pp. 91–96. IEEE Press (2017)
30. Chen, Y.-C., Pal, N.R., Chung, I.-F.: An integrated mechanism for feature selection and fuzzy rule extraction for classification. IEEE Trans. Fuzzy Syst. 20(4), 683–698 (2012)

Adaptive Fuzzy Algorithm to Control the Pump Inlet Pressure P. Sivakumar, R. S. Sandhya Devi, A. Angamuthu, B. Vinoth Kumar, S. K. AnuShyni, and M. JeenBritto

Abstract The pump inlet pressure is controlled using a PID controller. Since the process is non-linear, the required straight-line control of the pump inlet pressure could not be achieved; instead, the pump inlet pressure followed a sawtooth pattern during rapid changes in pump flow rate. In a conventional PID controller, a very high controller gain (Kp) gives a faster rise time but leads to overshoot, while a very low gain makes the transient dynamic behaviour of the system slow. Due to this limitation, the best control of the non-linear process could not be achieved with the conventional PID algorithm. Hence, it is proposed to develop an adaptive fuzzy control algorithm that accepts the error and the rate of change of the process variable as inputs and calculates the change in output based on the fuzzy control algorithm. In this project, an adaptive fuzzy control algorithm is developed and implemented to control the pump inlet pressure to the required value. Keywords PID · Fuzzy logic controller · Fuzzy system · FLC

P. Sivakumar (B) · A. Angamuthu · S. K. AnuShyni Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, India e-mail: [email protected] A. Angamuthu e-mail: [email protected] R. S. Sandhya Devi Department of Electrical and Electronics Engineering, Kumaraguru College of Technology, Coimbatore, India e-mail: [email protected] B. Vinoth Kumar Department of Information and Technology, PSG College of Technology, Coimbatore, India e-mail: [email protected] M. JeenBritto Indian Space Research Organization, Mahendragiri, Tamilnadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_50


1 Introduction

As industry develops day by day, the demand for control systems that realize faster and more accurate control by improving the transient response has increased exponentially. Therefore, to meet these requirements, the modeling error—the difference between the actual output and the desired output—must be considered while selecting and developing a control theory. For over a decade, fuzzy control theory has played a vital role in most application areas based on fuzzy logic control. Building on Zadeh's fuzzy logic, the first fuzzy control algorithm was introduced by Mamdani in 1974 [1, 2]. Since the literature on fuzzy control and its applications in industrial processes is growing rapidly, references [3–6] are cited for survey purposes. A fuzzy logic controller (FLC) mimics the decision-making ability of the human brain and natural language based on fuzzy logic. From this perspective, the FLC is important in framing a set of rules suited to the situation of the application process. Hence, using an FLC algorithm, an expert-knowledge-based linguistic control strategy is converted into an automated control strategy.

Control systems in industrial processes face many constraints, such as inertia lag, time-varying parameters, nonlinearity, and time delay. It is very difficult to develop a mathematical model for a system with these constraints or features. A system with such disturbances will not give good results with a conventional PID controller [7, 8]. Hence, a new method combining both fuzzy and PID control is introduced to deal with these constraints. The decision-making ability of the human brain is much closer to fuzzy logic, so it is applicable to the non-linear plant that is to be controlled. An interesting aspect of fuzzy logic is that it does not depend upon a precise mathematical model [9, 10]. Self-adaptive fuzzy control has many advantages over a conventional PID controller [11, 12], including minimal overshoot and good anti-interference ability [13, 14].

Here, the proposed model consists of a single tank containing both liquid and gas maintained at a certain pressure. The inlet pump is used to insert gas into the tank, and the pump acts according to the command given by the fuzzy controller, in which the rules are loaded or programmed according to the application. The tank structure is shown in Fig. 1. For implementation of the fuzzy system on a micro-PLC, a Siemens S7-224 PLC combined with an EM235 analog input/output extension unit is used, and the accuracy of the implementation is demonstrated by a comparative analysis between the PLC version of the fuzzy system and a MATLAB model with the same structure [15].


Fig. 1 Tank structure

2 Proposed Method

The proposed flow methodology is shown in Fig. 2. Initially, the set point is fixed as per the requirement. The error is calculated by subtracting the current pressure value from the set point. With the error as one input and the rate of change of the final pressure value as the other input, a fuzzy controller is developed. The volumetric gas flow rate (QG) is calculated based on the valve coefficient value. The gas volume of the tank yet to be filled with gas is calculated using a decreasing slope of about 0.001 m/s. QG is integrated to obtain the accurate volume of gas present in the tank; hence, the pressure in the tank is calculated.

Fig. 2 Block diagram for proposed method


Fig. 3 Fuzzy controller—Mamdani type

CV = QG · √(SG · T) / (819 · P1)    (1)

where
QG — gas flow in std cubic ft/h (1 ft³/h = 0.028 m³/h)
T — temperature (°F + 460)
SG — specific gravity of the medium = 1
P1 — upstream inlet pressure in psia
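For clarity, the following is a minimal Python sketch of Eq. (1) and of its inverse, which the valve subsystem later uses to recover QG from a known CV; the numeric values in the example call are illustrative only and are not taken from the paper.

```python
import math

def valve_cv(qg_scfh, sg, temp_f, p1_psia):
    """Flow coefficient CV from Eq. (1)."""
    t_rankine = temp_f + 460.0            # T = (deg F + 460)
    return qg_scfh * math.sqrt(sg * t_rankine) / (819.0 * p1_psia)

def gas_flow(cv, sg, temp_f, p1_psia):
    """Eq. (1) inverted: QG for a known CV, as used in the valve subsystem."""
    t_rankine = temp_f + 460.0
    return cv * 819.0 * p1_psia / math.sqrt(sg * t_rankine)

# 1 ft3/h = 0.028 m3/h, so a flow given in m3/h is divided by 0.028 first.
qg = 100.0 / 0.028                        # illustrative flow of 100 m3/h
print(valve_cv(qg, sg=1.0, temp_f=80.0, p1_psia=50.0))
```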

2.1 Design of Fuzzy Logic Controller

While designing the FLC, two inputs are considered: the error and the rate of change of the pressure value [16–18]. The output variable is taken as the valve position. Gaussian membership functions are used for the error and the rate of pressure change, while a triangular membership function is used for the outflow valve. Rules are framed and implemented using the rule editor. Figure 3 shows the controller actions, such as monitoring and valve opening, performed by firing the rules [9].

2.2 Fuzzy Set—Input Characteristics

Error:

Fuzzy variable | Crisp input range
Negative | (0.3–1)
Ok | (0.3–0)
Positive | (0.3–1)


Fig. 4 Membership function characterizing the input Variable “Error”

Figure 4 illustrates the membership function of the input variable "Error".

Rate of pressure valve change:

Fuzzy variable | Crisp input range
Negative | (0.03–0.2)
Ok | (0.03–0)
Positive | (0.03–2)

Figure 5 shows how the membership function characterizes the input variable "Rate of pressure valve change".

Fig. 5 Membership function depicting the input variable “Rate of pressure valve change”


Fig. 6 Characterizing the output variable “valve” using membership function

2.3 Fuzzy Set—Output Characteristics

Valve position:

Fuzzy variable | Crisp output range
Close fast | [−100 −70 −40]
Close slow | [−40 −20 −5]
No change | [−5 0 5]
Open slow | [5 20 40]
Open fast | [40 70 100]

Figure 6 shows the membership function used for characterizing the output variable "Valve".

2.4 Rule Editor

The graphical rule editor is used to implement the inference rules based on the input and output variables given in the FIS editor [11]. The FLC control rules are shown in Table 1.

Table 1 FLC control rules

Error \ Rate | Negative | Ok | Positive
Negative | Close slowly | Close slowly | Close fast
Ok | Open slowly | No change | Close slowly
Positive | Open fast | Open slowly | Open slowly


In the design of the FLC [19], we denote the error by E and the rate of pressure valve change by d/dt(r):

1. Valve closes slowly while both E and d/dt(r) are negative
2. Valve closes slowly while d/dt(r) is ok and E is negative
3. Valve closes fast while d/dt(r) is positive and E is negative
4. Valve opens slowly while d/dt(r) is negative and E is ok
5. No change in the valve position while d/dt(r) is ok and E is ok
6. Valve closes slowly while d/dt(r) is positive and E is ok
7. Valve opens fast while d/dt(r) is negative and E is positive
8. Valve opens slowly while d/dt(r) is ok and E is positive
9. Valve opens slowly while d/dt(r) is positive and E is positive
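The following Python sketch shows one way this rule base could be evaluated outside MATLAB. It is not the authors' implementation: the Gaussian centers and widths are assumptions loosely read off the crisp ranges of Sect. 2.2, the 'and' connective uses the algebraic product mentioned in Sect. 3, and defuzzification is by centroid.

```python
import numpy as np

def gauss(x, c, sigma):
    # Gaussian input membership function, Eq. (4)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def tri(x, a, b, c):
    # Triangular output membership function with feet a, c and peak b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: valve-position change, limits -100..100 (Sect. 2.3)
y = np.linspace(-100.0, 100.0, 2001)
OUT = {"close_fast": tri(y, -100, -70, -40), "close_slow": tri(y, -40, -20, -5),
       "no_change": tri(y, -5, 0, 5), "open_slow": tri(y, 5, 20, 40),
       "open_fast": tri(y, 40, 70, 100)}

def flc_output(error, rate):
    """One inference step of the Mamdani controller of Table 1."""
    # Assumed centers/widths for the Negative/Ok/Positive fuzzy sets
    e = {"neg": gauss(error, -1.0, 0.3), "ok": gauss(error, 0.0, 0.3),
         "pos": gauss(error, 1.0, 0.3)}
    r = {"neg": gauss(rate, -0.2, 0.03), "ok": gauss(rate, 0.0, 0.03),
         "pos": gauss(rate, 0.2, 0.03)}
    rules = [  # rule base of Table 1; 'and' via algebraic product
        (e["neg"] * r["neg"], "close_slow"), (e["neg"] * r["ok"], "close_slow"),
        (e["neg"] * r["pos"], "close_fast"), (e["ok"] * r["neg"], "open_slow"),
        (e["ok"] * r["ok"], "no_change"),   (e["ok"] * r["pos"], "close_slow"),
        (e["pos"] * r["neg"], "open_fast"), (e["pos"] * r["ok"], "open_slow"),
        (e["pos"] * r["pos"], "open_slow"),
    ]
    # Clip each consequent by its firing strength, aggregate, defuzzify
    agg = np.zeros_like(y)
    for strength, label in rules:
        agg = np.maximum(agg, np.minimum(strength, OUT[label]))
    return float(np.sum(y * agg) / np.sum(agg))

print(flc_output(error=0.4, rate=-0.05))  # positive command: open the valve
```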

3 Fuzzy System—Implementation

Assume a fuzzy system is constructed with two antecedent variables and one consequent parameter. The system is given a set of rules of the form:

Rn: if x1 is Ai ∧ x2 is Bj then y is Uk    (2)

where the number of fuzzy rules is represented by n, the inputs are represented by the variables x1 and x2, and the system output by the variable y [20]. The antecedent variable x1 is connected to three fuzzy sets Ai, and each fuzzy set Ai, i = 1, ..., 3, has a membership function μAi(x1): R → [0, 1], which produces a membership degree for the variable x1 with respect to the fuzzy set Ai. A similar structure is considered for the second antecedent variable x2, i.e., it is defined over a universe of discourse consisting of three fuzzy sets Bj (j = 1, ..., 3) characterized by respective membership functions μBj(x2): R → [0, 1]; likewise, the consequent variable y is connected with the fuzzy sets Uk and their membership functions [18, 21]. The general structure of the fuzzy system is shown in Fig. 7. The fuzzy system depicted in Fig. 7 with the rule base (2) can be elucidated as:

Y = Wy f(x1 Wx1, x2 Wx2)    (3)

Here, f represents the fuzzy system's input–output mapping, and Wx1, Wx2, Wy are the scaling gains of the fuzzy variables. A commonly used mf is considered for the inputs; that is, three Gaussian-shaped mfs were used, each specified by two parameters:

Gaussian(x; c, σ) = exp(−(x − c)² / (2σ²))    (4)


Fig. 7 The fuzzy system

A Gaussian mf is calculated using c and σ; c denotes the mf's center, while σ represents the mf's width [15].

Y = (x − xmin)(ymax − ymin) / (xmax − xmin) + ymin    (5)

Therefore,

X = (y − ymin)(xmax − xmin) / (ymax − ymin) + xmin    (6)
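Equations (5) and (6) are the linear scaling that maps a physical variable onto the normalized universe of discourse and back; a small sketch follows (the example ranges are illustrative only):

```python
def scale(x, x_min, x_max, y_min, y_max):
    """Eq. (5): map x from [x_min, x_max] onto [y_min, y_max]."""
    return (x - x_min) * (y_max - y_min) / (x_max - x_min) + y_min

def unscale(y_val, x_min, x_max, y_min, y_max):
    """Eq. (6): inverse mapping back to the physical range."""
    return (y_val - y_min) * (x_max - x_min) / (y_max - y_min) + x_min

# Example: normalize a pressure error to [-1, 1] before fuzzification
e_norm = scale(0.7, x_min=0.0, x_max=2.0, y_min=-1.0, y_max=1.0)
assert abs(unscale(e_norm, 0.0, 2.0, -1.0, 1.0) - 0.7) < 1e-12
```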

The algebraic product is used to implement the 'and' connective; μcn denotes the output value for rule n. The resultant structure of the fuzzy system thus ensures a linear behavior of the system [22].

4 Flow Chart for Proposed Model

The following flow charts explain the programming of the proposed method. Initially, the fuzzy logic controller is designed according to the requirements with reference to the MATLAB FIS [23]. The output from the FLC is given to the valve for further processing. To obtain the error, the set point is subtracted from the current process value; hence, the two inputs of the FLC are provided. The crisp output, i.e., the command for the valve, is then given to the valve sub-block. The fuzzy logic controller receives the inputs and produces the desired output based on them. The rules are programmed or preloaded according to the application. Each set of inputs is initialized with membership functions: Gaussian membership functions are used for the inputs because the opening of the valve follows a Gaussian-shaped curve, and triangular membership functions are used for the outputs due to their linearity, as shown in Fig. 8.


Fig. 8 Creating FLC, calculating the error, FLC command to valve position

4.1 Valve

The valve-positioning command from the FLC is given as input to the valve subsystem. The previous position is subtracted from the present position to obtain the actual positioning of the valve, as shown in Fig. 9. A transfer function is used to mimic the valve and obtain an accurate response. Using the previous CV (coefficient of the valve), the gas flow rate is calculated using the formula.
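As a rough sketch of this incremental positioning, the FLC command can be treated as a change in valve position passed through a first-order lag that stands in for the transfer function; the sampling time dt and time constant tau below are assumed values, not taken from the paper.

```python
def valve_step(prev_pos, flc_cmd, dt=0.1, tau=1.0):
    """One update of the valve subsystem: the FLC command is a position
    change, smoothed by a first-order lag mimicking the transfer function."""
    target = max(0.0, min(100.0, prev_pos + flc_cmd))  # physical limits 0-100
    return prev_pos + (target - prev_pos) * dt / tau
```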


Fig. 9 QG calculation

Fig. 10 Calculating the empty tank volume

4.2 Tank

The tank subsystem is used to determine the available empty volume of the tank that has to be filled with gas, as shown in Fig. 10.

4.3 Pressure

The calculated gas volume flow rate through the valve at the computed CV (QG) is integrated, so that the gas already present in the tank is combined with the present inflow of gas. Hence, the empty volume of the tank together with the integrated gas volume gives the current pressure of the tank, as shown in Fig. 11. The valve opening versus valve coefficient (CV) graph is then plotted, and a polynomial equation is obtained from this graph, as shown in Fig. 12. The CV for a valve opening of less than 14 is negative,


Fig. 11 Final pressure value

Fig. 12 OPEN versus CV graph

hence a valve opening of less than 14 is treated as zero. A valve opening beyond 80 yields a CV above 150, whereas the limit is 0–100, so any opening beyond 80 is set to a CV of 151.2. To apply this in Simulink, a MATLAB function block is used to build the program.
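A sketch of this lookup logic is given below. The sample points used to fit the polynomial are hypothetical placeholders, since the actual OPEN-versus-CV data of Fig. 12 are not reproduced in the text; only the clamping rules (zero below an opening of 14, 151.2 beyond 80) come from the paper.

```python
import numpy as np

# Hypothetical sample points standing in for the OPEN-vs-CV curve of Fig. 12
open_pct = np.array([14.0, 30.0, 50.0, 65.0, 80.0])   # valve opening (%)
cv_meas = np.array([0.0, 18.0, 55.0, 100.0, 151.2])   # illustrative CV values
poly = np.polynomial.Polynomial.fit(open_pct, cv_meas, deg=3)

def cv_from_opening(opening):
    """Clamp rule of Sect. 4.3: CV = 0 below 14 % and 151.2 above 80 %."""
    if opening < 14:
        return 0.0
    if opening > 80:
        return 151.2
    return float(poly(opening))
```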

5 Experimental Results and Analysis

The simulation of the proposed work is performed in MATLAB. The proposed adaptive fuzzy-based PID control scheme is examined, and the transfer function is given to demonstrate the validity of this strategy.


Fig. 13 Simulation output in PLC

Here, the set point and the valve opening of the hardware are represented in the graph. The valve opens and closes according to the error calculated from the difference between the set point and the process variable. The overshoot that appears in the conventional PID controller is reduced in the fuzzy controller, and the transient behaviour is faster than with the conventional PID. The process is carried out similarly to the MATLAB model, and the corresponding result is obtained, as shown in Fig. 13. The comparison between the conventional PID and the fuzzy system is shown in Fig. 14.

6 Conclusion

In the proposed system, the adaptive fuzzy controller performs better than the conventional PID controller. The overshoot that occurs in the PID controller is reduced by the fuzzy logic controller. A fast rise time indicates a large error; nevertheless, the overshoot is reduced, and the transient dynamic behaviour does not become slow. Hence, the performance of the non-linear pump inlet pressure control process is achieved close to the


Fig. 14 Comparison between convention PID and fuzzy

straight line as per the requirement. Adaptive strategy implementation for higher-order systems, considering possible physical constraints and rules, can be further studied and extended as the future scope of the proposed work.

Acknowledgements The authors would like to thank the Indian Space Research Organization, Mahendragiri, Tamil Nadu, India, for providing equipment for testing the proposed algorithm.

References

1. Li, T.H.S., Shieh, M.Y.: Design of a GA-based fuzzy PID controller for non-minimum phase systems. Fuzzy Sets Syst. 111(2), 183–197 (2000)
2. Mamdani, E.H.: Application of fuzzy algorithms for control of simple dynamic plant. Proc. Inst. Electr. Eng. 121(12), 1585–1588 (1974)
3. Chauhan, R.K., Rajpurohit, B.S., Hebner, R.E., Singh, S.N., Longatt, F.M.G.: Design and analysis of PID and fuzzy-PID controller for voltage control of DC micro grid. In: IEEE Innovative Smart Grid Technologies-Asia, pp. 1–6 (2015)
4. Rubaai, A., Castro-Sitiriche, M.J., Ofoli, A.R.: Design and implementation of parallel fuzzy PID controller for high-performance brushless motor drives: an integrated environment for rapid control prototyping. IEEE Trans. Ind. Appl. 44(4), 1090–1098 (2008)
5. Sugeno, M.: An introductory survey of fuzzy control. Inf. Sci. 36(1–2), 59–83 (1985)
6. Premkumar, K., Manikandan, B.V.: Adaptive neuro-fuzzy inference system based speed controller for brushless DC motor. Neuro Comput. 138, 260–270 (2014)


7. Banerjee, R.: Water level controller by fuzzy logic. Int. J. Innovative Res. Adv. Eng. 2(2), 250–256 (2015)
8. Chen, L., Wang, C., Yu, Y., Zhao, Y.: The research on boiler drum water level control system based on self-adaptive fuzzy-PID. In: Chinese Control and Decision Conference, pp. 1582–1584 (2010)
9. Xiao, Q., Zou, D., Wei, P.: Fuzzy adaptive PID control tank level. In: 2010 International Conference on Multimedia Communications, IEEE, pp. 149–152 (2010)
10. Zeng, M.X., Zhao, J.F., Ouyang, W.: PID parameter fuzzy self-tuning of the motion control system. In: Applied Mechanics and Materials, Trans Tech Publications Ltd, vol. 552, pp. 187–191 (2014)
11. Seghir, A.N., Henni, T., Azira, M.: Fuzzy and adaptive fuzzy PI controller based vector control for permanent magnet synchronous motor. In: 10th IEEE International Conference on Networking, Sensing and Control (ICNSC), pp. 491–496 (2013)
12. Jianjun, Z.: Design of fuzzy control system for tank liquid level based on WinCC and MATLAB. In: 13th International Symposium on Distributed Computing and Applications to Business, Engineering and Science, pp. 55–57 (2014)
13. Chiou, J.S., Tsai, S.H., Liu, M.T.: A PSO-based adaptive fuzzy PID-controllers. Simul. Model. Pract. Theory 26, 49–59 (2012)
14. McNeill, D., Freiberger, P.: Fuzzy logic: the revolutionary computer technology that is changing our world. Simon and Schuster (1994)
15. Duka, A.V.: PLC implementation of a fuzzy system. In: The International Conference Interdisciplinarity in Engineering INTER-ENG. Editura Universitatii "Petru Maior" din Tirgu Mures, p. 317 (2012)
16. Kim, J.H., Oh, S.J.: A fuzzy PID controller for nonlinear and uncertain systems. Soft. Comput. 4(2), 123–129 (2000)
17. Usha, A., Tech, H.P.M., Narayana, K.L.: Water tank level control system using self-adaptive fuzzy-PID control. Int. J. Eng. Res. Technol. 3(6) (2014)
18. Wang, G., Song, H., Niu, Q.: Mine water level fuzzy control system design based on PLC. In: IEEE Second International Conference on Intelligent Computation Technology and Automation, vol. 3, pp. 130–133 (2009)
19. Liu, C., Peng, J.F., Zhao, F.Y., Li, C.: Design and optimization of fuzzy-PID controller for the nuclear reactor power control. Nucl. Eng. Des. 239(11), 2311–2316 (2009)
20. Aggarwal, V., Mao, M., O'Reilly, U.M.: A self-tuning analog proportional-integral-derivative (PID) controller. In: First NASA/ESA Conference on Adaptive Hardware and Systems, IEEE, pp. 12–19 (2006)
21. Duan, X.G., Deng, H., Li, H.X.: A saturation-based tuning method for fuzzy PID controller. IEEE Trans. Industr. Electron. 60(11), 5177–5185 (2013)
22. Lee, C.C.: Fuzzy logic in control systems: fuzzy logic controller. IEEE Trans. Syst. Man Cybern. 20(2), 419–435 (1990)
23. Sivakumar, P., Vinod, B., Devi, R.S.S., Rajkumar, E.R.J.: Real-time task scheduling for distributed embedded system using MATLAB toolboxes. Indian J. Sci. Technol. 8(15), 1–7 (2015)

An Automated Citrus Disease Detection System Using Hybrid Feature Descriptor Bobbinpreet Kaur, Tripti Sharma, Bhawna Goyal, and Ayush Dogra

Abstract An automated system for citrus disease detection relies upon computer-aided tools. By minimizing manual error in disease detection and diagnosis, computer-aided diagnostic (CAD) tools have acquired a major role among the resources needed to boost the efficiency of the production yield. CAD tools can automatically speed up the system's decision-making on complex disease data. Since these plant diseases can be devastating in terms of economic loss and loss of nutrition, disease detection is increasingly needed at an early stage in order to meet human nutritional needs. The use of these computer-aided methods has made it possible to reduce production loss and detect disease early. This article proposes a machine learning-based methodology to automate the process of disease detection, thereby reducing manual effort. While designing the complete system, optimization and improvement at every substage are considered in this article, so that the complete system possesses a high level of accuracy and can thereby equip farmers to deploy disease detection in a better way. Keywords Citrus diseases · Image preprocessing · Feature hybridization · Machine learning

1 Introduction

Plant species and their extracts serve as an essential factor for the preservation of ecological equilibrium in nature as well as an economic boost to mankind. For better utilization of these natural resources, it is necessary to reduce the spread of diseases in the fields [1, 2]. Citrus fruits are rich in many nutrients that provide health benefits to humans, and they also provide financial support in the form of exports and the commercial products developed from citrus fruits. In the last few decades, citrus production has been affected by the spread of diseases,

B. Kaur (B) · T. Sharma · B. Goyal Chandigarh University, Mohali, India
A. Dogra Ronin Institute, Montclair, NJ 07043, USA
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_51


and a lot of production wastage has occurred due to these diseases. Traditional disease detection methods rely upon appointing an expert to find out the disease by inspecting the field locally and through physical introspection. This process is very time-consuming and depends totally upon the knowledge of the domain expert. So, in the era of modernization and automation, the disease detection methodology is now shifting toward technology-based solutions. Imaging-based methods have gained significant acceptance in providing solutions to farmers and in other fields, such as medical imaging and remote sensing applications [3, 4]. With a view to minimizing the spread of disease, we propose an automation system that uses the capabilities of machine learning techniques to replace the traditional methods of disease detection [5]. The first and foremost step in this process is the availability of a dataset that can provide substantial knowledge to the machine learning model and train it with a suitable amount of statistical information [6]. This will enable the trained model to make decisions for the testing data and provide information about the disease present in the fruit. Although many researchers are working toward finding the optimum solution for disease detection, we tried to address the following issues in making the process a highly accurate one:

1. The images captured locally from the fields contain high degrees of noise due to atmospheric factors and the sensor mechanism; a suitable technique needs to be developed in order to minimize the effect of noise.
2. The contrast value of the image contributes significantly to the accuracy of the segmentation stage.
3. Region of interest extraction is needed for localized feature extraction.
4. An appropriate number of features must be extracted while reducing redundant information in the features.
5. Different machine learning models must be tested so as to select the single best optimized solution for the problem.

2 Stages of Machine Learning Model for Disease Detection

The disease detection process is a multidomain process that combines the methodologies of image processing and artificial intelligence to deliver decisions about the type of disease from the acquired images. For the first two stages, namely the preprocessing stage and segmentation, both the input and the output are images. The feature extraction process takes an image as input and produces output in the form of numerical values of certain attributes. The complete system designed for citrus disease detection is shown in Fig. 1. Various stages need to be optimized to achieve an accurate overall system [7, 8]. Optimality can be achieved by using more sophisticated and precise image processing algorithms at the various stages and by fine-tuning the process toward an accurate method [9]. The designed system can act as a significant contributor in practical application scenarios. The effectiveness of the algorithm depends on efficient classifier training, which in turn depends on the


Fig. 1 Automated model for citrus disease detection

high-quality dataset. The dataset must contain images belonging to each class of disease, and a good number of images are required, as the data in later stages will be divided into training and testing sets [10–12]. The image processing techniques play a crucial role in automating the identification and classification of citrus diseases. Such software-based identification and classification strategies allow the program to work more accurately, lowering the false classification rate. The model needs the acquired images or an available dataset [13, 14]. The conventional pre-existing approach requires a great deal of work to identify the type of disease in terms of yellowing, curling, lesions, etc. In order to prevent disease from spreading, a suitable management method must be applied on the basis of the decision about the type of disease [15]. If the detection stage is not carried out accurately, a false decision at the management level will result in the spreading of the disease [16]. It is therefore important to deploy image processing algorithms to make the process efficient and automated. To be applied in practice, the image processing technique requires the installation of certain hardware and software components [17]. This installation is a one-time investment that includes the camera and the acquisition equipment. While installing these devices, factors to consider include the lighting conditions, the angle of capture, and the removal of shadows, as the efficiency of a complete image processing system depends on the type of image input available [18]. The efficiency of these automation methods has made them a viable choice for replacing manual detection methods [19, 20].


3 The Proposed Model

This paper proposes a complete model for the detection of citrus disease using artificial intelligence. With a view to designing the complete model, we explain the proposed model stage by stage. The training data extracted from the images will be used to test multiple classifiers; the best classifier is then selected to perform testing on the remaining data, treating the output as an unknown variable (Fig. 2).

1. Preprocessing stage:

The proposed hybrid model ensuring contrast improvement and noise removal combines an adaptive median filter with brightness preserving dynamic fuzzy histogram equalization (BPDFHE) [21, 22]. The input RGB image is first decomposed into its R, G, and B planes, and adaptive median filtering is applied to each plane individually. The three filtered planes are concatenated back into one RGB image, which is finally processed through BPDFHE for contrast improvement. The resultant image obtained with this methodology has minimum noise and high contrast.
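A minimal Python sketch of this hybrid preprocessing is shown below. The adaptive median filter is a simplified textbook version, and since BPDFHE is not available in common image libraries, CLAHE from scikit-image is used here purely as a contrast-enhancement stand-in; the input file name is hypothetical.

```python
import numpy as np
from skimage import io, exposure

def adaptive_median(channel, max_win=7):
    """Simplified adaptive median filter on one uint8 image plane."""
    pad = max_win // 2
    padded = np.pad(channel, pad, mode="reflect")
    out = channel.copy()
    h, w = channel.shape
    for i in range(h):
        for j in range(w):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                block = padded[i + pad - r:i + pad + r + 1,
                               j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = block.min(), np.median(block), block.max()
                if zmin < zmed < zmax:                 # level A passed
                    if not (zmin < channel[i, j] < zmax):
                        out[i, j] = int(zmed)          # replace impulse pixel
                    break                              # else keep original
    return out

rgb = io.imread("citrus_leaf.jpg")                     # hypothetical input
filtered = np.dstack([adaptive_median(rgb[..., k]) for k in range(3)])
# BPDFHE stand-in: CLAHE contrast enhancement (returns float in [0, 1])
enhanced = exposure.equalize_adapthist(filtered)
```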

2. ROI extraction:

For region of interest (ROI) extraction, we have used a clustering-based approach that groups similar pixels and forms the clusters [23]. We have used the conventional clustering mechanism through K-means clustering, and the resultant ROI is used for feature extraction to identify the statistical attributes of the lesion area.

Fig. 2 Proposed model of disease detection system


Fig. 3 Feature vector construction

The region thus extracted separates the lesion area from the background image. The features extracted from this region will form the training data of the machine learning model.
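A compact sketch of this ROI step with scikit-learn is given below; choosing the darkest cluster as the lesion is our assumption for illustration, since the paper does not state how the lesion cluster is identified.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_roi(image_rgb, k=3):
    """Cluster the pixels with K-means (k = 3, as used in Sect. 5) and keep
    the cluster whose mean colour is darkest, assumed to be the lesion."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(image_rgb.shape[:2])
    lesion = np.argmin(km.cluster_centers_.sum(axis=1))  # darkest cluster
    mask = labels == lesion
    roi = image_rgb * mask[..., None]                    # background zeroed
    return roi, mask
```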

3. Feature Extraction:

In our model, we propose a hybrid feature set comprising color, texture, and geometry-based features. This ensures that all the statistical properties are available in the training data and enables appropriate training of the machine learning model. For the color-based features, we convert the image into different color planes and then compute features such as mean, variance, and standard deviation. For texture extraction, GLCM features are used to capture the spatial relationships of the pixels. The geometric features are extracted from the lesion-segmented image so that the unique characteristics of the image can be represented [24–27] (Fig. 3). Two different methodologies can be utilized for feature extraction, using either the leaf images or the fruit images to construct the feature vector. Figure 4 elaborates the construction of the feature vector from the different types of features. The geometric feature vector is of size 1*11, i.e., 11 features for a single image; the texture feature vector is of size 1*10; and for the color features, there are five features for each color plane, as described in Table 1. The proposed feature vector acts as the training data for the classification stage and provides the basis of learning.
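The sketch below computes a reduced version of this feature set: the five colour statistics per plane and a few GLCM properties. Only the GLCM properties exposed by scikit-image's graycoprops are included; measures such as autocorrelation or cluster shade from Table 1 would need custom code, and the function names assume scikit-image >= 0.19 (earlier releases spell them greycomatrix/greycoprops).

```python
import numpy as np
from scipy.stats import skew
from skimage.color import rgb2gray, rgb2hsv, rgb2lab
from skimage.feature import graycomatrix, graycoprops

def color_stats(plane):
    """Five colour features per plane: mean, std, skewness, variance, RMS."""
    v = plane.ravel().astype(float)
    return [v.mean(), v.std(), skew(v), v.var(), np.sqrt((v ** 2).mean())]

def hybrid_features(roi_rgb):
    feats = []
    # Colour features for the RGB, HSV and L*a*b planes (Table 1)
    for space in (roi_rgb, rgb2hsv(roi_rgb), rgb2lab(roi_rgb)):
        for k in range(3):
            feats += color_stats(space[..., k])
    # GLCM texture features on the grey-level ROI
    grey = (rgb2gray(roi_rgb) * 255).astype(np.uint8)
    glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "dissimilarity", "homogeneity",
                 "energy", "correlation"):
        feats.append(graycoprops(glcm, prop)[0, 0])
    return np.array(feats)
```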

4. Classification Stage:

The machine learning-based algorithm is trained with the feature vector and gains knowledge through this training data. Once trained with the appropriate data, the testing data can be applied to the classifier to predict the output. In our proposed approach, we train different sets of classifiers and find the optimum solution for the classification stage.


Fig. 4 Hybrid feature vector

Table 1 Proposed feature vector

Geometric features | Texture features | Color features (for each of the RGB, HSV, and L*a*b planes)
Area | Autocorrelation | Mean
Aspect ratio | Cross-correlation | Standard deviation
Filled area | Dissimilarity | Skewness
Orientation | Energy | Variance
Perimeter | Entropy | Root mean square
Extent | Homogeneity |
Solidity | Sum of squares |
Major axis | Cluster shade |
Minor axis | Cluster prominence |
Centroid | Maximum probability |

K-nearest neighbor (KNN), support vector machine (SVM), Naïve Bayes (NB), decision tree (DT), and linear discriminant analysis (LDA) models are used for the classification stage, and the best solution is used for test and validation purposes [17, 27, 28].
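The selection among these models can be scripted with scikit-learn; the sketch below mirrors the fivefold cross-validation setup described in Sect. 4 and returns the mean accuracy per classifier. It is a minimal illustration, not the authors' implementation.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def compare_classifiers(X, y, folds=5):
    """Fivefold cross-validated accuracy for the five candidate models."""
    models = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "NB": GaussianNB(),
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
    }
    return {name: cross_val_score(m, X, y, cv=folds).mean()
            for name, m in models.items()}
```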

4 Experimental Setup

The implementation of the proposed model is done on a Windows-based PC with an Intel Core i3 CPU @ 2.10 GHz. The input images taken from the dataset for citrus diseases [6]


Table 2 Different scenarios for testing the classification stage

Name | Training versus test distribution | Feature vector size
Scenario 1 | 80:20 | 1*66
Scenario 2 | 60:40 | 1*66
Scenario 3 | 80:20 | 1*45
Scenario 4 | 80:20 | 1*11
Scenario 5 | 80:20 | 1*10

Table 3 Performance comparison for different preprocessing approaches

Technique | MSE | PSNR | SSIM | BRISQUE | NIQE
Contrast stretching | 313.3 | 23.2 | 0.97 | 43.2 | 7.6
CLAHE | 413.0 | 21.9 | 0.97 | 43.4 | 8.9
Morphological (top bottom hat filter) | 179.0 | 25.6 | 0.96 | 42.6 | 8.2
BPDFHE | 239.9 | 24.3 | 0.94 | 43.4 | 7.9
Proposed | 161.8 | 26.0 | 0.96 | 43.5 | 7.8

will be preprocessed first. The result discussion is done in two phases: first, the preprocessed image quality is assessed through suitable performance metrics. The parameters used for assessing the efficiency of the proposed preprocessing stage are mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), blind/referenceless image spatial quality evaluator (BRISQUE), and naturalness image quality evaluator (NIQE). For objective evaluation of the classification stage, the accuracy of the different classifiers is measured in the different test setups represented in Table 2. The testing scenarios described in Table 2 are designed on two bases: first, the whole dataset is divided into different ratios of training and testing data; second, the importance and role of the features in increasing the accuracy of the classifier is elaborated through this implementation. The feature vector is made hybrid in order to present the most appropriate information to the machine learning model and enable it to make decisions through the knowledge obtained. The final classification setup is a fivefold cross-validation scheme for 100 sample images with a balanced distribution of healthy vs. non-healthy classes. The number of predictors is varied as per the scenarios listed in Table 2, ranging from 66 to 10.
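The scenario splits of Table 2 can be expressed as follows; the assumption that the first n columns of the feature matrix correspond to the reduced feature subsets is ours, made only to keep the sketch short.

```python
from sklearn.model_selection import train_test_split

# Scenarios of Table 2: (train share, number of features kept)
scenarios = {"Scenario 1": (0.8, 66), "Scenario 2": (0.6, 66),
             "Scenario 3": (0.8, 45), "Scenario 4": (0.8, 11),
             "Scenario 5": (0.8, 10)}

def make_split(X, y, train_share, n_feats):
    """Stratified train/test split for one scenario of Table 2."""
    Xs = X[:, :n_feats]          # assumes features ordered as in Table 1
    return train_test_split(Xs, y, train_size=train_share,
                            stratify=y, random_state=0)
```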

5 Results and Discussions

The obtained results are evaluated through the objective metrics described in the experimental setup. The input image is preprocessed using the hybrid model of adaptive median filter and BPDFHE. This hybrid model is developed in order to achieve the foremost goals of preprocessing, i.e., contrast enhancement and noise minimization.


Fig. 5 a Input image, b Preprocessed image, and c ROI extracted image

The performance of the preprocessing stage is adjudged for these two goals through a different set of parameters. Figure 5 shows the sample input image, the preprocessed image, and the region of interest (ROI) image. The ROI-extracted image is obtained by use of the k-means clustering algorithm with the value of k set to 3. Table 3 presents the values of the parameters obtained for the preprocessed sample test image. The proposed model shows a considerable improvement in the value of PSNR, which is a direct metric of image quality in terms of noise removal. The comparative analysis is performed with other existing state-of-the-art preprocessing models described in the literature: simple linear contrast stretching, contrast limited adaptive histogram equalization (CLAHE), and brightness preserving dynamic fuzzy histogram equalization (BPDFHE). From the values obtained in Table 3, it can be concluded that the proposed model outperforms the other preprocessing techniques. Since there is a considerable improvement in the value of PSNR, we can directly conclude that the effect of noise is minimized by applying the proposed model. All the other image quality parameters also show appropriate improvement. Thus, the image obtained has minimum noise and high quality in terms of preservation of the information content, edge information, and finer details of the image. This preprocessing model provides suitable input to the segmentation stage for efficient ROI extraction.

The second evaluation of the proposed system is performed at the classification stage. We have tested a number of classifiers, and the accuracy values obtained are shown in Table 4 (Fig. 6). We have tested the different classifiers with different numbers and ratios of feature vector data. As evident from Table 4, the NB classifier gives the most appropriate results among all the machine learning models on this dataset. Moreover, the testing through different scenarios shows that the hybrid approach to feature vector formation results in an improvement in the accuracy of the machine learning classifier. The texture features and geometric features alone are not sufficient to make effective decisions, as there is a huge number of false predictions when using an individual feature set.


Fig. 6 Accuracy of classifiers

Table 4 Performance comparison of classifiers in terms of accuracy

Classifier | Scenario 1 (%) | Scenario 2 (%) | Scenario 3 (%) | Scenario 4 (%) | Scenario 5 (%)
KNN | 92.3 | 90.1 | 85.4 | 78.2 | 68.2
DT | 89.7 | 87.6 | 81.2 | 76.4 | 66.2
LDA | 88.4 | 85.3 | 80.7 | 72.1 | 70.1
SVM | 96.5 | 95.3 | 92.1 | 79.4 | 71.2
NB | 98.6 | 97.5 | 93.1 | 81.3 | 80.2

6 Conclusion

This article proposed a novel approach for a highly accurate automated citrus disease detection system. The approach is developed to segregate healthy and unhealthy fruits using the image database. The images of the fruits are passed through certain image processing-based methodologies to make the information more abstract and clear. These processed images are statistically analyzed, and the values of certain attributes are computed. These attributes are then passed to the machine learning model for training purposes. The imaging database is divided into two subsets, training vs. testing. The training data enables the machine learning model to gain knowledge from the patterns presented through the feature vector, thereby making it a trained model. The other set of data, i.e., the testing data, is applied to the trained model to make predictions about the response class of the image. We have developed a preprocessing model to improve the quality of the image, thereby improving the accuracy of the system. A hybrid combination of adaptive median filter and BPDFHE is used at the preprocessing stage. The testing of different classifiers is performed


with different lengths of the feature vector, and the hybrid feature vector is found to be the optimum approach for feature selection, as it addresses the requirements of the machine learning model, which is evident from the considerable improvement in the accuracy value.

References

1. Kulkarni, A.H., Patil, A.: Applying image processing technique to detect plant diseases. Int. J. Modern Eng. Res. 2(5), 3661 (2012)
2. Fawcett, H.S.: Citrus diseases and their control, 2nd edn. McGraw-Hill Book Company, New York (1936)
3. Muller, J.-P.: Digital image processing in remote sensing. Digit. Image Process. Remote Sens. (1988)
4. Studholme, C., Hill, D.L.G., Hawkes, D.J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn. 32(1), 71–86 (1999)
5. Eckert, J.W., Eaks, I.L.: Postharvest disorders and diseases of citrus fruits. Citrus Ind. 5, 179–260 (1989)
6. https://data.mendeley.com/datasets/3f83gxmv57/2. Accessed 20 June 2019
7. Kumar, M.J., Kumar, G.V.S.R., Reddy, R.V.K.: Review on image segmentation techniques. Int. J. Sci. Res. Eng. Technol. 3(6), 992–997 (2014)
8. Cufi, X., Munoz, X., Freixenet, J., Marti, J.: A review of image segmentation techniques integrating region and boundary information. Adv. Imaging Electron Phys. 120, 1–39 (2003)
9. Naik, D., Shah, P.: A review on image segmentation clustering algorithms. Int. J. Comput. Sci. Inform. Technol. 5(3), 3289–3293 (2014)
10. Padol, P.B., Yadav, A.A.: SVM classifier based grape leaf disease detection. In: 2016 Conference on Advances in Signal Processing (CASP), pp. 175–179. IEEE (2016)
11. Barbedo, J.G.A.: Digital image processing techniques for detecting, quantifying and classifying plant diseases. SpringerPlus 2(1), 660 (2013)
12. Shivhare, P., Gupta, V.: Review of image segmentation techniques including pre & post processing operations. Int. J. Eng. Adv. Technol. 4(3), 153–157 (2015)
13. Chang, D.-C., Wu, W.R.: Image contrast enhancement based on a histogram transformation of local standard deviation. IEEE Trans. Med. Imaging 17(4), 518–531 (1998)
14. Hussain, S.A., Hasan, R., Hussain, S.J.: Classification and detection of plant disease using feature extraction methods. Int. J. Appl. Eng. Res. 13(6), 4219–4226 (2018)
15. Suganya, R., Manikandan, A., Balaganesh, S., Mahesh, V.: Detection of plant leaf diseases and growth prediction using image segmentation. Int. J. Pure Appl. Math. 118(20), 329–336 (2018)
16. Ashok, J.M., Tanaji, J.P., Vikas, J.P., Aryan, C.S.: Agricultural plant disease detection and its treatment using image processing
17. Al Bashish, D., Braik, M., Bani-Ahmad, S.: A framework for detection and classification of plant leaf and stem diseases. In: 2010 International Conference on Signal and Image Processing, pp. 113–118. IEEE (2010)
18. Jyotsna, L.N.B., Siri Chandana, P., Sushma, C., Vinuthna, K., Venkata Lakshmi, B.: An efficient classification of plant disease detection using image processing (2019)
19. Rothe, P.R., Kshirsagar, R.V.: Cotton leaf disease identification using pattern recognition techniques. In: 2015 International Conference on Pervasive Computing (ICPC), pp. 1–6. IEEE (2015)
20. Joshi, P.K., Saha, A.: Detection and classification of plant diseases using soft computing techniques. In: 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), pp. 1289–1293. IEEE (2019)




Clustering High-Dimensional Datasets Using Quantum Social Spider Optimization with DWT

Jetti B. Narayana, Satyasai Jagannath Nanda, and Urvashi Prakash Shukla

Abstract In the last few years, the quantum mechanics-inspired quantum particle swarm optimization algorithm has become popular among swarm intelligence researchers. In this algorithm, instead of the Newton-inspired random walk, a type of quantum motion is used in the search process. Social spider optimization is a relatively new intelligent algorithm inspired by the social behavior of spiders, especially their communication on the web for the search of prey and their mutual interaction toward the growth of their family. In this paper, a quantum social spider optimization (QSSO) is formulated in which the search agents (spiders) are assumed to be bounded in a quantum potential well, through which solutions get attracted toward the global optimum. The QSSO has been employed to cluster five low-dimensional and four high-dimensional datasets. To deal with high dimensionality, a two-level discrete wavelet transform (DWT) has been applied for feature reduction. The use of transformed-domain features with DWT significantly reduces the computational complexity, to almost half of that of QSSO on the original features, while only marginally affecting the accuracy of the algorithm. The statistical results obtained with three validation indices, namely accuracy, silhouette index, and run time, justify the superior performance of the proposed approach over the benchmark APSO, RGA, and original SSO algorithms.

1 Introduction

Clustering has emerged as one of the most efficient and accurate methods to segregate homogeneous elements within a dataset. In the digital world, a huge volume of data is generated by users, which needs to be stored and reused for analytical details [1].

J. B. Narayana · S. J. Nanda (B) · U. P. Shukla
Department of Electronics and Communication Engineering, Malaviya National Institute of Technology Jaipur, Jaipur 302017, Rajasthan, India
e-mail: [email protected]
J. B. Narayana, e-mail: [email protected]
U. P. Shukla, e-mail: [email protected]




Therefore, clustering of high-dimensional data has received much attention [2, 3]. However, handling massive volumes of data still lacks formal categorization and proper property definition. A large number of clustering algorithms, such as K-means, fuzzy clustering, density-based clustering, partitioning methods, and so on, have been reported in the literature. However, prior knowledge of the optimal number of clusters is still a fundamental issue in cluster analysis [4]. A nature-inspired algorithm, termed social spider optimization (SSO), has been proposed based on the natural behavior of spiders living in colonies [5]. The algorithm exhibited remarkable performance in terms of exploration and exploitation during the evolution process but shows premature convergence over multimodal fitness domains. Therefore, in 2016, a modified version of SSO, known as the MSSO algorithm, was proposed based on a natural gradient search procedure and the beta probability distribution [6]. The inclusion of the beta distribution in the original SSO helps the algorithm maintain diversity and convergence among the obtained solutions. Further, a binary version of SSO (BSSO) has been proposed [7]. Furthermore, the BSSO algorithm has proved its efficacy in selecting optimal bands in hyperspectral images [8]. It is observed that nature-inspired clustering algorithms involve a large number of agents and iterations, thereby increasing the computational complexity of the algorithm. A new approach termed parallel SSO (P-SSO) is discussed in [9], where the parallel movement of male and female spiders is introduced, which reduces the algorithm's computational complexity. In a different technique, a chaotic SSO (C-SSO) has been suggested, which achieves robust clustering by using a chaotic sequence for spider initialization [10]. SSO has also been extended to handle multi-objective optimization problems (MOOPs) through the multi-objective SSO (MOSSO) algorithm, which involves the simultaneous optimization of two to three conflicting objectives [11]. Recently, quantum-inspired algorithms have gained popularity as they introduce quantum motion in the search process instead of a random walk. A quantum-based whale optimization algorithm has been introduced in [12] by incorporating the quantum-bit characteristic of the population to extract wrapper features in high-dimensional datasets. The quantum chaotic cuckoo search algorithm [13] included a chaotic map to boost the global search ability of the algorithm. A quantum grey wolf optimizer-based declustering model is reported by Vijay and Nanda in [14] for the analysis of seismic catalogs. Other nature-inspired algorithms hybridized with quantum principles are based on differential evolution [15], harmony search [16, 17], firefly swarms [18], and spider monkey optimization [19, 20]. Inspired by these recent trends of research, in this paper a quantum-inspired social spider optimization (QSSO) is reported. The proposed algorithm maintains the exploration and exploitation properties of an effective search by incorporating quantum behavior in the position update equation. Here, a local movement attractor is defined and used for the position update through a characteristic potential well and random variables. The QSSO is used for clustering low- and high-dimensional datasets. Two levels of compression (level-1 and level-2) using the discrete wavelet transform (DWT) are applied to reduce the features of the high-dimensional datasets.



The rest of the paper is organized as follows. Section 2 presents a detailed discussion of the proposed QSSO algorithm for clustering. Experimental studies on benchmark datasets are presented in Sect. 3. Section 4 demonstrates the results and discussion. Conclusions are given in Sect. 5.

2 Proposed QSSO Algorithm for Clustering

2.1 Quantum SSO Algorithm

Shukla and Nanda first applied the original SSO algorithm for clustering in [9]. In the original SSO algorithm, the entire population is bifurcated into three categories: female, dominant male, and non-dominant male. The number of female spiders in the community is large. Communication between all spiders happens through the web. The fitness of each spider is represented by its weight. Mating is considered another crucial process, as it ensures that the fittest information of one generation gets transferred to the next generation and thus helps the colonies survive for long. The following assumptions are taken in the original SSO and also remain valid for the proposed QSSO given below:

1. The range of females in the population is 60–90%.
2. A constant swarm size is used.
3. The fitness of a spider is considered as its weight.
4. Vibrations on the web are the communications within the colony.
5. The percentage factor (Pf) is the female socialization constant, set to 0.7.
6. Mating occurs if a dominant male finds a female in its mating radius.

The quantum search process carried out by the swarm (instead of a Newtonian walk) leads to the determination of effective optimal positions in the search space. This is achieved by the occurrence of random search agents at a far distance from a position with a certain probability [14]. Quantum-behaved particle swarm optimization (QPSO) was initially reported in [21], using quantum mechanics rather than the traditional search process. There, the assumption is that particles travel in a D-dimensional Hilbert space which has a quantum potential field to ensure convergence. In the mathematical formulation of the original SSO, the female position update is given by

$$
S_{f_i}^{k+1} =
\begin{cases}
S_{f_i}^{k} + \alpha \times V_{i,l} \times (S^{l} - S_{f_i}^{k}) + \beta \times V_{i,g} \times (S^{g} - S_{f_i}^{k}) + \delta \times (\mathrm{rnd} - 0.5) \times (P_f), & \text{if } \mathrm{rnd} < P_f \\
S_{f_i}^{k} - \alpha \times V_{i,l} \times (S^{l} - S_{f_i}^{k}) - \beta \times V_{i,g} \times (S^{g} - S_{f_i}^{k}) + \delta \times (\mathrm{rnd} - 0.5) \times (P_f), & \text{otherwise}
\end{cases}
\qquad (1)
$$



The dominant male position update is

$$S_{D_i}^{k+1} = S_{D_i}^{k} + \alpha \times V_{i,f} \times (S_{f}^{k} - S_{D_i}^{k}) + \delta \times (\mathrm{rnd} - 0.5) \qquad (2)$$

The non-dominant male position update is

$$S_{ND_i}^{k+1} = S_{ND_i}^{k} + \alpha \times (\vec{X}_{W} - S_{ND_i}^{k}) \qquad (3)$$

Here, $S^{l}$ and $S^{g}$ are the local best and global best positions of the spiders. The vibration received by spider $S_i$ from the nearest local best spider $S^{l}$ is $V_{i,l} = W_{l}\, e^{-\delta (S_i - S^{l})^2}$, where $W_{l}$ is the weight of the local best spider, which is a function of its fitness. In the same manner, the vibration received from the global best spider is $V_{i,g} = W_{g}\, e^{-\delta (S_i - S^{g})^2}$, and the vibration received from the nearest female spider is $V_{i,f} = W_{f}\, e^{-\delta (S_i - S^{f})^2}$. These are exponentially decaying functions and are thus helpful for the nonlinear position update of the spiders. Here, to make the spiders use the quantum potential, the median value of the female population, $\mathrm{median}(F_{pop})$, is used as a factor to formulate the local attractor $L_{f_i,d}^{k+1}$ for the $i$th female spider at the $(k+1)$th iteration in dimension $d$:

$$L_{f_i,d}^{k+1} = 0.9 \times \mathrm{median}(F_{pop}) + \alpha \times V_{i,l} \times (S^{l} - S_{f_i}^{k}) + \beta \times V_{i,g} \times (S^{g} - S_{f_i}^{k}) + \delta \times (\mathrm{rnd} - 0.5) \times (P_f) \qquad (4)$$

Similarly, the local attractor $L_{D_i,d}^{k+1}$ for the $i$th dominant male spider is

$$L_{D_i,d}^{k+1} = 0.9 \times \mathrm{median}(D_{pop}) + \alpha \times V_{i,f} \times (S_{f}^{k} - S_{D_i}^{k}) + \delta \times (\mathrm{rnd} - 0.5) \qquad (5)$$

where $F_{pop}$ and $D_{pop}$ are the female and dominant male populations, respectively, and $\alpha$, $\beta$, $\delta$, and $\mathrm{rnd}$ are random variables in $[0, 1]$. Here, the updates of the females and dominant males are quantumized to make the search of the space more random and to give the spiders a probability of occurring at far distances. The term $(\mathrm{rnd} - 0.5)$ serves to keep the spiders bounded within the web, leading to convergence. Considering the fundamental concepts of quantum mechanics, every single spider is assumed to have a spin-less movement in the web within a particular field energy. This field pulls the spider according to the equations formulated here. The length of a potential well is considered to keep the moving spiders in a bounded state. Thus, the position update equations in QSSO are as follows. For a female spider,

$$
S_{f_i}^{k+1} =
\begin{cases}
L_{f_i,d}^{k} + \mathrm{rnd} \times \left|S_{f_i}^{k} - W_{f,d}^{k}\right| \times \ln\!\left(\dfrac{1}{R_{f_i,d}^{k}}\right), & \text{if } \mathrm{rnd} < P_f \\
L_{f_i,d}^{k} - \mathrm{rnd} \times \left|S_{f_i}^{k} - W_{f,d}^{k}\right| \times \ln\!\left(\dfrac{1}{R_{f_i,d}^{k}}\right), & \text{otherwise}
\end{cases}
\qquad (6)
$$



where

$$R_{f_i,d}^{k} = 2 \times \mathrm{rnd} \times \left|S_{f_i,d}^{k} - W_{f,d}^{k}\right| \qquad (7)$$

and the average position of the female spiders is

$$W_{f,d}^{k} = \frac{1}{F_{pop}} \sum_{i=1}^{F_{pop}} S_{f_i,d}^{k} \qquad (8)$$

Similarly, for the dominant male spiders,

$$
S_{D_i}^{k+1} =
\begin{cases}
L_{D_i,d}^{k} + \mathrm{rnd} \times \left|S_{D_i}^{k} - W_{D,d}^{k}\right| \times \ln\!\left(\dfrac{1}{R_{D_i,d}^{k}}\right), & \text{if } \mathrm{rnd} < P_f \\
L_{D_i,d}^{k} - \mathrm{rnd} \times \left|S_{D_i}^{k} - W_{D,d}^{k}\right| \times \ln\!\left(\dfrac{1}{R_{D_i,d}^{k}}\right), & \text{otherwise}
\end{cases}
\qquad (9)
$$

where

$$R_{D_i,d}^{k} = 2 \times \mathrm{rnd} \times \left|S_{D_i,d}^{k} - W_{D,d}^{k}\right| \qquad (10)$$

and the average position of the dominant male spiders is

$$W_{D,d}^{k} = \frac{1}{D_{pop}} \sum_{i=1}^{D_{pop}} S_{D_i,d}^{k} \qquad (11)$$

As much of the exploration task of the algorithm is contributed by the female and dominant male spiders, only their equations are replaced by the quantum position update equations in QSSO. For the non-dominant males, which are responsible for the exploitation process, the update equation remains the same as in Eq. (3). The updates of all three categories of spider positions in the web are shown in Fig. 1.
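As an illustration of the female move, the sketch below combines the local attractor of Eq. (4) with the quantum update of Eqs. (6)–(8) in NumPy; the precomputed vibration intensities and the way the random coefficients are drawn are simplifying assumptions.

```python
# A minimal NumPy sketch of the quantum female-spider update, Eqs. (4) and (6)-(8).
import numpy as np

rng = np.random.default_rng(1)

def qsso_female_update(F, S_local, S_global, V_l, V_g, Pf=0.7):
    """F: (n_f, d) female positions; S_local, S_global: (d,) best positions;
    V_l, V_g: (n_f,) vibration intensities received by each female."""
    n_f, d = F.shape
    W_f = F.mean(axis=0)                              # average position, Eq. (8)
    alpha, beta, delta = rng.random(3)
    # Local attractor, Eq. (4)
    L = (0.9 * np.median(F, axis=0)
         + alpha * V_l[:, None] * (S_local - F)
         + beta * V_g[:, None] * (S_global - F)
         + delta * (rng.random((n_f, d)) - 0.5) * Pf)
    # Characteristic well length, Eq. (7); a small constant guards ln(1/0)
    R = 2.0 * rng.random((n_f, d)) * np.abs(F - W_f) + 1e-12
    step = rng.random((n_f, d)) * np.abs(F - W_f) * np.log(1.0 / R)
    sign = np.where(rng.random((n_f, 1)) < Pf, 1.0, -1.0)   # branch of Eq. (6)
    return L + sign * step
```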

2.2 Objective Function for Clustering

The fitness function taken here is the minimization of the intra-cluster distance between the elements of the dataset and the cluster heads:

$$f = \text{minimize} \sum_{j=1}^{N} \sum_{k=1}^{K} \big(\text{Euclidean Distance}(r_{j,d}, c_{k,d})\big)^{2} \qquad (12)$$

where datapoint $r_{j,d}$ is an element of the dataset $R_{N,d}$ and cluster head $c_{k,d}$ is an element of $C_{K,d}$, where $K$ is the total number of clusters.
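A direct NumPy transcription of Eq. (12) is given below; it follows the double summation exactly as printed (a common variant instead assigns each point only to its nearest cluster head).

```python
# Fitness of one spider, Eq. (12): sum of squared Euclidean distances
# from every datapoint to every cluster head decoded from its position.
import numpy as np

def cluster_fitness(R, C):
    """R: (N, d) dataset; C: (K, d) cluster heads."""
    diff = R[:, None, :] - C[None, :, :]      # (N, K, d) pairwise differences
    return float((diff ** 2).sum())           # squared Euclidean distances
```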



Fig. 1 Movement of spiders on the web using proposed quantum SSO: a female spiders, b dominant male spiders, c non-dominant male spiders

2.3 Feature Reduction with Discrete Wavelet Transform

The discrete wavelet transform is applied on each data point of the dataset, and the decomposition of features is carried out up to two levels. The Daubechies family of wavelets (D4) has been used for the decomposition. After the first-level decomposition, the feature size gets reduced to half, and only the low-frequency components are kept. The second level is applied on these reduced features, which in turn further reduces the feature size



to half, and in a similar manner the low-frequency components are kept for clustering. For a dataset in the time domain, $R_T[n]$ with $n \in N$ samples,

$$R_T[n] = \sum_{k=-\infty}^{\infty} R[k]\,\delta[n-k] \qquad (13)$$

$$\text{Level 1: } \mathrm{Reduced}(R)_{\mathrm{low}}[n] = \sum_{k=-\infty}^{\infty} R[k]\,g[2n-k] \qquad (14)$$

$$\text{Level 1: } \mathrm{Reduced}(R)_{\mathrm{high}}[n] = \sum_{k=-\infty}^{\infty} R[k]\,h[2n-k] \qquad (15)$$

where $\delta[\cdot]$ is the unit impulse and $g[\cdot]$ and $h[\cdot]$ are the low-pass and high-pass analysis filters of the wavelet.
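The following sketches this two-level reduction with the PyWavelets package (the paper's experiments were run in MATLAB, so pywt is an assumption); note that the classical four-tap Daubechies D4 filter corresponds to the name 'db2' in PyWavelets naming.

```python
# Two-level DWT feature reduction: keep only the low-frequency (approximation)
# coefficients at each level, roughly halving the feature count per level.
import numpy as np
import pywt

def reduce_features(X, levels=2, wavelet="db2"):
    """X: (n_samples, n_features) dataset; returns the reduced feature matrix."""
    for _ in range(levels):
        approx, _detail = pywt.dwt(X, wavelet, axis=1)  # single-level DWT per row
        X = approx                                       # discard high frequencies
    return X
```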

2.4 Algorithms Implementation

The implementation of the proposed QSSO, the original SSO, and other benchmark comparative clustering algorithms, such as adaptive particle swarm optimization (APSO) and the real-coded genetic algorithm (RGA), is carried out in MATLAB R2018a on a DELL laptop with an Intel Core i7 processor clocked at 3.5 GHz and 16 GB of RAM, in a Windows 10, 64-bit environment. For clustering on the high-dimensional datasets, first the DWT-based feature reduction is carried out as described in Sect. 2.3. Then, for original SSO-based clustering, the eight steps defined in Sect. 3.1 of the paper [9] by Shukla and Nanda are followed. The parameter settings are kept the same as defined in Table 4 of [9]. In the proposed QSSO implementation, the Step 5 position update equations are replaced by the proposed QSSO Eqs. (3)–(11) defined in Sect. 2.1.
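Putting the pieces together, the skeleton below sketches the overall clustering run under the assumptions stated here; it is not the authors' MATLAB code, and the Step 5 move is deliberately abbreviated to a simple pull toward the best spider where the full QSSO would apply Eqs. (4)–(11).

```python
# High-level skeleton of one QSSO clustering run; each spider encodes K cluster heads.
import numpy as np

def qsso_cluster(X, K, n_spiders=50, n_iter=100, seed=2):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(X.min(0), X.max(0), size=(n_spiders, K, X.shape[1]))
    best, best_f = None, np.inf
    for _ in range(n_iter):
        # Fitness of each spider via Eq. (12) on its decoded cluster heads
        fit = np.array([((X[:, None, :] - C[None]) ** 2).sum() for C in pop])
        i = int(fit.argmin())
        if fit[i] < best_f:
            best, best_f = pop[i].copy(), fit[i]
        # Simplified stand-in for Step 5 of [9]; QSSO replaces this with the
        # quantum female/dominant-male updates of Eqs. (4)-(11)
        pull = rng.random((n_spiders, 1, 1)) * (best - pop)
        pop = pop + pull + rng.normal(scale=0.05, size=pop.shape)
    labels = ((X[:, None, :] - best[None]) ** 2).sum(-1).argmin(1)
    return labels, best
```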

3 Simulation

3.1 Datasets

In this paper, the simulation work is carried out on five low-dimensional datasets (Aggregation, Iris, R15, Wine, Yeast) and four unlabeled high-dimensional datasets (Dim32, Dim256, Dim512, Dim1024). The details of the datasets are included in Table 1. The low-dimensional datasets are collected from the UCI Machine Learning Repository (link: https://archive.ics.uci.edu/ml/datasets.php), and the four unlabeled high-dimensional datasets are collected from the Joensuu clustering datasets (link: http://cs.joensuu.fi/sipu/datasets/), namely the DIM-sets (high), which are part of the research work referred to in [22].



Table 1 Datasets used in the simulation study

Example   Datasets      Samples   Dimension   Clusters
1         Aggregation   788       2           7
2         Iris          150       4           3
3         R15           600       2           15
4         Wine          178       13          3
5         Yeast         1484      8           10
6         Dim32         1000      32          16
7         Dim256        1000      256         16
8         Dim512        1000      512         16
9         Dim1024       1000      1024        16

3.2 Cluster Validation Techniques

In order to compare the performance of the proposed QSSO-based clustering with the three existing algorithms, the following performance measures are taken in this manuscript: (1) percentage of accuracy, (2) silhouette index, (3) MATLAB scatter plots for visualization of the clustered datapoints, (4) convergence plots, and (5) run time, to examine the computational efficiency of the algorithms. As these algorithms are meta-heuristic in nature, repeated runs do not produce the same results. Thus, each algorithm is allowed one hundred independent runs, and the average value along with the standard deviation of the percentage of accuracy, silhouette index, and run time achieved is noted down.
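As an illustration of this protocol, the sketch below averages the percentage of accuracy and the silhouette index over repeated runs using scikit-learn's silhouette_score; run_qsso_clustering is a hypothetical stand-in for one independent run, and the alignment of cluster labels with ground-truth classes is glossed over.

```python
# Repeated-run evaluation sketch: mean and standard deviation of two indices.
import numpy as np
from sklearn.metrics import silhouette_score

def evaluate(X, y_true, run_qsso_clustering, n_runs=100):
    acc, sil = [], []
    for seed in range(n_runs):
        labels = run_qsso_clustering(X, seed=seed)       # one independent run
        acc.append(100.0 * np.mean(labels == y_true))    # percentage of accuracy
        sil.append(silhouette_score(X, labels))          # value in [-1, 1]
    return (np.mean(acc), np.std(acc)), (np.mean(sil), np.std(sil))
```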

4 Results and Discussions

The results obtained with the simulation of the proposed QSSO and the comparative algorithms on the five small and four high-dimensional datasets are included in this section.

Convergence Curves: The convergence curves obtained with the proposed QSSO (orange line) and the original SSO (blue line) on the nine datasets for 100 iterations are shown in Fig. 2. It is observed that, out of the nine cases, in five the convergence achieved by the proposed QSSO is better than that of the original SSO; in two cases (the Dim512 and Dim1024 datasets) both algorithms converge to the same point; and in two cases (Aggregation and Dim32) the convergence of the original SSO is better.

MATLAB Scatter Plots: The scatter plots obtained by the proposed QSSO-based clustering on the nine datasets are shown in Fig. 3. Clearly separated groups of clustered points are visible in the datasets R15, Dim32, Dim256, Dim512, and Dim1024. In the Iris, Wine, and Yeast datasets, the obtained clusters are overlapping in nature. In the Aggregation dataset, there are seven clusters, out of which the proposed

Clustering High-Dimensional Datasets …

667

Fig. 2 Convergence curves of proposed QSSO and SSO based clustering algorithms on datasets: a Aggregation, b Iris, c R15, d Wine, e Yeast, f Dim32, g Dim256, h Dim512, i Dim1024

algorithm is able to identify five. The remaining two clusters are broken into parts (this is due to the presence of arbitrarily shaped clusters in this dataset).

Percentage of Accuracy: The accuracy obtained by the proposed QSSO and the comparative algorithms over 100 independent runs is presented in Table 2. The best results achieved are highlighted in bold font. Note that the DWT-based two-level decomposition results are presented only for the high-dimensional datasets. Among the low-dimensional datasets, in four cases out of five the obtained results are superior to those of the comparative algorithms. Among the high-dimensional datasets, in the first, 32-dimensional case the result of QSSO is better, and with decomposition the percentage of accuracy decreases due to information loss in the DWT-space features. In the 256-dimensional case, the results with the DWT level 1 decomposed features become effective. With a further increase in dimensionality to 512 and 1024, the DWT level 2 decomposed features become more effective. This signifies that the reduction of features using DWT in high-dimensional data allows the removal of redundant features and provides compressed, effective features for clustering.



Fig. 3 Obtained clusters with proposed QSSO clustering algorithm: a Aggregation, b Iris, c R15, d Wine, e Yeast, f Dim32, g Dim256, h Dim512, i Dim1024

Silhouette Index: The silhouette index obtained by the proposed QSSO and the comparative algorithms over 100 independent runs is presented in Table 3. The silhouette index value ranges over [−1, 1]: +1 indicates the formation of the best clusters a dataset can have, with good separation and tightly packed elements; −1 indicates quite the opposite; and 0 indicates boundary conditions. The best results obtained for the silhouette index are marked in bold font. Here, observations similar to those noted for the percentage of accuracy are obtained. In the low-dimensional datasets, except for Aggregation, the results obtained with QSSO are superior. In the high-dimensional datasets, for Dim32 the result of QSSO with the original features is better, for Dim256 the result of QSSO with the DWT level 1 decomposed features is superior, and for Dim512 and Dim1024 the DWT level 2 decomposed features are more accurate than the other comparative algorithms.

Run Time Analysis: The run time obtained by the proposed QSSO and the comparative algorithms over 100 independent runs is presented in Table 4. It is observed that the computational time of APSO is the lowest due to the involvement of only two simple equations for velocity and position update. The computational time of RGA is higher than that of APSO due to the involvement of the crossover, mutation, and selection schemes.



Table 2 Mean value and standard deviation of percentage of accuracy obtained over 100 independent runs by proposed QSSO, QSSO-DWT (level 1 and 2) along with comparative algorithms

Datasets      APSO    RGA     SSO             QSSO            QSSO-DWT level 1   QSSO-DWT level 2
Aggregation   83.19   83.61   81.22 (±1.2)    80.32 (±1.1)    –                  –
Iris          90.33   89.32   90.22 (±2.20)   91.32 (±2.61)   –                  –
R15           94.32   93.11   93.36 (±2.22)   94.63 (±2.36)   –                  –
Wine          92.36   93.62   91.32 (±1.96)   94.52 (±1.32)   –                  –
Yeast         81.32   83.65   81.62 (±1.27)   83.62 (±2.30)   –                  –
Dim32         89.32   90.32   94.86 (±1.3)    95.81 (±1.63)   95.14 (±0.32)      94.92 (±2.24)
Dim256        93.12   89.32   89.35 (±1.33)   95.60 (±2.31)   95.63 (±1.47)      95.17 (±1.34)
Dim512        93.25   89.47   95.34 (±1.64)   95.17 (±1.45)   95.87 (±1.54)      97.82 (±1.39)
Dim1024       94.25   94.05   94.62 (±1.46)   95.32 (±1.69)   95.48 (±1.98)      96.23 (±1.44)

Table 3 Mean value and standard deviation of silhouette index obtained over 100 independent runs by proposed QSSO, QSSO-DWT (level 1 and 2) along with comparative algorithms

Datasets      APSO    RGA     SSO             QSSO            QSSO-DWT level 1   QSSO-DWT level 2
Aggregation   0.265   0.223   0.324 (±0.12)   0.322 (±0.12)   –                  –
Iris          0.216   0.212   0.322 (±0.11)   0.542 (±0.12)   –                  –
R15           0.214   0.211   0.218 (±0.01)   0.228 (±0.02)   –                  –
Wine          0.231   0.233   0.322 (±0.14)   0.332 (±0.10)   –                  –
Yeast         0.220   0.226   0.221 (±0.02)   0.228 (±0.01)   –                  –
Dim32         0.145   0.304   0.470 (±0.11)   0.642 (±0.13)   0.607 (±0.12)      0.552 (±0.01)
Dim256        0.334   0.339   0.446 (±0.01)   0.486 (±0.11)   0.492 (±0.12)      0.486 (±0.14)
Dim512        0.312   0.112   0.465 (±0.2)    0.543 (±0.15)   0.562 (±0.11)      0.639 (±0.31)
Dim1024       0.554   0.246   0.452 (±0.2)    0.623 (±0.11)   0.631 (±0.33)      0.642 (±0.19)

The computation involved in the original SSO is higher than in RGA due to the bifurcation of the search agents and the involvement of nonlinear update equations for each type of search agent. The run time of the proposed QSSO is marginally higher than that of the original SSO due to the involvement of the complex quantum equations. It is also observed that with the application of DWT (level 1), the number of features in the high-dimensional datasets is reduced to half, and thus proportionately the run time of the algorithm is also reduced to almost half. Similarly, with the application of DWT (level 2), the number of features is further halved over DWT (level 1), and thus the computational time is also further reduced to half over DWT (level 1).

670

J. B. Narayana et al.

Table 4 Mean value and standard deviation of run time obtained over 100 independent runs by proposed QSSO, QSSO-DWT (level 1 and 2) along with comparative algorithms

Datasets      APSO     RGA      SSO              QSSO              QSSO-DWT level 1   QSSO-DWT level 2
Aggregation   4.32     4.99     56.73 (±0.19)    59.11 (±0.22)     –                  –
Iris          1.14     1.22     10.32 (±0.19)    14.24 (±0.23)     –                  –
R15           5.37     5.84     52.72 (±0.21)    57.58 (±0.32)     –                  –
Wine          3.32     3.86     12.43 (±0.11)    16.32 (±0.33)     –                  –
Yeast         11.21    22.82    100.8 (±1.93)    106.70 (±2.34)    –                  –
Dim32         7.38     8.93     173.1 (±0.92)    177.8 (±3.21)     88.90 (±1.34)      42.02 (±2.47)
Dim256        54.25    58.32    1026.1 (±1.93)   1030.09 (±2.21)   515.23 (±5.23)     257.52 (±5.21)
Dim512        108.50   121.56   2058.2 (±2.26)   2060.20 (±4.42)   1031.10 (±2.23)    515.23 (±3.29)
Dim1024       293.91   335.35   4026.1 (±1.94)   4031.52 (±2.31)   2012.32 (±3.21)    1011.22 (±2.65)

5 Conclusion

In this paper, a quantum version of the SSO algorithm (QSSO) is reported. The algorithm is used for clustering five low-dimensional and four high-dimensional datasets. On the high-dimensional datasets, a two-level DWT has been applied to reduce the features. The superior performance of QSSO is reported in terms of a higher percentage of accuracy and an effective silhouette index. The convergence curves also justify the better performance of QSSO over the original SSO. However, due to the involvement of complex quantum update equations, the run time of the proposed QSSO is higher than that of the original SSO. Thus, the proposed QSSO algorithm is suitable for applications where there is a demand for better accuracy with a little compromise on computational time. The simulation study also reveals that each level of DWT reduces the number of features to half, which in turn reduces the overall computational time of the algorithm to almost half.

References

1. Jain, A.K.: Data clustering: 50 years beyond k-means. Pattern Recogn. Lett. 31(8), 651–666 (2010)
2. Fahad, A., et al.: A survey of clustering algorithms for big data: taxonomy and empirical analysis. IEEE Trans. Emerg. Topics Comput. 2(3), 267–279 (2014)
3. Kumar, D., Bezdek, J.C., Palaniswami, M., Rajasegarar, S., Leckie, C., Havens, T.C.: A hybrid approach to clustering in big data. IEEE Trans. Cybern. 46(10), 2372–2385 (2015)



4. Nanda, S.J., Panda, G.: A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 16, 1–18 (2014)
5. Cuevas, E., Cienfuegos, M., Zaldívar, D., Pérez-Cisneros, M.: A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 40(16), 6374–6384 (2013)
6. Klein, C.E., Segundo, E.H.V., Mariani, V.C., Coelho, L.d.S.: Modified social-spider optimization algorithm applied to electromagnetic optimization. IEEE Trans. Magn. 52(3), 1–4 (2015)
7. Shukla, U.P., Nanda, S.J.: Dynamic clustering with binary social spider algorithm for streaming dataset. Soft Comput. 23(21), 10717–10737 (2019)
8. Shukla, U.P., Nanda, S.J.: A binary social spider optimization algorithm for unsupervised band selection in compressed hyperspectral images. Expert Syst. Appl. 97, 336–356 (2018)
9. Shukla, U.P., Nanda, S.J.: Parallel social spider clustering algorithm for high dimensional datasets. Eng. Appl. Artif. Intell. 56, 75–90 (2016)
10. Aggarwal, S., Chatterjee, P., Bhagat, R.P., Purbey, K.K., Nanda, S.J.: A social spider optimization algorithm with chaotic initialization for robust clustering. Procedia Comput. Sci. 143, 450–457 (2018)
11. Gupta, R., Nanda, S.J., Shukla, U.P.: Cloud detection in satellite images using multi-objective social spider optimization. Appl. Soft Comput. 79, 203–226 (2019)
12. Agrawal, R.K., Kaur, B., Sharma, S.: Quantum based whale optimization algorithm for wrapper feature selection. Appl. Soft Comput. 89, 106092 (2020)
13. Boushaki, S.I., Kamel, N., Bendjeghaba, O.: A new quantum chaotic cuckoo search algorithm for data clustering. Expert Syst. Appl. 96, 358–372 (2018)
14. Vijay, R.K., Nanda, S.J.: A quantum grey wolf optimizer based declustering model for analysis of earthquake catalogs in an ergodic framework. J. Comput. Sci. 36, 101019 (2019)
15. Deng, W., Liu, H., Xu, J., Zhao, H., Song, Y.: An improved quantum-inspired differential evolution algorithm for deep belief network. IEEE Trans. Instrum. Meas. (2020)
16. Senthilnath, J., et al.: A novel harmony search-based approach for clustering problems. Int. J. Swarm Intell. 2(1), 66–86 (2016)
17. Layeb, A.: A hybrid quantum inspired harmony search algorithm for 0–1 optimization problems. J. Comput. Appl. Math. 253, 14–25 (2013)
18. Ozsoydan, F.B., Baykasoğlu, A.: Quantum firefly swarms for multimodal dynamic optimization problems. Expert Syst. Appl. 115, 189–199 (2019)
19. Bansal, J.C., Sharma, H., Jadon, S.S., Clerc, M.: Spider monkey optimization algorithm for numerical optimization. Memet. Comput. 6(1), 31–47 (2014)
20. Dey, A., Dey, S., Bhattacharyya, S., Platos, J., Snasel, V.: Novel quantum inspired approaches for automatic clustering of gray level images using particle swarm optimization, spider monkey optimization and ageist spider monkey optimization algorithms. Appl. Soft Comput. 88, 106040 (2020)
21. Mikki, S.M., Kishk, A.A.: Quantum particle swarm optimization for electromagnetics. IEEE Trans. Antennas Propag. 54(10), 2764–2775 (2006)
22. Franti, P., Virmajoki, O., Hautamaki, V.: Fast agglomerative clustering using a k-nearest neighbor graph. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1875–1881 (2006)

Intuitive Control of Three Omni-Wheel-Based Mobile Platforms Using Leap Motion

Devasena Pasupuleti, Dimple Dannana, Raghuveer Maddi, Uday Manne, and Rajeevlochana G. Chittawadigi

Abstract Autonomous and remotely controlled mobile robots are used extensively in industries and are slowly becoming a norm in day-to-day life. They can be manually controlled using devices such as joysticks or by using mobile phone applications, through Bluetooth or similar technologies. In this paper, the authors propose the usage of the leap motion device, which can track the hands of human users. By using gestures, the human can operate a mobile robot. For the demonstration, the authors propose its usage on a three omni-wheel-based mobile platform. First, the mathematical model of the robot was formulated and simulated in the V-REP software for various types of motion. Then, a physical prototype was developed, which was integrated with both a Bluetooth-based mobile phone application and the leap motion device. Field trials and a survey of 30 people were carried out, in which both methods were perceived to be of similar ease of use and intuitiveness. However, the authors feel the results of leap motion control may improve with subsequent usage by the users.

Keywords Omni-wheel robot · 3-DOF mobile platform · Leap motion device

D. Pasupuleti · D. Dannana · R. Maddi · U. Manne · R. G. Chittawadigi (B)
Department of Mechanical Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
e-mail: [email protected]

1 Introduction

A robot with a moving base is called a mobile robot. Autonomous mobile robots (AMRs) are mobile robots that are able to perform tasks in an environment without continuous human guidance. Some primary abilities every AMR must possess are the ability to move, autonomous drive, and intelligence. Mobile robots are equipped with advanced intelligence and sensory systems. These systems enable them to have a wide range of applications, and they are used in various fields such as industries, surgeries, surveillance, mines, and nuclear environments. Terrestrial mobile robots, a subset of mobile robots, move on the ground and are mainly classified as wheeled and legged mobile robots [1]. Legged robots are designed to move on uneven terrain but are slow when compared to wheeled robots and require complex mechanics analysis




and construction, whereas wheeled robots can move at greater speeds on even surfaces but cannot be used on uneven terrain, and their construction and mechanics analysis are easier. In general, there are four types of wheels used for mobile robots: conventional wheels, ball wheels, Mecanum wheels, and omni-wheels. Conventional wheels are the general wheels which have rotation about the axis of the wheel and steering. Ball wheels or spherical wheels are capable of rotating about two axes, which makes for a complicated system. Mecanum wheels have rollers along their tread that have independent motion about their own axes. The roller axes are generally at 45° or 135° with respect to the axis of the wheel. Omni-wheels are a subset of Mecanum wheels where the roller axes are tangential to the circumference of the wheel tread. Hence, their axes are skew and perpendicular to the axis of the wheel. The roller or barrel rotation is not actuated by any actuator but occurs due to sliding of the wheel. These wheels can rotate about the axis of the wheel and can translate in a plane perpendicular to that axis. Mecanum wheels are expensive when compared to omni-wheels. The rollers present on a Mecanum wheel have a tendency to rotate about their axes when the wheel is given motion about its axis, which does not occur in the case of an omni-wheel. Considering these advantages of omni-wheels over Mecanum wheels, the authors have used omni-wheels in their mobile robot platform. Also, the mathematical model that relates the rotation of the individual motors and wheels to the resultant motion of the robot platform is easier for an omni-wheel-based robot than for a similar one built using Mecanum wheels. For a mobile robot to move, one can choose different sets of active and passive wheels depending on the desired motion and the required degrees-of-freedom (DOF). A two-wheeled system can be used, but it has difficulties in balancing and has lesser maneuverability. A three-wheeled system is easy to control, has better maneuverability, and allows the robot to move in all directions. A four-wheeled system is complicated in its control and construction, as there are four wheels with motors whereas only 3 DOF are required to move anywhere on a floor/plane and orient the robot; thereby, one constraint equation that relates the rotation of all four wheels is in place. Hence, a three omni-wheel-based robot is ideal for moving on a plane with omni-directional capabilities [2]. However, if the robot has to perform only translation, without any explicit rotation about its vertical axis, only two active omni-wheels are sufficient to achieve the same. A thorough analysis and a further proposal by the authors are reported in [3]. In this paper, the authors have used three omni-wheels mounted on motors at the edges of an equilateral triangle. The layout is fairly standard, and its mathematical model is explained in the next section. Mobile robots can be controlled using various input devices. One can classify the control strategy based on where and how the input device is located. The first is a case where the input device is fixed to some reference frame. The second is a case where the input device can be moved around, thus changing its position and orientation. This is helpful because orienting the device based on the orientation of the mobile robot gives the user better intuition in its control.
The third is a scenario where the user and the input device are on the mobile platform and hence share the same orientation as the robot. All these are considered for the analysis of the input devices, as

Intuitive Control of 3 Omni-Wheel Based Mobile Platform Using …

675

explained in the remainder of the paper. The authors propose that the Bluetooth-based mobile phone application falls under the second and third cases, whereas the leap motion-based control falls under the first and third cases.

2 Kinematic Model

In this paper, the authors have used a three omni-wheel configuration, where all three wheels, namely the front (F) wheel, left (L) wheel, and right (R) wheel, are active, i.e., powered by three motors attached to them, respectively. The motors are mounted on a hexagonal platform such that the axes of the motors are equally spaced with an angular division of 120°, as shown in Fig. 1. When a wheel is rotated, a force is applied on the platform. If the driven wheel is looked at from the center of the platform (radially outwards) and the sense of rotation is anticlockwise, the platform tends to move toward the left, when looked at from the center toward the considered wheel. Similarly, for clockwise rotation, the platform has a tendency to

Fig. 1 Motion of the three omni-wheel robot for various motor inputs: a forward, b backward, c left, d right, e forward-left, f forward-right, g backward-left, h backward-right, i clockwise, j anticlockwise



move right. Hence, each wheel can be assigned +1 for counter-clockwise rotation, 0 for no motion, and −1 for clockwise rotation. Consider a coordinate frame XYZ on the ground, with X and Y on the floor and Z vertically upward. The robot has three inputs and three control variables, in the form of translation along X, translation along Y, and yaw rotation about the Z axis. Thus, this is a fully determined system with 3 DOF. A frame UVW can be considered on the platform such that its origin is at the centroid of the platform. FF, FL, and FR are the forces applied by the front, left, and right wheels, respectively. When more than one force is acting on the platform, the resultant force and the moments due to the individual forces have to be considered for the resultant motion of the platform. If only one force acts on the platform, i.e., if only one wheel is powered, the force acts at one of the edges of the hexagon. This applied force can be considered as an equivalent force-moment system, in which an equal and parallel force acts through the centroid and a moment acts on the platform about the vertical axis (W). The equivalent force has a tendency to move the platform along its direction, whereas the moment has a tendency to rotate the platform about the vertical axis, thus causing a yaw rotation. For a desired path, such yaw rotation is usually not wanted, and hence powering only one wheel should be avoided. If two of the wheels are powered, the forces again act at the edges of the hexagon. Depending on the directions of the wheel rotations, these forces could be leading to each other or not. If they are leading, the equivalent forces and moments at the centroid have to be considered. The forces act at 120° with respect to each other, and their vector sum gives the direction of the resultant translation. Since the forces are converging, the moments due to the two forces negate or cancel each other, and hence there is no yaw rotation of the platform. On the contrary, if the forces are not leading to each other, the resultant force would still act at the center; however, the moments due to the forces add up and have a tendency to give the platform a yaw rotation. If all three wheels are powered and, when looked at from the center, all have the same sense of rotation, the vector sum of the forces adds up to zero, and there is only a resultant moment, which causes only yaw rotation. However, if one of the senses of rotation is opposite, there is a resultant force acting at the centroid and also a moment. By supplying different torques at the motors, and hence different applied forces, one can obtain pure translation, where the moments cancel out. As evident from the previous paragraphs, not all possible inputs to the motors result in controlled motion along the X and Y directions and rotation about the Z axis. Some of the useful motions, along with the required inputs at the wheels, are mentioned in Table 1. The input force(s) are shown using blue-colored arrows, and the resulting motion of the platform is shown in red, as illustrated in Fig. 1. Note that for the left and right motions, the magnitude of the force FF has to be double that of FL (whose magnitude is the same as that of FR). Also, the last column of Table 1 will be explained in later sections.



Table 1 Combination of motor inputs and resultant motion

Motion type      Motor F   Motor L   Motor R   Char assigned
Forward             0        −1        +1          F
Backward            0        +1        −1          G
Left               +2        −1        −1          L
Right              −2        +1        +1          R
Forward-left       +1        −1         0          Z
Forward-right      +1         0        +1          E
Backward-left      +1         0        −1          Q
Backward-right     −1        +1         0          C
Clockwise          −1        −1        −1          N
Anticlockwise      +1        +1        +1          M
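On the sending side, the protocol of Table 1 reduces to a simple lookup; the sketch below mirrors the table's rows (the on-robot decoding lives in the authors' Arduino code, which is not reproduced here).

```python
# Command characters of Table 1 mapped to (Motor F, Motor L, Motor R)
# direction multipliers; the front wheel runs at double magnitude for
# pure left/right translation.
COMMANDS = {
    'F': (0, -1, +1),   # forward
    'G': (0, +1, -1),   # backward
    'L': (+2, -1, -1),  # left
    'R': (-2, +1, +1),  # right
    'Z': (+1, -1, 0),   # forward-left
    'E': (+1, 0, +1),   # forward-right
    'Q': (+1, 0, -1),   # backward-left
    'C': (-1, +1, 0),   # backward-right
    'N': (-1, -1, -1),  # clockwise
    'M': (+1, +1, +1),  # anticlockwise
}

def wheel_speeds(char, base_speed=1.0):
    f, l, r = COMMANDS[char]
    return f * base_speed, l * base_speed, r * base_speed
```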

3 V-REP Simulation

V-REP, the virtual robot experimentation platform, is capable of modeling, modifying, programming, and simulating robots and robotic components (sensors, mechanisms, and more) within an integrated development environment. It provides several functions that can be integrated seamlessly and combined with its script feature and comprehensive application programming interface (API), when compared to other applications like Gazebo, as reported in [4]. V-REP is a versatile robot simulation software that can be used for many varieties of robots, for example, fixed-base robots, moving-base robots, etc. [5]. The three-wheeled omni-wheel robot explained in the previous section was modeled in Autodesk Inventor. The body of the robot is circular in shape, and V-REP has the ability to automatically detect the three omni-wheels and provide the dynamic behavior of a revolute joint between each of them and the body [3]. The aim of the authors is to achieve the ten different motions mentioned in Table 1. The steps for importing and simulating the robot in V-REP are illustrated in Fig. 2. The model is imported into V-REP by selecting "File-Load Model-Browse-OmniRob.STL" (the authors have used the file name OmniRob). Once the model has been imported into V-REP, it is seen that the circular body OmniRob acts as the parent, and the three wheels act as children [6]. The program is developed in LUA script, the in-built scripting language of V-REP [7]. For the α, β, γ Euler angles of the three wheels, by default in V-REP, α and γ are 0 for all three wheels in all cases, and the β value is 90°; however, for the left and right motions, this is found to produce an unwanted rotation. As the β value can be varied between −π/2 and π/2, through trial and error the authors deduced that the right wheel having a β of −π/3 and the left wheel having a β of π/3, with the front wheel keeping the default β value, produces translation in the left and right directions with no rotation. Here, slipping is neglected, and only rolling is considered.



Fig. 2 Steps for V-REP simulation

The model of the robot after importing and setting up in V-REP is shown in Fig. 3. Though the wheels appear to be disconnected, they are connected to the platform through virtual motors, which give them rotation. V-REP also has a provision to plot the values of certain parameters while the simulation and animation are underway, and these data can be exported for further processing. In the V-REP simulation, the minimum and maximum velocity ranges within which each wheel can rotate have to be specified for each of the scenarios given in Table 1. The plots in V-REP for the absolute X and Y coordinates of the centroid of the platform were obtained and exported as a .csv file. These data were then used in MATLAB to obtain the plots shown in Fig. 4. The trace of the centroid for the forward motion is shown in Fig. 4a, and that for the right motion is shown in Fig. 4b. Plots for forward-right translation and for clockwise rotation are shown in Fig. 4c, d. Note that the magnitude of translation for the clockwise rotation is very small, implying that the platform rotated with an almost zero turning radius. V-REP was also used to validate the simulation of all the other motions mentioned in Table 1.
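For readers who prefer to drive the simulation from outside the embedded script, a sketch using the legacy V-REP remote API Python bindings is shown below; the joint names are assumptions chosen to match the imported OmniRob model, not names taken from the paper.

```python
# Driving the three wheel joints of the simulated robot via V-REP's legacy
# remote API (requires the vrep.py bindings and remoteApi library shipped
# with V-REP, and a running simulation listening on port 19997).
import vrep

client = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
names = ('front_motor', 'left_motor', 'right_motor')   # hypothetical joint names
handles = {}
for name in names:
    _, handles[name] = vrep.simxGetObjectHandle(client, name,
                                                vrep.simx_opmode_blocking)

def set_motion(vf, vl, vr):
    """Send target angular velocities (rad/s) to the three wheel joints."""
    for name, v in zip(names, (vf, vl, vr)):
        vrep.simxSetJointTargetVelocity(client, handles[name], v,
                                        vrep.simx_opmode_oneshot)

set_motion(0.0, -2.0, +2.0)   # the 'forward' pattern of Table 1
```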


Fig. 3 Omni-wheel robot model in V-REP during its simulation

Fig. 4 Simulation results of V-REP for various motions of the robot: a forward motion, b rightward motion, c forward-right motion, d clockwise rotation




4 Physical Prototype

A physical prototype of the three omni-wheeled robot was built after the validation of the simulation results, and the following components were utilized for its construction. The assembled prototype is shown in Fig. 5.

1. Arduino Uno: Arduino Uno is a microcontroller board based on the ATMega328P microchip. The power supply given to this board can range between 7 and 20 V.
2. Omni-directional wheels: Three dual-rim omni-directional wheels of 100 mm diameter were used. Each wheel has eighteen rollers along its circumference, aligned at 20° to each other.
3. DC motors: Three 45 RPM DC motors with a rated torque of 4.2 kg cm were used, whose operating voltage is around 12 V.
4. Motor driver (L298N): L298N is a double-H driver module, a high-voltage dual full-bridge driver. It is capable of driving motors in the forward and backward directions and can control their speed.
5. Bluetooth module (HC-05): The HC-05 module is a Bluetooth serial port protocol module designed for wireless serial communication. This module can communicate over a range of 10 m.
6. Battery: A 12 V rechargeable lithium polymer (Li-Po) battery was used to power the two motor drivers (L298N), and a 9 V battery was used to power the Arduino board. In addition, battery connectors and jumpers were used.

Fig. 5 Physical prototype of the three omni-wheel robot platform

The circuit diagram connecting all the electronic components and the details of the connecting pins are given in Fig. 6. The authors propose to control the robot using two methods, i.e., using a Bluetooth mobile phone application and using the leap motion device. For both, the HC-05 Bluetooth module acts as input to the Arduino controller,



Fig. 6 Circuit diagram connecting all components and connection details for all pins

Fig. 7 Bluetooth module and its integration with the two considered input devices

as shown in Fig. 7. The input character expected through Bluetooth communication is mentioned in the last column of Table 1 for all the desired motions of the platform.

5 Integration with Bluetooth Mobile Phone Application

Once the connections are made as per the circuit diagram in Fig. 6 and the Arduino code is uploaded, an Android application, "Arduino Remote Lite", is installed on a mobile phone from the Google Play store. The Bluetooth module (HC-05) on the physical prototype is paired with the mobile phone's Bluetooth, and then the respective changes are made



Fig. 8 Integration of the robot with Bluetooth mobile phone application: a Arduino Remote LITE mobile application, b paired with robot

in the settings of the installed application, as shown in Fig. 8a. Each button present in the application is assigned a character which performs the respective motion as assigned in the Arduino code (also mentioned in Table 1). The layout of the signal flow from the mobile application to the robotic platform is shown in Fig. 7. When a button is pressed in the application, the mobile application acts as a transmitter; on the other end, this data is received by the HC-05 Bluetooth module, which acts as a receiver. The received data is sent to the Arduino board, which sends the required signals to the motor drivers, which initiate the wheel rotation in the respective direction. During trial runs, the working of the physical robot (Fig. 8b) was found to satisfy the motions in Table 1.
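For testing without the phone application, the same single-character protocol can be exercised from a PC over a serial link paired with the HC-05, as in the pyserial sketch below; the port name is an assumption for the host machine.

```python
# Sending Table 1 command characters to the HC-05 over a paired serial port.
import time
import serial

bt = serial.Serial('COM5', 9600, timeout=1)   # HC-05 defaults to 9600 baud
for cmd in (b'F', b'F', b'R', b'N'):          # forward, forward, right, clockwise
    bt.write(cmd)                              # one character per motion
    time.sleep(0.5)
bt.close()
```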

6 Integration with Leap Motion Device

The leap motion controller is an optical hand tracking tool that accurately captures hand movements. On the Microsoft Windows operating system, the leap motion controller is lightweight, fast, and accurate and can be used to develop various virtual reality or augmented reality-based applications. The controller can track hands in an interactive 3D zone extending up to 60 cm or more. Leap motion's software is capable of discerning 27 distinct hand elements, including bones and joints, and monitoring them even when other parts of the hand obscure them. Similar to the usage of Kinect and other sensors with depth sensing capabilities, leap motion has also been used to track the human hand and/or gestures and use the information to control robots. One such instance is the control of a virtual model of an industrial robot to perform a pick-and-place task, reported in [8]. Similarly, leap motion has been used for the control of a wheelchair [9]. Control of the physical prototype of the proposed three omni-wheel robot was also attempted using leap motion gesture control. The version used is leap motion beta 3.2.1, which has been integrated with a Windows application developed using Visual C#. The user has to keep his/her right hand over the leap motion device,


Fig. 9 Integration of the robot with leap motion device: a user controlling robot, b leap motion device connected using USB

as shown in Fig. 9a. The device shown in Fig. 9b can track the hand and can also detect whether the hand is open or closed as a fist. If the hand is open and located directly above the device, the C# application is programmed to send data to the robot to stop all the motors. If the hand is open and is located in the forward direction with respect to the device, the C# application sends the corresponding char value (Table 1) to the Arduino controller to move in the forward direction. Similarly, the relative position of the palm/hand of the user is determined when it is open, and the corresponding char value is sent to the robot for motion. Whenever the user shows a closed fist, the motors are commanded to stop, and the robot does not move any more.
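The gesture logic described here reduces to mapping the tracked palm position and open/closed state to the command characters of Table 1; the thresholds in the following sketch are illustrative, and the paper's actual implementation is in Visual C# with the Leap SDK.

```python
# Mapping a tracked palm to a Table 1 command character (illustrative thresholds).
def palm_to_command(x, z, hand_open, dead_zone=40.0):
    """x, z: palm position (mm) relative to the device, with -z pointing forward.
    Returns a command character, or None to stop all motors."""
    if not hand_open:
        return None                  # closed fist: stop
    if abs(x) < dead_zone and abs(z) < dead_zone:
        return None                  # hand directly above the device: stop
    if abs(z) >= abs(x):
        return 'F' if z < 0 else 'G'     # forward / backward
    return 'L' if x < 0 else 'R'         # left / right

print(palm_to_command(10.0, -120.0, True))   # -> 'F'
```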

7 User Survey

A user survey was conducted to comprehend which method, Bluetooth mobile phone application or leap motion gesture control, is easier to use, along with a few other factors. As part of the survey, a maze was used, as shown in Fig. 10, with a starting and an ending location; moving between them requires zig-zag motion.

Fig. 10 Setup for the user survey on operating the robot using the two proposed methods: a maze and leap motion module, b robot prototype in the maze




In total, 30 people participated in the survey, which is summarized in Fig. 11. Of all the subjects, 15 preferred controlling the robot using the Bluetooth-based mobile phone application, while the remaining 15 preferred controlling the robot using leap motion gesture control, indicating that people equally preferred both methods of control. 24 males and 6 females took the survey, with 3 out of the 6 females preferring Bluetooth over leap motion, again displaying an equal preference for both modes of control. Similarly, 50% of the males preferred leap motion over Bluetooth, again portraying an equal preference for both modes of control. The number of people who took the survey between the ages of 19 and 22 years was 20, whereas the remaining 10 people were above 30 years of age. Of the people between 19 and 22 years, 10 preferred Bluetooth while 10 preferred leap motion gesture control. Of the middle-aged people over the age of 30, 5 preferred Bluetooth control and the remaining 5 preferred leap motion gesture control. In all combinations, it was found that the users favored both methods equally. The aspects covered in the user survey were ease of handling, intuitiveness, and the fatigue on the user. Subjects were asked to rate their experience with these aspects from 1 to 5 once they had controlled the robot using the two methods. The results of the survey for the 30 subjects are shown in Fig. 11. To summarize the survey, the Bluetooth mobile phone application was easier to handle and control, with fewer difficulties, as people are generally more comfortable using mobile phones compared to leap motion gesture control. But leap motion was found to be a more innovative and new perspective of control, which people believed would be more preferred in the future; hence, 50% of the people preferred leap motion while 50% preferred Bluetooth control.

Fig. 11 Results of user survey (minimum and maximum scores, on a 0–5 scale, for ease to handle, intuitiveness, and fatigue, compared between Bluetooth and leap motion control)



8 Conclusion

The authors formulated the mathematical model of a three omni-wheel-based mobile platform. The robotic platform was simulated in V-REP in order to understand its motion and validate the formulation. A physical prototype of the robot was developed and successfully integrated with a mobile phone-based application and the leap motion device, both communicating with the robot through Bluetooth technology. A user survey conducted on the control of the robot in a maze revealed that the mobile application is more intuitive and user-friendly, as users are generally accustomed to using mobile phones. However, the leap motion device was also found to be intuitive for repeated usage. In the future scope of the work reported here, the authors would like to place both the mobile application and the leap motion device on the mobile platform, much like a wheelchair, and let the user control it using both methods. The authors feel that leap motion-based control would be more intuitive and easier as compared to the mobile phone application or a joystick on the wheelchair.

References

1. Ben-Ari, M., Mondada, F.: Robots and Their Applications. Elements of Robotics. Springer, Cham (2018)
2. Smith, R.: US Patent 4715460: Omnidirectional vehicle base (1987)
3. Manne, U., Maddi, R., Dannana, D., Pasupuleti, D., Chittawadigi, R.G.: Two degree-of-freedom omni-wheel based mobile robot platform for translatory motion. In: Proceedings of 1st International and 13th National Conference on Industrial Problems on Machines and Mechanisms (2020) [Submitted]
4. Bardaro, G., Cucci, D.A., Bascetta, L., Matteucci, M.: A simulation based architecture for the development of an autonomous all-terrain vehicle. In: Proceedings of International Conference on Simulation, Modeling, and Programming for Autonomous Robots (2014)
5. Freese, M., Singh, S., Ozaki, F., Matsuhira, N.: Virtual robot experimentation platform V-REP: a versatile 3D robot simulator. In: Proceedings of International Conference on Simulation, Modeling, and Programming for Autonomous Robots (2010)
6. Ercan, H., Boyraz, P.: Design of a modular mobile multi robot system: ULGEN (universal-generative robot). In: Proceedings of Asia-Pacific Conference on Intelligent Robot Systems (2016)
7. Asama, H., Sato, M., Bogoni, L., Kaetsu, H., Mitsumoto, A., Endo, I.: Development of an omni-directional mobile robot with 3 DOF decoupling drive mechanism. In: Proceedings of IEEE International Conference on Robotics and Automation (1995)
8. Chittawadigi, R.G., Matsumaru, T., Saha, S.K.: Intuitive control of virtual robots using transformed objects as multiple viewports. In: Proceedings of IEEE International Conference on Robotics and Biomimetics (2019)
9. Škraba, A., Koložvari, A., Kofjač, D., Stojanović, R.: Wheelchair maneuvering using Leap Motion controller and cloud based speech control: prototype realization. In: Proceedings of 4th IEEE Mediterranean Conference on Embedded Computing (2015)

Effective Teaching of Homogenous Transformations and Robot Simulation Using Web Technologies

Yashaswi S. Kuruganti, Apparaju S. D. Ganesh, D. Ivan Daniels, and Rajeevlochana G. Chittawadigi

Abstract Robotics has gained importance over time and has become an important interdisciplinary course in engineering education. The position and orientation of a robot, or any part of a robot, can be described in many ways. One of the most commonly used methods is the homogenous transformation matrix (HTM). One of the main topics of robotics is kinematics, which relates to the motion of the joints and the end-effector of the robot. Understanding these concepts requires three-dimensional visualization of vectors, transformations, and motion, which are difficult to teach using conventional teaching methods, such as using a blackboard. There exist many teaching software packages for effective three-dimensional visualization. One such attempt is RoboAnalyzer, for which the last author is one of the main developers. RoboAnalyzer is a Windows-based application, and new developments to make similar modules for Internet browsers using web technologies are reported in this paper.

Keywords Homogenous transformation · Visualization · Robotics learning software · Internet-based learning

1 Introduction

In recent times, with the increasing demand for accuracy and reliable automation, robotics has gained a lot of importance in several fields, and hence the curricula of engineering colleges have been revised to include more and more robotics-related courses as core and elective subjects. Robotics courses can be hard to understand for a novice, as many concepts of robotics, such as transformations, kinematics, dynamics, and motion planning, are complicated topics. The mathematical formulations of these concepts are difficult to teach and learn using conventional blackboards and the prescribed standard textbooks, e.g., [1].

Y. S. Kuruganti · A. S. D. Ganesh · D. Ivan Daniels · R. G. Chittawadigi (B)
Department of Mechanical Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
e-mail: [email protected]


Fig. 1 Graphical user interface (GUI) of RoboAnalyzer software: (a) animation of the 3D model; (b) plot of the position of the end-effector

To overcome these challenges, several teaching software packages have been developed by various universities and organizations. Robotics concepts such as Denavit-Hartenberg (DH) parameters, forward and inverse kinematics, forward and inverse dynamics, etc., have been implemented in the RoboAnalyzer software, which has been in active development since 2009. A survey of robotics teaching software, and how RoboAnalyzer fares against them, is reported in [2]. Figure 1a shows the graphical user interface (GUI) of RoboAnalyzer with a skeleton model of a three degree-of-freedom (DOF) robot and the trace of the end-effector point; the corresponding plot of the end-effector point is shown in Fig. 1b. RoboAnalyzer has been used by many teachers in India and overseas for effective teaching of robotics concepts. All the features are explained in detail in [2]. Recently, a new module on the homogenous transformation matrix (HTM) was developed and reported in [3]; it is one of the first attempts at developing software that can be used to teach the concepts of transformations with ease. The interface of the HTM module is shown in Fig. 2a. RoboAnalyzer also has a virtual robot module (VRM) [4] with CAD models of more than 20 industrial robots. It was developed so that users can learn the concepts of joint-jogging and Cartesian-jogging, which are generally used to teach positions/points to an actual robot during its programming. This module gives users without access to physical robots effective visualization of motion and can be used to perform some rudimentary motion planning, such as drawing cubes. The VRM module is shown in Fig. 2b. Some of the other robot simulation packages that use CAD models for realistic visualization are RoKiSim (now part of RoboDK) [5], V-REP [6], ROBOMOSP [7], and Workspace [8]. RoKiSim has the capability for joint- and Cartesian-jogging of robots and has features similar to VRM. RoboDK, the successor of RoKiSim, is a full-fledged offline robot programming package; it is more advanced, is suitable for both teaching and commercial purposes, and has a larger library of robots, tools, etc.

Fig. 2 Existing modules of RoboAnalyzer as desktop applications: (a) HTM module; (b) VRM module

V-REP is a generic robot simulator which can simulate fixed-base open- and closed-loop robot mechanisms, mobile robots, etc. Any customization of a (default) robot model can be done using Lua scripting. ROBOMOSP allows motion planning and offline programming of various serial robots. Workspace is a commercial package that allows offline programming of robots in a workcell. A comparison of RoboAnalyzer with some of these packages is reported in [2]. The RoboAnalyzer software and its modules, such as the HTM module and the VRM module, are available only as Windows applications and can be used only on the Windows operating system. Based on requests received from many users, web versions of the HTM module and the VRM module have been developed and are reported in this paper, so that they can be accessed in a web browser on any operating system, whether on a computer, mobile phone, or handheld device. The remainder of the paper is arranged as follows. Section 2 gives an overview of the web technologies available at present to render 3D graphics in a browser, along with other related topics used in the implementation. The implementation of the WebHTM module is explained in Sect. 3 and that of the WebVRM module in Sect. 4, followed by conclusions.

2 Web Technologies

The modules reported in this paper are developed using a combination of hypertext markup language (HTML), Bootstrap, JavaScript, and WebGL/Three.js. Web graphics library (WebGL) is a JavaScript library which renders 2D and 3D graphics in any compatible browser, without using any plugins. It can be mixed with other HTML elements to build an interactive web application or web page. It is executed on a computer's graphics processing unit (GPU), making it capable of running demanding graphics applications, and is based on shader code written in the OpenGL ES shading language (GLSL ES). WebGL code is composed within the <canvas> tag of an HTML page. This specification allows the Internet browser to access the GPU of the client computer on which it runs.


Fig. 3 JavaScript code to draw a square using WebGL: (a) a simple WebGL program; (b) output in the browser

Many JavaScript graphics libraries are based on WebGL and offer extra functionality over it. Three.js is one such graphics library and has been used in the development of the modules reported in this paper. It is one of the most popular cross-browser JavaScript libraries and application programming interfaces (APIs) used to create and display animated 2D and 3D computer graphics in a web browser.

2.1 WebGL

WebGL is a very low-level system that only draws points, lines, and triangles; doing anything more than that with plain WebGL usually requires many more lines of code. Three.js extends it by handling scenes, lighting, shadows, materials, textures, 3D geometry, etc., which would require considerable overhead if implemented using only WebGL. Figure 3 demonstrates sample code to draw a square using WebGL and render it in an HTML browser.

2.2 Transformation Using Three.Js

A 3D visualization environment like the ones proposed in this paper requires transforming objects in a 3D scene or world. Three.js allows easy handling of matrices and their application to transform objects. For example, the snippet of code in Fig. 4 models a cube with known geometry and material. Once the cube is rendered on the screen, it can be transformed by calling different functions available in Three.js; the code shown rotates it about the X axis by a certain angle. Similarly, lower-level transformations can be accessed in the form of functions.
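The JavaScript listings of Figs. 4 and 5 appear only as images in the original. Purely as an illustration of the matrix operation involved (not code from WebHTM), the following Python/numpy sketch builds the 4 × 4 HTM for a rotation about the X axis and applies it to a point, which is what a Three.js call such as rotateX performs internally:

import numpy as np

def rot_x(angle):
    """Return the 4x4 homogenous transformation for a rotation about X."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# One vertex of the cube in homogenous coordinates [x, y, z, 1]
p = np.array([1.0, 1.0, 1.0, 1.0])

# Rotate by 30 degrees about X; storing T as the object's matrix (with
# auto-update disabled) fixes the cube's pose, as described for Fig. 5
T = rot_x(np.deg2rad(30))
print(T @ p)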


Fig. 4 Transforming an object in Three.js

Fig. 5 Transforming cube when matrix auto update is set to false

In Three.js, every object has a 4 × 4 matrix (a homogenous transformation) which defines its position and orientation in 3D space. This matrix is auto-updated unless the behaviour is disabled. If an object is to be seen rotating, the code in Fig. 5 can be executed, where the matrixAutoUpdate property is set to false. Setting an object's matrixAutoUpdate property to false gives better control over the matrix calculations. This approach was used to transform the 3D coordinate frames used in the proposed modules. A coordinate frame consists of three line and three cone objects; these can be grouped and treated as a single object, since manipulating or transforming a group of objects is much easier than transforming each object individually.

2.3 Importing CAD Models of Objects as STL Files and Their Transformation

The WebVRM module explained later in this paper performs forward kinematics of industrial robots. The parts or links of a robot are generally modeled in CAD software and exported as STL files. Three.js has an inbuilt loader which can load these files into the environment, and material properties can be added once they are imported. Figure 6 represents the code for importing an STL file.

Fig. 6 Importing STL files


Fig. 7 Transforming the link 1 object which holds the geometry of the imported STL file

Each link is transformed based on a few parameters (the DH parameters) to simulate the robotic action. For a robust web application, classes are created which represent real-world objects. Hence, for a project like WebVRM, classes such as DHClass are used, which hold information and functionality related to the links and joints of a serial robot. Using the Denavit-Hartenberg (DH) parameters, the placement of each imported link can therefore be obtained. Figure 7 represents the code for manipulating a link by changing its transformation, thus mimicking the effect of animation when executed in a timed loop. Many more features of Three.js and WebGL have been used in the development of the WebHTM and WebVRM modules, which are explained next.
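The actual DHClass is JavaScript and is not reproduced in the text. A minimal Python sketch of the same idea, i.e., placing each imported link at the cumulative DH transformation up to its joint, might look as follows (function and variable names are illustrative assumptions):

import numpy as np

def dh_matrix(theta, b, a, alpha):
    """HTM between successive DH frames from the four DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      b],
                     [0.0,     0.0,      0.0,    1.0]])

def link_poses(dh_rows):
    """Chain the link transformations; each imported STL link is placed
    at the cumulative transformation up to its joint."""
    T, poses = np.eye(4), []
    for theta, b, a, alpha in dh_rows:
        T = T @ dh_matrix(theta, b, a, alpha)
        poses.append(T)
    return poses  # poses[-1] is the end-effector HTM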

3 Development of WebHTM

WebHTM stands for the web version of the homogenous transformation matrix (HTM) module that was recently reported as an enhancement to the RoboAnalyzer software [3]. It can be used to teach and learn the concepts related to representing the position and orientation of a coordinate frame with respect to another frame, which is one of the fundamental concepts every robotics course covers. WebHTM has been developed using WebGL and Three.js and is linked from the http://www.roboanalyzer.com website. It has a 3D environment on the left side with two coordinate frames, as shown in Fig. 8. The bigger frame is considered fixed or grounded, and the smaller frame can be moved around by performing elementary transformations in the form of translations and rotations. The type of transformation, i.e., local or global, can be chosen. The current HTM of the moving frame, i.e., its position and orientation details, is constantly updated and shown to the user on applying any transformation to the moving frame. Simultaneously, the multiplication sequence of the moving frame is also updated on performing a transformation operation and is shown to the user in real time. A more detailed explanation of the concepts behind the HTM, such as direction cosines and unifying the position and orientation components of a three-dimensional object, can be found in [1]. The different capabilities of the HTM module, the different types of transformations, and standard sequences are reported in [3].


Fig. 8 WebHTM module accessed in a web browser on a computer (annotated regions: type of transformation; option to toggle between translation and rotation; components of vectors in corresponding colors; fixed frame and moving frame; sequence of transformations)

As in any CAD software, the X axis is represented by red, the Y axis by green, and the Z axis by blue, as shown in Fig. 8. As a new feature in this web version, an attempt is made to make HTMs easier to comprehend by breaking them down column by column and understanding what each column stands for. The 4 × 4 HTM can be decomposed into an orientation or rotation matrix (Q), which is its first 3 × 3 sub-matrix; the first three elements of the last column represent the position vector from the fixed frame to the moving frame, expressed in the fixed frame. The color of each column of the rotation matrix is set to the corresponding color in the 3D graphics environment. For example, the first column of the Q matrix has the components of the unit vector representing the small red vector, i.e., the X axis of the moving frame, and similarly for the other two axes of the moving frame; the position vector is colored black both in the graphics and in the matrix. This new implementation, compared to [3], is believed to help in better understanding of the transformations, and it also makes it easier for teachers to explain in class. The individual transformations can also be visualized, so that users can understand how these have to be multiplied earlier or later in the sequence to represent global and local transformations, respectively (see the sketch below). The WebHTM module is also accessible on mobile phones using a regular browser and renders the 3D frames, as shown in Fig. 9. It has also been tested on other handheld devices such as iPads and tablets.
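As a language-neutral sketch of the decomposition and of the multiplication order just described (illustrative Python/numpy, not WebHTM code; the names are chosen for this example):

import numpy as np

def rot_z(a):
    """4x4 HTM for a rotation about Z."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def decompose(T):
    """Split an HTM into the rotation matrix Q and position vector p."""
    return T[:3, :3], T[:3, 3]

T = rot_z(np.deg2rad(45))   # current pose of the moving frame
E = rot_z(np.deg2rad(10))   # an elementary transformation

Q, p = decompose(T)         # columns of Q are the moving frame's X, Y, Z axes

T_global = E @ T            # pre-multiplication: transform about the fixed frame
T_local  = T @ E            # post-multiplication: transform about the moving frame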


Fig. 9 WebHTM module accessed in a web browser on a mobile phone

4 Development of WebVRM

As mentioned in the previous section, the virtual robot module (VRM) of RoboAnalyzer was developed using Visual C# and OpenTK, limiting it to the Windows operating system with no mobile phone compatibility. The inspiration for the development of the web-based VRM is to fill this gap with a lightweight module that is accessible and easy to use across all devices, including mobile phones. The methodology to modify the CAD files so as to define the link geometry about the DH frame, and the pseudo-code for animation of the links, are described in detail in [9].

4.1 Denavit-Hartenberg (DH) Parameters

DH parameters first appeared in [10] to represent the relative position and orientation of two links in a mechanism. The links of a serial robot manipulator are usually connected by single-degree-of-freedom (DOF) joints such as prismatic or revolute joints. It is necessary to find a relation between the end-effector (EE) and the base link, and also between the coordinate frames attached to each link. This can be achieved by knowing the transformations between the coordinate frames attached to each link, thus forming the overall description in a recursive manner. The convention followed here to name and define the DH parameters is adopted from [1].
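For reference (written here in the notation of [1], with joint angle θ_i, joint offset b_i, link length a_i, and twist angle α_i; not reproduced from the paper), the link transformation between frames i−1 and i takes the standard form

$$ [T]_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & b_i \\ 0 & 0 & 0 & 1 \end{bmatrix} $$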


4.2 WebVRM

Similar to the WebHTM interface, WebVRM has been developed using the JavaScript language and the Three.js library. It is lightweight, mobile-ready, and can be used across multiple platforms. Figure 10 shows the user interface of WebVRM, which has CAD models of more than 20 industrial robots. The CAD files are the same as those available with the desktop version of the software, i.e., any new CAD files added to the desktop version can easily be ported to the web version. The details of the working of the VRM module, its transformations, etc., can be found in [4]. Once a robot is selected from a drop-down menu, its 3D model is loaded in the browser using the steps briefly explained in Sect. 2. A set of sliders is provided in the interface to change the joint angles of the robot, based on which the 3D model updates itself. The homogenous transformation matrix (HTM) of the end-effector is also shown to the user. The WebVRM website, when accessed from a browser on a mobile phone, looks as shown in Fig. 11. For the joint-jogging performed by moving the sliders, the forward kinematics of the serial robot comes into action. However, if the robot's end-effector has to reach a particular position and orientation, inverse kinematics has to be implemented, which is discussed in the next section.

5 Inverse Kinematics

The inverse kinematics problem of a robot manipulator is generally complex, as it involves nonlinear trigonometric or algebraic equations. These problems are quite challenging as they generally admit more than one solution.

Fig. 10 User interface of WebVRM on a desktop with MTAB Aristo robot model


Fig. 11 User interface of WebVRM on a mobile phone with MTAB Aristo robot model

Also, no generic approach covering all inverse kinematics problems exists; multiple approaches are used. In this section, the inverse kinematics of a six-axis robot with an Euler wrist is solved. This architecture (Fig. 12) is present in most industrial robots and admits eight possible analytical solutions. The end-effector's orientation and position with respect to the base frame can symbolically be represented as

$$[T_{EE}]_B = \begin{bmatrix} R_{EE} & \mathbf{p}_{EE} \\ \mathbf{0}^T & 1 \end{bmatrix} \qquad (1)$$

Fig. 12 Architecture of a six-axis robot with an Euler (spherical) wrist; the wrist center (W) is the point of intersection of the axes of joints 4, 5, and 6


Here R_EE represents the direction cosine matrix of size 3 × 3 describing the orientation of the end-effector, and p_EE represents its position. The inverse kinematics problem can be decoupled into a position component and a rotation component, as the first three and last three joints are responsible for positioning and orienting the end-effector, respectively. To obtain the closed-form solution of a six-axis robot with an Euler wrist, the following steps are performed. First, find the position of the wrist center (W). Since it is a spherical wrist, the last three joints can be assumed to rotate about the wrist center. The position of the wrist can be found through the transformation

$$[T_W]_B = [T_{EE}]_B \, ([T_{EE}]_W)^{-1} \qquad (2)$$

Here [T_W]_B is the transformation of the wrist frame with respect to the base frame. From Eq. (2), the vector w = [w_x  w_y  w_z]^T from the base frame to the origin of the wrist frame, represented in the base frame, can be obtained. It can be observed that the joint angle θ1 does not affect the Z-coordinate of the wrist center, i.e., w_z. Therefore, the angle of the first joint can be obtained from the other two coordinates of the wrist center as

$$\theta_1 = \operatorname{atan2}(w_y,\, w_x) \qquad (3)$$

or

$$\theta_1 = \operatorname{atan2}(w_y,\, w_x) + \pi \qquad (4)$$

Also, the wrist center always lies in the plane of rotation of joint 1, and the joint axes of joints 2 and 3 are always parallel. Therefore, joints 2 and 3 and the wrist center form a 2R (R: revolute) planar serial manipulator. Due to space restrictions, the complete formulation of the inverse kinematics of the 2R manipulator is not detailed in this paper; it can be referred to in [1]. Once this is done, joint angles 1, 2, and 3 are known; the wrist can be reached through four sets of solutions of these joint angles. For a particular set, the orientation of the end-effector can be determined by the transformation

$$[R_{EE}]_W = ([R_W]_B)^{-1} \, [R_{EE}]_B \qquad (5)$$

where R_W = R_1 R_2 R_3. Here [R_EE]_W is the orientation of the end-effector with respect to the wrist frame, and R_i is the rotation component of the ith joint's DH matrix. As assumed earlier, joint angles 4, 5, and 6 are rotations about the wrist center, and this rotation is similar to an Euler ZYZ sequence of rotations. Therefore, comparing [R_EE]_W with R_ZYZ yields the inverse kinematics solutions of the last three joints.
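As a numerical illustration of the decoupling step in Eqs. (2)–(4) (a Python sketch with assumed pose and offset values, not the WebVRM implementation):

import numpy as np

# Assumed end-effector pose w.r.t. the base, e.g., obtained by joint-jogging
T_ee = np.array([[1.0, 0.0, 0.0, 0.40],
                 [0.0, 1.0, 0.0, 0.10],
                 [0.0, 0.0, 1.0, 0.55],
                 [0.0, 0.0, 0.0, 1.00]])

d6 = 0.08                  # assumed wrist-center-to-end-effector offset
T_ee_w = np.eye(4)
T_ee_w[2, 3] = d6          # [T_EE]_W: end-effector frame seen from the wrist

T_w = T_ee @ np.linalg.inv(T_ee_w)   # Eq. (2): wrist frame w.r.t. the base
wx, wy, wz = T_w[:3, 3]

theta1_a = np.arctan2(wy, wx)        # Eq. (3)
theta1_b = theta1_a + np.pi          # Eq. (4): the second branch of joint 1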


The above formulations were used to develop the inverse kinematics module in WebVRM, reported next. The solutions obtained were verified to match those obtained using the inverse kinematics module in the desktop version of the RoboAnalyzer software. Note that RoboAnalyzer has been around for more than a decade and has served as a validation tool for computer programs developed by researchers worldwide.

5.1 Inverse Kinematics in WebVRM

The joint-jogging available in WebVRM can be used to take the end-effector to any desired position and orientation. Invoking inverse kinematics then yields the eight solutions that reach the same configuration. The user can select any solution, and the model is updated accordingly. This is illustrated in Fig. 13.

Fig. 13 Inverse kinematics of MTAB Aristo robot in WebVRM and the possible eight solutions


6 Conclusion

Robotics education has gained importance in the last two to three decades. Since its concepts are difficult to teach without visual aids, the authors have extended the homogenous transformation matrix (HTM) module and the virtual robot module (VRM) of RoboAnalyzer to the web browser using JavaScript and Three.js technologies. The WebHTM and WebVRM modules reported in this paper can be accessed from any web browser on a computer, mobile phone, or handheld device. This increases the utility of the software, which can be used for effective teaching and learning of the concepts. Links to these modules are provided at http://www.roboanalyzer.com, and further updates to the web versions will be posted on that website.

Acknowledgements The authors would like to thank Prof. Subir Kumar Saha, IIT Delhi, for giving constant feedback and support for the development of the WebHTM and WebVRM modules reported in this paper.

References
1. Saha, S.K.: Introduction to Robotics, 2nd edn. Tata McGraw-Hill, New Delhi, India (2014)
2. Othayoth, R.S., Chittawadigi, R.G., Joshi, R.P., Saha, S.K.: Robot kinematics made easy using RoboAnalyzer software. Comput. Appl. Eng. Edu. 25(5), 669–680 (2017)
3. Maram, S.V., Kuruganti, Y.S., Chittawadigi, R.G., Saha, S.K.: Effective teaching and learning of homogenous transformation matrix using RoboAnalyzer software. In: Proceedings of Advances in Robotics: 4th International Conference of The Robotics Society (2019)
4. Sadanand, R.O.M., Chittawadigi, R.G., Saha, S.K.: Virtual robot simulation in RoboAnalyzer. In: Proceedings of 1st International and 16th National Conference on Machines and Mechanisms (2013)
5. RoKiSim webpage. https://www.parallemic.org/RoKiSim.html. Last accessed 2020/07/01
6. V-REP website. http://www.coppeliarobotics.com. Last accessed 2020/07/01
7. Jaramillo-Botero, A., Matta-Gomez, A., Correa-Caicedo, J.F., Perea-Castro, W.: ROBOMOSP. IEEE Robot. Autom. Mag. 13(4), 62–73 (2006)
8. WorkSpace website. http://www.workspacelt.com. Last accessed 2020/07/01
9. Rajeevlochana, C.G.: Unified framework for geometric modeling, animation and collision detection of serial robots. MS (Research) Thesis, IIT Delhi, New Delhi, India (2013)
10. Hartenberg, R.S., Denavit, J.: A kinematic notation for lower pair mechanisms based on matrices. J. Appl. Mech. 77(2), 215–221 (1955)

Abnormal Event Detection in Public Places by Deep Learning Methods
Mattaparti Satya Bhargavi, J. V. Bibal Benifa, and Rishav Jaiswal

Abstract Surveillance in public places has become an important aspect of modern life and security, owing to an increase in the number of crimes, mischievous activities, and abnormal events. The monitoring process should be automated because of the excessive time consumed by manual monitoring. Many deep learning methods exist for classifying abnormal events, and this article provides a comparative analysis of three state-of-the-art deep learning methods used for abnormal event detection. The three methods, namely convolutional long short-term memory (CLSTM) autoencoders, convolutional autoencoders (CA), and one-class support vector machines (SVM), are tested on the benchmark UCSD Ped1 dataset, which contains 34 training videos and 36 testing videos. Of the three methods, one-class SVM offers the highest area under the curve (AUC), about 0.692, with an accuracy of 65.82%. By employing these deep learning strategies, the occurrence of abnormal events at public places like malls, roads, and parks can be detected quickly, and immediate remedial measures can be implemented to enhance public security.

Keywords Convolutional autoencoders · Convolutional LSTM autoencoders · One-class SVM

1 Introduction

Surveillance in all facets of day-to-day life has become the principal priority in public places for the prevention of crime and other abnormal events [1]. Presently, surveillance cameras are being deployed on a large scale at airports, railway stations, malls, and other public and private premises. This advanced surveillance process results in a rapid increase in multimedia data volume, which is extremely time consuming to classify [2].

M. S. Bhargavi · J. V. Bibal Benifa (B) · R. Jaiswal Indian Institute of Information Technology, Kottayam, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_55


Hence, a large workforce is needed to process this video surveillance data to ensure safety and security in public environments. However, abnormal events have a low probability of occurrence, which makes manual detection a very tedious job. As a result, the automatic detection of rare or unusual incidents and activities in surveillance video becomes an essential requirement. Automated methods should be competent enough to detect abnormal activities from the available video data instantaneously [3]. Earlier, machine learning (ML) techniques were employed for this purpose, but they require a huge volume of training data to achieve good accuracy. Presently, state-of-the-art deep learning (DL) methods are available that perform the classification process in a single shot. Hence, three DL methods are employed in the present work, and a comparative analysis of the three models, namely CLSTM autoencoders [4], convolutional autoencoders (CA) [5], and one-class SVM [1], is performed. Section 2 summarizes the latest research in the domain of abnormal event detection, Sect. 3 describes the three state-of-the-art DL methods employed in this work, and Sect. 4 presents the results and discussion based on the analysis done.

2 Related Work

A summary of recent research on abnormal event detection is presented in this section. Ionescu et al. (2019) proposed a two-stage algorithm composed of one-class SVM and k-means clustering that helps to eliminate outliers [6]; the model gives an AUC of 91.1 on the Avenue dataset and 99.3 on the UMN dataset. Luo et al. (2017) proposed a method based on temporally coherent sparse coding (TSC), in which adjacent frames are encoded with the same type of reconstruction coefficients [7]; the TSC is then mapped onto a special type of stacked recurrent neural network (sRNN). The AUC is about 92.21% for the UCSD Ped2 dataset, 81.71% for the Avenue dataset, and 68.00% for their own dataset. Sun et al. (2019) developed a deep learning-based one-class method, named deep one-class (DOC), that integrates a CNN with a one-class SVM for identifying anomalous events in captured videos; its AUC is 0.914 on Ped1 and about 0.911 on Ped2 [8]. Hinami et al. (2017) studied a method addressing the crux of joint detection and recounting of abnormal events [9]. This approach uses a multi-task fast R-CNN that is first trained on richly annotated image datasets to learn generic knowledge about visual concepts such as objects, attributes, and actions. It gives 89.8% AUC on the Avenue dataset, 92.2% on Ped1, and 89.1% on Ped2. Liu et al. (2018) proposed an anomaly detection framework to identify events that do not conform to expected behaviour [10]. The network used for frame generation consists of two modules: (i) an encoder which extracts features by gradually reducing the spatial resolution, and (ii) a decoder which gradually recovers the frame by increasing the spatial resolution. They use a modified U-Net for the future frame prediction implementation.


This model produced 85.1% AUC on the CUHK Avenue dataset, 83.1% on UCSD Ped1, 95.4% on UCSD Ped2, and 72.8% on ShanghaiTech. Lloyd et al. (2017) presented a practical descriptor based on grey level co-occurrence matrix (GLCM) texture features that encodes the unique features of a crowded environment, together with an inter-frame uniformity measure that predicts the presence of violence as compared with standard crowd behaviour [4]. The method is claimed to be computationally cost effective and to offer real-time description; it provides an AUC of 0.9403 on the ViolentFlows dataset, 0.9956 on UCF web abnormality, 0.8218 on the UMN abnormal crowd dataset, and 0.9782 on the CF-violence dataset. Xu et al. (2017) proposed a method named 'Appearance and Motion DeepNet' (AMDN) that learns feature representations automatically using deep neural networks (NN) [11]. In this approach, a fusion framework combines the advantages of both the traditional early fusion and late fusion strategies; then, based on the learned features, several one-class SVM models are used to predict the anomaly score of each input. The model delivered 92.1% AUC on Ped1 and 90.8% on Ped2. Nam et al. (2015) investigated a real-time abnormal situation detection method for crowded scenes based on crowd motion characteristics, including particle energy and motion directions [12]; the model achieved 0.9901 AUC on the UCF dataset. Later, Fang et al. (2016) proposed a DL model for crowd anomaly detection in which video events are automatically represented and modelled in an unsupervised way. Employing PCANet, appearance and motion features are concurrently learned from 3D gradients, and to model abnormal incident patterns, a deep learning-based Gaussian mixture model (GMM) is trained on standard events [13]. The model offered 75.4% AUC on the Avenue dataset and 81.8% on Ped1. Similarly, Gnanavel et al. (2015) proposed a method in which the video frame is divided into 2D patches and a difference of Gaussian (DoG) filter is applied to extract the edges [14]; normalized cuts (NCuts) and Gaussian expectation–maximization (GEM) are then used to cluster similar patches, and the motion context is assigned. This model produced 0.782 AUC on Ped1. Chong et al. (2017) proposed a technique for abnormal event detection in videos named the 'spatiotemporal autoencoder'. The principle of this technique is that when an abnormal event happens, the newer frames will differ from the older frames [15]. The training phase includes a feature extractor that learns the spatial features, after which the temporal patterns of the input frames are learned by the proposed encoder–decoder. The model consists of three parts: the spatial encoder and decoder have two convolutional and deconvolutional layers each, and the temporal encoder consists of three convolutional long short-term memory (LSTM) layers. This model gives an AUC of 89.9% on Ped1, 87.4% on Ped2, 80.3% on Avenue, 84.7% on the subway entrance dataset, and 94.0% on the subway exit dataset. Recently, Ravanbakhsh et al. (2019) proposed generative adversarial nets (GANs) trained to produce only the normal distribution of the data.


This method offers 96.8% AUC on Ped1 and 95.5% on Ped2 [16]. From the literature, it is inferred that DL methods are useful for anomaly or abnormal event detection because of their computational efficiency. Hence, it is essential to identify effective DL methods that give better accuracy on a common dataset.

3 Proposed Method

Abnormal event detection is performed by implementing three different methods, and their performance is compared. The three methods implemented for the performance analysis are CLSTM autoencoders [4], CA [5], and one-class SVM [1]. The high-level work flow of the proposed comparative analysis is presented in Fig. 1.

3.1 Convolutional LSTM Autoencoders

The training video frames are divided into temporal sequences, each of size 10, using the sliding window technique. Each frame is resized to 256 × 256 to generate input images of the same resolution, with pixel values scaled between 0 and 1 by dividing each pixel by 256. Data augmentation is performed in the temporal dimension: to generate more training sequences, frames are concatenated with various skipping strides. The Adam optimizer is used with the learning rate fixed at 0.0001. The learning rate is subsequently reduced when the training loss stops decreasing, using a decay of 0.00001, and the epsilon value is set to 0.000001. The training phase of the CLSTM autoencoder is shown in Fig. 2. For initialization, the Xavier algorithm is used, which prevents the input signal from decaying or exploding as it passes through each layer. The convolutional layers reduce multiple input activations within the fixed receptive field of a filter to a single activation output; they abstract the information of a filter cuboid into a scalar value.
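The layer-by-layer architecture is given only as a figure. The sketch below is one plausible Keras realization of the described setup (10-frame, 256 × 256 grayscale sequences; Adam with the stated learning rate and epsilon); the filter counts and kernel sizes are assumptions, not the authors' exact configuration:

import tensorflow as tf
from tensorflow.keras import layers, models

def build_clstm_autoencoder():
    inp = layers.Input(shape=(10, 256, 256, 1))   # sequences of 10 frames
    # Spatial encoder, applied frame by frame
    x = layers.TimeDistributed(layers.Conv2D(64, 11, strides=4,
                                             padding='same', activation='relu'))(inp)
    x = layers.TimeDistributed(layers.Conv2D(32, 5, strides=2,
                                             padding='same', activation='relu'))(x)
    # Temporal encoder-decoder using convolutional LSTMs
    x = layers.ConvLSTM2D(32, 3, padding='same', return_sequences=True)(x)
    x = layers.ConvLSTM2D(32, 3, padding='same', return_sequences=True)(x)
    # Spatial decoder reconstructs the input sequence
    x = layers.TimeDistributed(layers.Conv2DTranspose(32, 5, strides=2,
                                                      padding='same', activation='relu'))(x)
    x = layers.TimeDistributed(layers.Conv2DTranspose(1, 11, strides=4,
                                                      padding='same', activation='sigmoid'))(x)
    model = models.Model(inp, x)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4, epsilon=1e-6),
                  loss='mse')   # trained to reconstruct normal sequences
    return model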

Fig. 1 High-level work flow of the proposed work


Fig. 2 Training phase of CLSTM autoencoder

Fig. 3 Snapshot of output of CLSTM autoencoder

On the other hand, deconvolutional layers densify the sparse signal through convolution-like operations with multiple learned filters [17]; here, a single input activation is associated with patch outputs using an operation inverse to convolution. The learned filters in the deconvolutional layers serve as a basis to rebuild the shape of an input action sequence. In the proposed comparative analysis, each input testing video is converted into 200 frames, and the sliding window technique is used to obtain consecutive ten-frame sequences. If an event is abnormal, the abnormal region is highlighted in red. Figure 3 illustrates a snapshot of the output obtained from one frame of an input testing video.
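The decision itself rests on reconstruction error. A sketch of the scoring loop over the sliding ten-frame windows follows (illustrative Python; the threshold value is an assumption to be tuned in practice):

import numpy as np

def sequence_scores(model, frames, window=10):
    """Score every consecutive ten-frame window of a 200-frame test video
    by its mean squared reconstruction error."""
    scores = []
    for start in range(len(frames) - window + 1):
        seq = frames[start:start + window][np.newaxis, ..., np.newaxis]
        recon = model.predict(seq, verbose=0)
        scores.append(float(np.mean((recon - seq) ** 2)))
    return np.array(scores)

# Windows whose error exceeds a chosen threshold are flagged abnormal, and
# the corresponding frames can be highlighted in red, as in Fig. 3
THRESHOLD = 2.5e-3   # assumed value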

3.2 Convolutional Autoencoders (CA)

The implementation of CA is similar to that of the CLSTM autoencoders; however, the number of layers inside the autoencoder may vary for different problems.


Fig. 4 Flow chart of convolutional autoencoders

The training, optimization, and testing stages are similar to those of the CLSTM autoencoders. The working flow chart of the CA approach is presented in Fig. 4.

3.3 One-Class SVM

In the one-class SVM approach, the preprocessed images are fed to an autoencoder, and feature vectors are extracted from those input images. The feature vectors are given as input to the SVM model, which is trained on only one class, i.e., normal events. By treating anomalous cases as outliers, the one-class SVM classifier avoids the task of discrimination and instead focuses on deviations from normal events, i.e., from the expected output. The outcome of the one-class SVM approach is summarized in two classes: (i) the negative case (class 0) is taken as 'normal', and (ii) the positive case (class 1) is taken as 'anomaly'. When testing samples are passed to the trained model, it classifies whether the event is anomalous or not. The working flow chart of the one-class SVM approach is displayed in Fig. 5.
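A minimal sketch of this stage with scikit-learn (the file names and hyperparameters are assumptions for illustration, not the authors' settings):

import numpy as np
from sklearn.svm import OneClassSVM

# Feature vectors extracted by the autoencoder from normal training frames
train_feats = np.load('normal_features.npy')   # assumed file name

ocsvm = OneClassSVM(kernel='rbf', nu=0.1, gamma='scale')  # assumed settings
ocsvm.fit(train_feats)                         # trained on normal events only

test_feats = np.load('test_features.npy')
pred = ocsvm.predict(test_feats)               # +1 = inlier, -1 = outlier
labels = (pred == -1).astype(int)              # class 0 = normal, class 1 = anomaly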

4 Results and Discussion

The experiments are carried out on Dell NVIDIA GPU machines with the Python software modules TensorFlow, OpenCV, and Keras. The UCSD Ped1 dataset is used, which consists of 34 training videos and 36 testing videos in greyscale (already divided into frames); the training frames contain only normal events. Figure 6 presents the ROC curve obtained from the CLSTM autoencoder in the course of abnormal event detection. The area under the curve (AUC) inferred from the receiver operating characteristic (ROC) curve for the CLSTM autoencoder is about 65.8%, and its accuracy is 62.75%. Similarly, the AUC inferred from the ROC curve for CA is about 66.3%, with an accuracy of 61.85%.


Fig. 5 Flow chart of one-class SVM approach

Fig. 6 ROC curve for CLSTM autoencoder

The ROC curve obtained for CA is presented in Fig. 8. Figures 7, 9, and 11 are the precision–recall curves, which show the trade-off between precision and recall at different thresholds for the different learning techniques. Here, a high AUC represents both high recall and high precision, where high precision relates to a low false positive rate and high recall relates to a low false negative rate. In the same fashion, the AUC inferred from the ROC curve of one-class SVM is about 69.2%, with an accuracy of 65.82%. This model has the comparatively higher AUC value because its curve has a slightly steeper slope in the true positive rate compared to the other two models, as shown in Fig. 10.


Fig. 7 Precision versus recall curve for CLSTM autoencoder

Fig. 8 ROC curve for convolution autoencoder

Table 1 summarizes the AUC and accuracy of the three methods implemented. From Table 1, it is inferred that one-class SVM performs better than the other two models, with an ROC AUC of about 69.2% and the highest accuracy of the three.
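The reported curves and scores can be computed with standard scikit-learn utilities; a small self-contained sketch (with stand-in labels and scores, not the experimental data) follows:

import numpy as np
from sklearn.metrics import roc_curve, auc, accuracy_score

# Stand-in ground-truth frame labels (0 = normal, 1 = anomaly) and scores
y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.6])

fpr, tpr, _ = roc_curve(y_true, scores)
print('AUC     :', auc(fpr, tpr))
print('Accuracy:', accuracy_score(y_true, scores > 0.5))  # assumed 0.5 cut-off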


Fig. 9 Precision versus recall curve for the convolution autoencoder

Fig. 10 ROC for one-class SVM

5 Conclusions

Identification of abnormal events from surveillance video data is essential to enhance security measures at various locations. Hence, three state-of-the-art DL methods are investigated for their applicability and feasibility in abnormal event detection. The three methods, namely CLSTM autoencoders, CA, and one-class SVM, are compared on the UCSD Ped1 dataset for classification performance.


Fig. 11 Precision versus recall curve for one-class SVM

Table 1 AUC and accuracy values of the three implemented models

Method                     AUC     Accuracy (%)
CLSTM autoencoder          0.658   62.75
Convolution autoencoder    0.663   61.85
One-class SVM              0.692   65.82

From the experiments, it is evident that one-class SVM offers the highest accuracy. Further, such automatic anomaly detection methods are required to handle the rapidly increasing volume of surveillance video data and to reduce the manpower needed to capture abnormal events. The DL methods are competent to deliver high precision in the least possible computational time. In future, the DL methods will be further optimized to propose a novel architecture that works on various datasets to detect abnormal events with maximum accuracy.

6 Future Work

A novel deep learning-based architecture for detecting abnormal events from video data will be proposed, aiming at comparatively higher accuracy and AUC than the existing state-of-the-art methods.


References
1. Amraee, S., Vafaei, A., Jamshidi, K., Adibi, P.: Abnormal event detection in crowded scenes using one-class SVM. SIViP 12(6), 1115–1123 (2018)
2. Bao, T., Karmoshi, S., Ding, C., Zhu, M.: Abnormal event detection and localization in crowded scenes based on PCANet. Multimedia Tools Appl. 76(22), 23213–23224 (2017)
3. Chen, C., Shao, Y., Bi, X.: Detection of anomalous crowd behavior based on the acceleration feature. IEEE Sens. J. 15(12), 7252–7261 (2015)
4. Lloyd, K., Rosin, P.L., Marshall, D., Moore, S.C.: Detecting violent and abnormal crowd activity using temporal analysis of grey level co-occurrence matrix (GLCM)-based texture measures. Mach. Vis. Appl. 28(3–4), 361–371 (2017)
5. Chen, M., Shi, X., Zhang, Y., Wu, D., Guizani, M.: Deep features learning for medical image analysis with convolutional autoencoder neural network. IEEE Trans. Big Data 1–1 (2017)
6. Ionescu, R.T., Smeureanu, S., Popescu, M., Alexe, B.: Detecting abnormal events in video using narrowed normality clusters. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1951–1960. IEEE (2019)
7. Luo, W., Liu, W., Gao, S.: A revisit of sparse coding based anomaly detection in stacked RNN framework. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 341–349 (2017)
8. Sun, J., Shao, J., He, C.: Abnormal event detection for video surveillance using deep one-class learning. Multimedia Tools Appl. 78(3), 3633–3647 (2019)
9. Hinami, R., Mei, T., Satoh, S.: Joint detection and recounting of abnormal events by learning deep generic knowledge. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3619–3627 (2017)
10. Liu, W., Luo, W., Lian, D., Gao, S.: Future frame prediction for anomaly detection–a new baseline. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6536–6545 (2018)
11. Xu, D., Yan, Y., Ricci, E., Sebe, N.: Detecting anomalous events in videos by learning deep representations of appearance and motion. Comput. Vis. Image Underst. 156, 117–127 (2017)
12. Nam, Y., Hong, S.: Real-time abnormal situation detection based on particle advection in crowded scenes. J. Real-Time Image Proc. 10(4), 771–784 (2015)
13. Fang, Z., Fei, F., Fang, Y., Lee, C., Xiong, N., Shu, L., Chen, S.: Abnormal event detection in crowded scenes based on deep learning. Multimedia Tools Appl. 75(22), 14617–14639 (2016)
14. Gnanavel, V.K., Srinivasan, A.: Abnormal event detection in crowded video scenes. In: Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014, pp. 441–448. Springer (2015)
15. Chong, Y.S., Tay, Y.H.: Abnormal event detection in videos using spatiotemporal autoencoder. In: International Symposium on Neural Networks, pp. 189–196. Springer (2017)
16. Ravanbakhsh, M., Sangineto, E., Nabi, M., Sebe, N.: Training adversarial discriminators for cross-channel abnormal event detection in crowds. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1896–1904. IEEE (2019)
17. Feng, Y., Yuan, Y., Xiaoqiang, L.: Learning deep event models for crowd anomaly detection. Neurocomputing 219, 548–556 (2017)

Multipurpose Advanced Assistance Smart Device for Patient Care with Intuitive Intricate Control
P. Ravi Sankar, A. Venkata Ratnam, K. Jaya Lakshmi, Akshada Muneshwar, and K. Prakash

Abstract In India, nearly 2.2% of the population suffers from a disability, as per the last Census data. Strokes, dementia, autism, spinal injuries, etc., are a few common causes of such disorders, and the affected patients require constant assistance and care. The proposed design differs from conventional ideas in that it avoids the bulky hardware setups that are uncomfortable for patients and duty staff. It senses the movements of a patient's body with an attached sensor designed using PVDF sheets and silver ink. This sensor is flexible and small, and hence can be attached to any moving part of the body such as a finger, toe, or the lower jaw. It detects the signal and assists the patient through the equipment so that they can communicate their thoughts, with intuitive intricate control, to the guardian. The model communicates between two NodeMCUs wirelessly, thereby reducing the hardware. The assistance request is displayed on the master screen, where the concerned person can see it and help the patient.

Keywords Flexible electronics · PVDF sheets · Film coating · IoT · Smart control · Wireless communication · Self-fabrication

1 Introduction

Paralysis is a loss of muscle functionality; in simple terms, it means that something is wrong with the way messages pass between the brain and the muscles. Paralysis can be complete or partial and, depending on its cause, may spread to the whole body or be confined to a small region. It generally results from nerve diseases, autoimmune diseases, Bell's palsy, strokes, spinal cord injuries, or neck treatments. Approximately 2.2% of the population suffers from disabilities according to a survey conducted by the National Statistical Office (NSO) [1]. These people need to be treated well and also need to be attended to continuously. The proposed system deals with designing a sensor which takes input from the movement of the patient's muscles and then responds by sending that information to the concerned person.


Conventional rigid systems use head movement, eye movement, neck movement, tongue movement, etc., to control equipment and interact with the surroundings. In [2], a model is designed in which a wheelchair is controlled by head movement. Head-based control uses an EEG sensor to collect various signals from the brain; this setup is known as the Emotiv EPOC headset. The headset must be worn by the patient at all times, which can be uncomfortable. In the same work, tongue movement is used to communicate the patient's thoughts to another person, which is again difficult while eating or drinking, and eye movement is tracked by using a camera aimed at the patient's retina. This methodology requires image processing and is more accurate, but it fails in low-light conditions. A few other sensors, such as the optical linear encoder (OLE), strain gauge, or potentiometer, are used to detect movements of the body [3]. The OLE sensor has the drawback of being applicable only at two angles, 0° and 90°; a strain gauge effectively saturates under large deflections; and a potentiometer does not provide good accuracy under sudden changes in movement. Eye-motion and eye-blink detection also play a key role in assisting paralysis patients [4]. These use an infrared camera fixed upon the patient's eyes without any special lighting conditions. The drawbacks of this idea are that the camera cannot work in low-light conditions, which decreases accuracy, and that continuous exposure of the eye to infrared rays may damage the cornea, aqueous humor, lens, vitreous humor, and retina. Flex sensors are widely used to communicate with disabled persons. Placing these sensors on a glove allows hand gestures to be mapped to alphabets [5–7], which are then displayed on an LCD where another person can read out the thoughts of the differently abled person. Flex sensors are also used in robotic surgery, whose main motive is to perform remote operations feasibly [8, 9], and to detect bad sleeping postures that may affect the neck, alerting the person [10]. More details on flex sensors are provided in Sect. 2. Commercial flex sensors are expensive, and with regular use their performance decays due to deformation in shape. To eliminate this drawback, we printed a sensor which is similar in performance to the commercially available flex sensor but relatively cheaper and sensitive to bends. The commercially available flex sensor costs 400 INR, whereas the 4 cm × 4 cm sensor we printed, using 1 g of silver conductive ink on a polyvinylidene fluoride (PVDF) sheet prepared from 5 g of dimethylformamide, 1 g of PVDF pellets of molecular weight 530,000, and 5 ml of acetone, cost only around 200 INR. PVDF is a semicrystalline, high-purity thermoplastic fluoropolymer with zero Gaussian curvature [11, 12]. PVDF sheets exhibit high chemical resistance, good mechanical strength, and piezoelectric and pyroelectric properties, with good processability. Such sheets have much lower melting points than other fluoro-thermoplastics.
Because of its high resistance, PVDF is used for insulating cables in the aircraft and electronics industries, high-temperature wiring,


industrial power control systems, etc. Conductivity is the property of allowing charge carriers to flow through a material; it can be framed as the inverse of resistivity. Among metals, silver has the highest conductivity, 6.30 × 10⁷ S/m at 20 °C. Hence, the sensors are made using PVDF sheets on which silver is coated in certain patterns that decide the resistivity. Further explanation of the system design is provided in Sect. 2, the working of the system is elaborated in Sect. 3, and the results are presented in Sect. 4, followed by the conclusion.

2 Proposed System

2.1 Flex Sensor

The flex sensor is the most significant component of the proposed model for capturing the movements of the patients under care. A flex sensor, or bend sensor, measures the degree of bending or deformation and converts it into a measurable resistance. It comprises conductive ink etched on a flexible base. Flex sensors are available in various lengths, and the choice depends on the requirements of the application. The sensor is made of a thin film of plastic over which copper metal is coated. It has a wide range of applications in robotics, medical appliances, computer peripherals, and many more. It can withstand temperatures in the range from −45 to 80 °C. Figure 1a shows the commercially available flex sensor [13].

Fig. 1 Sensor and interface: (a) conventional flex sensor; (b) resistance corresponding to various bending angles of the flex sensor; (c) voltage divider circuit for the flex sensor


When the sensor is bent, the conductive layer is stretched, leading to a reduction in cross-section and an increase in length. From the basic formula for resistance, we know that resistance depends on the length of the material, as shown in Eq. (1):

$$R = \rho l / A \qquad (1)$$

Thus, when the sensor is bent, we obtain a change in the value of its resistance. As the angle of bending of the flex sensor changes, the resistance also changes; Fig. 1b depicts the change in resistance at different bending angles, and the relation between the change in resistance and the degree of bending is linear. The sensor is utilized in this project through a voltage divider circuit: one pin of the flex sensor is connected to an analog pin of the microcontroller, and the second pin is grounded. The voltage divider circuit is thus formed between the power supply and the flex sensor, and the voltage given to the microcontroller is the voltage across the flex sensor. Figure 1c shows the voltage divider circuit used with the flex sensor (a numerical sketch of this relation is given at the end of this subsection).

With heavy usage, the sensitivity of such a sensor fades away, which leads to errors and is a major drawback of flex sensors. The main reason for the errors is the residual resistance formed by heavy bending, which subsequently deforms the shape of the sensor. To overcome this drawback of the conventional flex sensor, we have designed a conductive ink-based flex sensor using printed electronics. Printed electronics is the practice of using printing techniques to obtain electronic devices on various materials; different conductive inks are printed in various forms to achieve the desired functionality. The designed flex sensor has patterns printed on a polyvinylidene difluoride (PVDF) sheet using conductive silver ink, with the patterns obtained using a shadow mask. For different bends of the sheet, a resistance change occurs in the conductive path, and thus flex sensor functionality is achieved.

The conductive ink-based flex sensor is built from three main components: the conductive ink, the substrate, and the mask. The mask helps obtain the desired pattern on the substrate. Here, conductive silver ink (in paste form) is used as the sensing material; it is a blend of polymer-coated silver particles and a silver-based compound, and is a low-cost conductive ink that can be printed on a variety of substrates such as polyester, glass, and so forth. A PVDF sheet is used as the substrate; it is a non-reactive thermoplastic fluoropolymer that is lightweight and adapts to the shape of the object it is placed on [14]. Using the mask design, the desired pattern is obtained on the PVDF sheet. The silver paste is then brushed uniformly over the substrate through the mask, and the mask is removed once the desired pattern is obtained. The conductive ink-based flex sensor thus designed is flexible and lightweight. It can be placed on different body parts, such as a finger or the lower side of the lips, and the microcontroller can be programmed accordingly to interpret the needs of the patient.
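The divider relation can be made concrete with a short Python sketch; the supply voltage, fixed resistor, and 10-bit ADC range below are assumed values for illustration:

V_IN = 3.3         # assumed supply voltage
R_FIXED = 10_000   # assumed fixed divider resistor, in ohms
ADC_MAX = 1023     # 10-bit ADC full-scale count

def flex_resistance(adc_count):
    """Invert the divider: V_out = V_IN * R_flex / (R_flex + R_FIXED)."""
    v_out = V_IN * adc_count / ADC_MAX
    return R_FIXED * v_out / (V_IN - v_out)

# A larger bend raises the flex resistance and hence the ADC reading
print(flex_resistance(512))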


2.2 Proposed Model

The proposed system aims to provide assistance to persons suffering from paralysis or partial paralysis. Paralysis is the inability to control the function of muscles in some part of the body; such non-functionality can be localized or widespread, and it might affect the face, one arm or leg, or one entire side of the body. Such people find it difficult to communicate their requirements to other people, and our system aims to help them communicate their needs with simple finger/body movements. The core part of the system is the sensor, which is placed on any movable body part of the patient, such as a finger, under the lips, the toes, the wrist, or the eyelids. The system focuses on assisting persons with partial paralysis. To detect the bending and movement of the body part or finger, we use the printed flex sensor. When bending occurs, the elongation of the conductive ink forms small cracks in it; due to these cracks, the resistivity of the material changes and thus the resistance changes. The various levels of bending are mapped to different assistance requirements of the patient. The reading is then sent wirelessly, using a NodeMCU ESP8266, to an LCD display that shows what the patient requests. The NodeMCU ESP8266 is programmed using the Arduino IDE. The transmitting side of the proposed system consists of the printed flex sensor used by the patient to communicate their requirement, while the receiver side consists of the LCD display that shows the requests of the patient, sends alert messages to a mobile phone, and controls a few electrical appliances. The communication between the transmitter and the receiver parts of the system is carried out using the client–server model of the NodeMCU, as shown in Fig. 2. The server sets up the hotspot and sends a ready signal to its clients. After receiving the ready signal, the client establishes a connection with the server using the same SSID and password. A specific state number is assigned to each request of the patient. On the client side, input obtained through the flex sensor is mapped to the state number corresponding to the request of the patient. This state number is transmitted to the server side, where it is mapped to the verbal description of the request, which is displayed on the LCD screen.

Fig. 2 Block diagram for server–client communication of the proposed system

718

P. R. Shankar et al.

the transmitter and receiver side is established wirelessly. Figure 2 shows the box used on the transmitter side to accommodate the NodeMCU Esp8266 inside it and it carries the connecting wire to the flex sensor placed over the finger of the patient. Two slots have been provided to connect the sensor and to configure the microcontroller as per the customization required.

3 Working Model of the System The system consists of a sensor, transmitter module, and a receiver module. The sensor we use is a fabricated sensor with certain patterns printed on a PVDF sheet using conductive ink. Transmitter module and receiver module are connected using Wi-Fi.

3.1 Working of Printed Sensor Here the composite silver ink acts as a sensing element of which we can sense the change in resistivity the inverse of conductivity when the PVDF sheet is bend or stretched. This happens due to the appearance of cracks on the solidified portion of the ink when the structure of sheet is deformed. These cracks cause the resistance to increase along the conductive path. When the film/sheet returns to its original state, the cracks close up and the resistance of the sensor returns to the default state. This principle of connection and disconnection of cracks during bending of the sensor and change in resistance due to this gives us the functioning of a flex sensor [15–17]. Fig. 3 Sensor and transmitter module for wireless transmission of the number of bends


Fig. 4 State diagram for switching of states for assistance and control

3.2 Transmitter Module

As shown in Fig. 3, the sensor is connected to the ESP32 microcontroller as the variable resistor of a voltage divider network. When the sensor is attached to any movable part of the paralyzed person, as stated earlier, bending the sensor gives an analog signal which is converted to digital form by the ESP32. Figure 4 shows the state diagram defined for switching states depending on the digital input received from the sensor. Initially, the system remains in state 0 by default. Depending on the number of bends made by the person, the system switches to a specific state providing a specific functionality, such as asking for assistance, sending an alert message, or controlling specific appliances connected to the system. In the state diagram, state 1 is directly connected to two other states, state 2 and state 3: when the person bends the sensor once, the state switches to state 1; similarly, for two bends, state 2 is invoked. A window of 2 s is provided to detect and confirm the number of bends made. Table 1 explains the functions invoked in each of the designed states (a sketch of this bend-counting logic follows the table). The table shows a prototype design of states controlling fan speed, bed angle, sending assistance request messages, etc.; however, this can be customized as per the requirements of the person.

3.3 Working of Receiver Module

The transmitter module encodes which state the system has to be in and sends the message to the receiver, which is also an ESP32 that acts as a client. The transmission of the message is done over Wi-Fi using the Wi-Fi module of the ESP32. This establishes a server-client network for signal transmission. Depending on the

Table 1 Proposed system state assignment table

State number   State name
0              System OFF
1              Conscious
2              Assistance required
3              Give control
4              Personal assistance
5              Hungry
6              Fan control
7              Bed alignment
8              Inc. Fan Speed
9              Dec. Fan Speed
10             Inc. Bed Height
11             Dec. Bed Height

Fig. 5 Receiver module for controlling different appliances, displaying and sending alert messages

type of state, the receiver connects to a specific appliance or electronic device. Alert messages can be sent directly to a mobile phone by integrating a cloud service and a mobile application. Audio and visual feedback is provided to the person to let him know which state he is in and what he is controlling and operating on. The block diagram of the complete receiver-end equipment is shown in Fig. 5.
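A minimal sketch of the receiver-side logic is shown below: a received state number is looked up in a table that mirrors Table 1 and dispatched to a display message and an appliance action. The handler calls are illustrative placeholders, not the authors' firmware.

```python
# Minimal sketch of the receiver-side dispatch: map a received state number
# to a display message and an action. State names follow Table 1; the
# handler calls are illustrative placeholders.
STATE_TABLE = {
    0: "System OFF",
    1: "Conscious",
    2: "Assistance required",
    3: "Give control",
    4: "Personal assistance",
    5: "Hungry",
    6: "Fan control",
    7: "Bed alignment",
    8: "Inc. Fan Speed",
    9: "Dec. Fan Speed",
    10: "Inc. Bed Height",
    11: "Dec. Bed Height",
}

def handle_state(state):
    """Show the request text and trigger the matching appliance action."""
    label = STATE_TABLE.get(state, "Unknown state")
    print("LCD:", label)                  # stand-in for the LCD driver call
    if state in (8, 9):
        print("-> adjust fan relay")      # placeholder appliance control
    elif state in (10, 11):
        print("-> adjust bed actuator")
    elif state == 2:
        print("-> send alert to mobile")  # e.g., via the cloud service

handle_state(2)
```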

4 Results and Discussion

Homemade sensor: After screen printing the composite silver paste on the PVDF sheet, it is heated at 60–75 °C, thus forming the conductive paths on the substrate as shown in Fig. 6a. The flex sensor, tested before and after bending, showed a variation in resistance from 0.380 to 1 kΩ, as shown in Fig. 6b, c. Here, based on


Fig. 6 a Pattern made upon PVDF sheet using composite silver ink, b resistance when sensor is not bent, c resistance when sensor is bent

the various bending positions and angles, we have configured the system to show the various assistance and control needs of the patient. Figure 7a, b shows how the sensor can be placed at different body parts. We attached the sensor at the chin and tested a sample. The basic requirement for the system to work as expected is that the patient is able to move his body part enough to flex the sensor. Figure 7c, d shows how the proposed system can be implemented in hospitals. By installing the sensor on a movable arm of the person and a transmitter module in each patient room, the requirements of each patient can be monitored from a common LCD screen placed at a central hub. This helps nurses take care of the patients at all times without the necessity of being around constantly. This system can also be customized and used at homes for individuals who need continuous assistance and monitoring. Using this smart module, raw data can be collected and used for estimating the requirements of patients at different time slots. The raw data can also be analyzed and used for forecasting patients' future requirements and needs.

Fig. 7 a Image shows flex sensor being placed over the chin, b image shows flex sensor being bent when chin of patient extended. c Flex sensor placed on patient along with transmitter module box. d Status of patients from different rooms monitored on LCD screen


5 Conclusion

Compared to most of the available assistance devices and systems, this system has advantages like the minimal training required for the person using it, the option to customize the type of control, and the option to switch to other types of sensors as per the person's requirement. Also, the system is very light in weight to be installed on the person, and its operation is less complex compared to the bulky and complex systems available in the market. The self-made sensor is more flexible, easy to connect, and has the special feature of unchanged resistance while being used in real-time applications. Data can be stored for further processing and analysis of the patient's performance and regular requirements at time intervals. In the future, we aim to design a self-powered sensor that detects the movement of the human finger. It works on the contact-separation principle of triboelectricity [18, 19]. Such sensors can be made out of nylon and polydimethylsiloxane (PDMS) as the interacting triboelectric materials, with copper electrodes. Generation of opposite charge carriers occurs at the interface when the two triboelectric materials come in contact during bending of the fingers, thus generating a potential across it.

References

1. India's 2.2% population suffering from disability: NSO survey for July–Dec 2018. The Economic Times
2. Roy, R., Archa, S., Jose, J., Varghese, R.: A survey on different methodologies to assist paralysed patients. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 5(3) (2016)
3. Dhepekar, P., Adhav, Y.G.: Wireless robotic hand for remote operations using flex sensor. In: 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT)
4. Pandey, M., Chaudhari, K., Kumar, R.: Assistance for paralyzed patient using eye motion detection. In: 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA)
5. Joshi, H., Bhati, S., Sharma, K., Matai, V.: Detection of finger motion using flex sensor for assisting speech impaired. Int. J. Innov. Res. Sci. Eng. Technol. 6(10) (2017)
6. Tawde, M., Singh, H., Shaikh, S.: Glove for gesture recognition using flex sensor. In: 2016 International Conference on Robotics and Automation for Humanitarian Applications (RAHA)
7. Al Mamun, A., Polash, Md.S.J.K., Alamgir, F.M.: Flex sensor based hand glove for deaf and mute people. Int. J. Comput. Netw. Commun. Sec.
8. Flores, M.B.H., Siloy, C.M.B., Oppus, C., Agustin, L.: User-oriented finger-gesture glove controller with hand movement virtualization using flex sensors and a digital accelerometer. In: 2014 International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM)
9. Guo, Y.-R., Zhang, X.-C., An, N.: Monitoring neck posture with flex sensors. In: 2019 9th International Conference on Information Science and Technology (ICIST)
10. Liou, J.-C., Fang, K.-W.: Flex sensor for stroke patients identify the specific behavior with different bending situations. In: 2017 6th International Symposium on Next Generation Electronics (ISNE)
11. Koshi, M.M., Karthikeyan, S.: A survey on advanced technology for communication between deaf/dumb people using eye blink sensor and flex sensor. In: 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS)
12. Polyvinylidene Fluoride (PVDF): Complete Guide
13. Flex Sensor DataSheet. https://www.sparkfun.com/datasheets/Sensors/Flex/flex22.pdf
14. Votzke, C., Daalkhaijav, U., Johnston, Y.M.M.L.: 3D-printed liquid metal interconnects for stretchable electronics. IEEE Sens. J. 19(10) (2019)
15. Correia, V.M.G., Caparros, C., Lanceros-Méndez, S.: Development of inkjet printed strain sensors. Smart Mater. Struct. (2013)
16. Nag, A., Mukhopadhyay, S.C., Kosel, J.: Wearable flexible sensors: a review. IEEE Sens. J. 17(13) (2017)
17. Ozel, S., Skorina, E.H., Luo, M., Tao, W., Chen, F., Pan, Y., Onal, C.D.: A composite soft bending actuation module with integrated curvature sensing. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016
18. Dhakara, L., Pitchappaa, P., Tayb, F.E.H., Lee, C.: An intelligent skin based self-powered finger motion sensor integrated with triboelectric nanogenerator. Electrical and Computer Engineering, National University of Singapore, Engineering Drive 3, Singapore 117576
19. Maharjan, P., Bhatta, T., Cho, H., Park, J.Y.: A highly sensitive self-powered flex sensor for prosthetic arm and interpreting gesticulation. Micro/Nano Devices and Packaging Laboratory, Department of Electronic Engineering, Kwangwoon University, Seoul, Republic of Korea

Assessing the Role of Age, Population Density, Temperature and Humidity in the Outbreak of COVID-19 Pandemic in Ethiopia

Amit Pandey, Rajesh Kumar, Deepak Sinwar, Tesfaye Tadele, and Linesh Raja

Abstract The first case of coronavirus disease 2019 (COVID-19) was reported in Ethiopia on March 13, 2020. As of May 19, 2020, a total of 365 confirmed cases had been recorded in the several zones of Ethiopia, causing five deaths [1, 2]. As per the current statistics, the pandemic has progressed to its next phase and has started transmitting via local transmissions. The situation is alarming, and research should be carried out to reveal the factors affecting the transmission of COVID-19 in society. Initially, correlation analysis was performed to find any strong association between the environmental factors and the transmission rates of COVID-19 in the various parts of Ethiopia. The factors such as age, temperature, humidity and population density in various zones of Ethiopia were used against the reported number of COVID-19 cases to plot regression graphs showing their effects on the transmission of COVID-19 in Ethiopia. This study shows that there is a strong correlation between the age, population density, temperature, humidity, (temperature/humidity) ratio and the number of confirmed COVID-19 cases in Ethiopia.

Keywords Coronavirus · COVID-19 · Age · Population density · Temperature · Humidity

A. Pandey · R. Kumar · T. Tadele
College of Informatics, Bule Hora University, Bule Hora, Ethiopia

D. Sinwar
Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India

L. Raja (B)
Department of Computer Applications, Manipal University Jaipur, Jaipur, India

1 Introduction

The early outbreak of the COVID-19 pandemic was recorded in December 2019 in the Wuhan city of China [1–3], and on March 13, 2020, the first case of COVID-19

was recorded in Addis Ababa, Ethiopia. This confirmed case of COVID-19 was a female with an international travel history. Actions were taken immediately and the patient was quarantined. Still, the disease was able to set up its roots in the country, and as of now, there are 365 confirmed cases of COVID-19 in the country. Further, sources show that the disease has started spreading through local transmissions [4, 5]. Earlier, researchers in China studied the influence of physical factors, such as gender and age, together with environmental conditions, such as temperature and humidity, on the transmission patterns of COVID-19 in China [6–11]. Also, some prediction and projection models have been proposed based on the transmission rate of the disease in various stages of its life cycle. The existing models used for estimating the effects and transmission of the pandemic mainly emphasize exploring the disease transmission rate. They take into account the dynamic changes in the count of infected patients, or they consider factors like the effect of containment on the advancement trend of COVID-19. However, they entirely neglect the effect of environmental factors, age and population density on the transmission of COVID-19 [12–17]. Later, Bhatnagar et al. performed a descriptive analysis of COVID-19 patients in India [18]. Ethiopia is among the most populous countries on the African continent, with 78.3% of its population residing in rural parts [19]. In a country like Ethiopia, where such a large population is staying in rural parts with limited resources, it becomes critically important to study the transmission rate of COVID-19 considering the factors of population density and age. Further, the geographical location of Ethiopia is close to the equator, affecting its climate. Hence, it is important to study environmental factors like temperature and humidity to understand the transmission rate of COVID-19 in the country. The data and methods section describes the correlation techniques used in the study. Further, using the information gathered from online repositories and other news media, the result and discussion section holds the outcomes of the current study, revealing the associations of the environmental factors, age and population density with the outbreak of the COVID-19 pandemic in Ethiopia.

2 Data and Methods

2.1 Time Frame

The time frame considered for the study is 19 days, starting from the day of the first case in each zone. The selected time frame covers the duration when COVID-19 had started spreading through local transmissions within the country.


2.2 Epidemiological Data

The epidemiological data was taken from online repositories [20], and further information from news media and social media was used to validate the data. The retrieved information was categorized according to the various zones of Ethiopia.

2.3 Weather Data

The weather information was taken from timeanddate.com. It contains the temperature and humidity data for different zones of Ethiopia [21]. For the analysis, the average temperature and humidity values over the considered time frame were used.

2.4 Population Data The information regarding the population density in various zones of Ethiopia is taken from the Population and Housing Census of Ethiopia [22].

2.5 Data Analysis: Spearman's Rank Correlation, Pearson Correlation and Regression Analysis

Initially, correlation analysis is used for finding the associations among the population density, age, temperature, humidity and the COVID-19 pandemic cases in different zones of Ethiopia.

(a) Spearman's rank correlation (ϕ)

In general, Spearman's rank correlation can be expressed as

$$\phi = 1 - \frac{6\sum \delta_i^{2}}{n\left(n^{2}-1\right)} \qquad (1)$$

where ϕ is the Spearman's rank correlation coefficient, δ is the difference in the ranks given to the two variables, and n is the number of observations taken. Here,

$$\phi = \begin{cases} +1 & \text{shows a positive correlation} \\ \;\;\,0 & \text{shows there is no correlation} \\ -1 & \text{shows a negative correlation} \end{cases}$$

A positive value of ϕ indicates a positive association among the attributes. It means the value of one attribute monotonically depends on the other, i.e., increasing the value of one attribute will also increase the value of the other, and vice versa. The closer the value of ϕ is to +1, the stronger is the correlation between the attributes. In the same fashion, a negative value of ϕ depicts a negative association among the attributes, i.e., a monotonically decreasing association: with an increase in the value of one attribute, the value of the other will decrease. In this case, the closer the value of ϕ is to −1, the stronger is the negative correlation. The value of ϕ = 0 only indicates that the attributes are independent of each other and there is no association between them. Furthermore, nonzero values of ϕ close to +1 or −1 only show a strong association between the attributes, which can be linear or even nonlinear in many cases.
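As a concrete illustration, Spearman's rank correlation of two short series can be computed with SciPy as below; the values are invented for demonstration only and are not the study's data.

```python
# Spearman's rank correlation on toy data (values are illustrative only).
from scipy.stats import spearmanr

age_group_rank = [1, 2, 3, 4, 5, 6]          # e.g., ordered age bands
cases = [10, 42, 55, 38, 20, 8]              # hypothetical case counts

phi, p_value = spearmanr(age_group_rank, cases)
print(f"Spearman phi = {phi:.3f}, p = {p_value:.3f}")
```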

(b) Pearson correlation (μ)

Given n observations of the variables x and y, the sample means x̄ and ȳ can be written as

$$\bar{x} = \frac{\sum x_i}{n} \quad \text{and} \quad \bar{y} = \frac{\sum y_i}{n} \qquad (2)$$

Now, the sample variances S_xx and S_yy can be expressed as

$$S_{xx} = \frac{\sum (x_i - \bar{x})^2}{n} \quad \text{and} \quad S_{yy} = \frac{\sum (y_i - \bar{y})^2}{n} \qquad (3)$$

Further, the sample covariance S_xy can be expressed as

$$S_{xy} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{n} \qquad (4)$$

Now, in general, the Pearson correlation can be expressed as

$$\mu = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}} \qquad (5)$$

Here,

$$\mu = \begin{cases} +1 & \text{shows a positive correlation} \\ \;\;\,0 & \text{shows no relation or a nonlinear relation} \\ -1 & \text{shows a negative correlation} \end{cases}$$

A positive value of μ indicates a positive association among the attributes: increasing the value of one attribute will also increase the value of the other, and vice versa. The closer the value of μ is to +1, the stronger is the association. In a similar fashion, a negative value of μ reflects a negative association among the attributes, i.e., with an increase in the value of one attribute, the value of the other will decrease. In this case, the closer the value of μ is to −1, the stronger is the negative correlation. The value of μ = 0 only reflects that there is either no association between the attributes at all or the attributes are nonlinearly associated.
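For illustration, the sketch below computes μ step by step from Eqs. (2)–(5) with NumPy and checks it against the library routine; the two series are invented placeholders, not the study's data.

```python
# Pearson correlation computed from Eqs. (2)-(5) and checked against NumPy.
# The two series are invented for demonstration only.
import numpy as np

x = np.array([22.0, 25.0, 28.0, 30.0, 33.0])     # e.g., zone temperatures
y = np.array([5.0, 9.0, 14.0, 18.0, 25.0])       # e.g., case counts

s_xx = np.mean((x - x.mean()) ** 2)              # sample variance, Eq. (3)
s_yy = np.mean((y - y.mean()) ** 2)
s_xy = np.mean((x - x.mean()) * (y - y.mean()))  # sample covariance, Eq. (4)

mu = s_xy / np.sqrt(s_xx * s_yy)                 # Pearson correlation, Eq. (5)
print(mu, np.corrcoef(x, y)[0, 1])               # the two values should match
```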

(c) Regression analysis

Further, in this study, regression analysis has been used to plot regression graphs showing the correlation of the COVID-19 cases with the age, population density, temperature, humidity and (temperature/humidity) ratio. Here, the number of COVID-19 cases is the regressand, and the age, population density, temperature, humidity and (temperature/humidity) ratio are the regressors. The study uses both linear and nonlinear regression plots to display the associations among the considered factors and the number of COVID-19 cases. Conceptually, the linear regression can be expressed as

$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i \qquad (6)$$

where y_i is the response variable, β_0 is the intercept value, β_1 is the slope of the line, x_i is the predictor variable, and ε_i is the error in estimation of the response variable.
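A minimal worked example of Eq. (6), together with the coefficient of determination defined in item (d) below, can be written as follows; the x and y series are invented for demonstration only.

```python
# Least-squares fit of Eq. (6) and the coefficient of determination R^2
# from item (d); the x/y series are invented for demonstration only.
import numpy as np

x = np.array([100.0, 250.0, 400.0, 650.0, 900.0])   # e.g., population density
y = np.array([3.0, 6.0, 11.0, 15.0, 22.0])           # e.g., COVID-19 cases

beta1, beta0 = np.polyfit(x, y, 1)    # slope and intercept of y = b0 + b1*x
y_hat = beta0 + beta1 * x

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"beta0={beta0:.3f}, beta1={beta1:.4f}, R^2={r2:.3f}")
```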

(d) Coefficient of determination (R²)

To minimize the error, the regressor variables must be valued with extreme accuracy. R² is a measure of the variability in the output variable explained by the input variables and can be expressed as

$$R^{2} = 1 - \frac{\sum \left(y_i - \hat{y}_i\right)^{2}}{\sum \left(y_i - \bar{y}\right)^{2}}$$

where ŷ_i is the predicted value and ȳ is the mean value. The value of R² lies between 0 and 1. A value close to 0 means the model is a poor fit, and a value close to 1 means the model is a good fit. However, this alone is not sufficient to conclude that the model is a linear model.


3 Result and Discussion Tables 1 and 2 show the correlation analysis values for various associations between population density, age, temperature, humidity, (temperature/humidity) ratio for the considered time frame and the number of COVID-19 cases in various zones of Ethiopia.

3.1 Correlation Between COVID-19 Pandemic Cases and Temperature

The study considered epidemiological data for the period of 19 days, and a positive correlation was found between the COVID-19 pandemic cases and the temperatures in the various zones of Ethiopia (Table 1). The value of the Pearson correlation coefficient for the association is μ [Temperature] [COVID-19 Cases_Fst 19 Days] = 0.421831. The positive value shows that there is a good positive correlation between the number of COVID-19 pandemic cases and temperature. It also suggests that zones with higher temperature will have a greater number of COVID-19 patients. Figure 1 shows the regression plot between the temperature and the COVID-19 cases in the first 19 days. The positive slope of the regression line supports the observation that zones with higher temperature have a greater number of COVID-19 cases.

Table 1 Pearson correlation analysis between temperature, humidity, (temperature/humidity) ratio, population density and number of COVID-19 cases

Index                      Temperature  Humidity   Tem./Hum.  Population per sq. km  COVID19 Cases_Fst 19 days
Temperature                1            −0.697219  0.964659   −0.371192              0.421831
Humidity                   −0.697219    1          −0.859982  0.191917               −0.815642
Tem./Hum.                  0.964659     −0.859982  1          −0.320575              0.602778
Population per sq. km      −0.371192    0.191917   −0.320575  1                      0.375511
COVID19 Cases_Fst 19 days  0.421831     −0.815642  0.602778   0.375511               1

Table 2 Spearman's rank correlation analysis between age groups of infected people and total number of COVID-19 cases

Index                Age group   COVID19_Total cases
Age groups           1           −0.416667
COVID19_Total cases  −0.416667   1
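For reference, a Pearson correlation matrix of the same shape as Table 1 can be produced from a zone-level table in one call with pandas; the numbers below are invented placeholders, not the study's data.

```python
# Reproducing a Pearson correlation matrix like Table 1 with pandas.
# The zone-level numbers below are invented placeholders.
import pandas as pd

df = pd.DataFrame({
    "Temperature": [22, 25, 28, 30, 33],
    "Humidity": [70, 60, 55, 48, 40],
    "Population_per_sq_km": [120, 300, 90, 450, 200],
    "COVID19_Cases_Fst19days": [5, 9, 14, 18, 25],
})
df["Tem_Hum"] = df["Temperature"] / df["Humidity"]   # (temperature/humidity) ratio

print(df.corr(method="pearson").round(6))            # symmetric matrix like Table 1
```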


Fig. 1 Regression plots showing associations between temperature, humidity, (temperature/humidity) ratio, population density and number of COVID-19 cases, respectively

3.2 Correlation Between COVID-19 Pandemic Cases and Humidity

There is a strong negative association between the number of COVID-19 cases and the humidity (Table 1). The value of the Pearson correlation coefficient is μ [Humidity] [COVID-19 Cases_Fst 19 Days] = − 0.815642. The negative correlation value clearly reflects that zones with higher humidity have a smaller number of COVID-19 cases. Figure 1 shows the regression plot for the correlation between the humidity and the COVID-19 cases in the first 19 days, with a negative slope.

3.3 Correlation Between COVID-19 Pandemic Cases and (Temperature/humidity) Ratio

There is a positive association between the (temperature/humidity) ratio and the number of COVID-19 cases reported during the first 19 days in different zones of Ethiopia (Table 1). The value of the Pearson correlation coefficient is μ [(Temperature/Humidity)] [COVID-19 Cases_Fst 19 Days] = 0.602778. This positive value evidently shows that there is a strong positive association between the number of COVID-19 cases and the (temperature/humidity) ratio. It means that there will be a


larger number of COVID-19 cases in zones with a higher (temperature/humidity) ratio. Figure 1 shows the regression plot for the association of the (temperature/humidity) ratio with the number of COVID-19 cases, and its positive slope confirms that zones with a higher (temperature/humidity) ratio have a larger number of COVID-19 cases.

3.4 Correlation Between COVID-19 Pandemic Cases and Population Density

The study considered epidemiological data for the period of 19 days, and a positive correlation was found between the COVID-19 pandemic cases and the population density of the various zones in Ethiopia (Table 1). The value of the Pearson correlation coefficient for the association is μ [Population per sq km] [COVID-19 Cases_Fst 19 Days] = 0.375511. The positive value shows that there is a good positive correlation between the number of COVID-19 pandemic cases and the population density in the various zones of Ethiopia. It also suggests that zones with higher population density will have a greater number of COVID-19 patients. Figure 1 shows the regression plot for the correlation between the population densities of the various zones in Ethiopia and the number of COVID-19 cases in the first 19 days. The positive slope of the regression line means that zones with a larger population density have a larger number of COVID-19 cases.

3.5 Correlation Between COVID-19 Pandemic Cases and Age

There is a negative association between age and the number of COVID-19 cases reported (Table 2). The value of the Spearman's rank correlation coefficient is ϕ [Age Groups] [COVID-19_Total Cases] = − 0.416667. Figure 2 shows the regression plot for the association of the patients' age with the number of COVID-19 cases. The

Fig. 2 Regression plots showing associations between age and number of COVID-19 cases


regression plot is a curve of second order. It means that the COVID-19 infection is higher in persons of the middle age group, and there are fewer COVID-19 cases among children and old-aged people.

4 Conclusion

The study was performed based on temperature, humidity, age and population density in various zones of Ethiopia to understand the outbreak of COVID-19. The study shows that there is a high correlation between the considered factors and the number of COVID-19 pandemic cases in Ethiopia. Ethiopia is a country that embraces many environmental and geographical diversities, and the outcomes of this study will be helpful in taking the necessary actions for the suppression of the COVID-19 pandemic in the country.

References

1. Zhu, N., et al.: A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. (2020). https://doi.org/10.1056/NEJMoa2001017
2. Zhou, P., Yang, X., Wang, X., et al.: A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 579, 270–273 (2020). https://doi.org/10.1038/s41586-020-2012-7
3. Ghinai, I., et al.: First known person-to-person transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in the USA. The Lancet (2020). https://doi.org/10.1016/S0140-6736(20)30607-3
4. World Health Organization: Novel coronavirus (2019-nCoV). Available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019
5. COVID-19 pandemic in Ethiopia. Available at https://en.wikipedia.org/wiki/COVID-19_pandemic_in_Ethiopia
6. Poirier, C., Luo, W., Majumder, M., Liu, D., Mandl, K., Mooring, T., Santillana, M.: The role of environmental factors on transmission rates of the COVID-19 outbreak: an initial assessment in two spatial scales (March 9, 2020). Available at SSRN https://ssrn.com/abstract=3552677 or https://doi.org/10.2139/ssrn.3552677
7. Biswas, M., Rahaman, S., Biswas, T.K., Haque, Z., Ibrahim, B.: Effects of sex, age and comorbidities on the risk of infection and death associated with COVID-19: a meta-analysis of 47807 confirmed cases (3/28/2020). Available at SSRN https://ssrn.com/abstract=3566146 or https://doi.org/10.2139/ssrn.3566146
8. Barreca, A.I., Shimshack, J.P.: Absolute humidity, temperature, and influenza mortality: 30 years of county-level evidence from the United States. Am. J. Epidemiol. 176(suppl_7), S114–S122 (2012)
9. Shaman, J., Goldstein, E., Lipsitch, M.: Absolute humidity and pandemic versus epidemic influenza. Am. J. Epidemiol. 173(2), 127–135 (2011)
10. Zhang, et al.: The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19)—China, 2020. Chinese Center for Disease Control and Prevention, CCDC Weekly, Epub vol. 2, No. x (2020)
11. Haider, N., Yavlinsky, A., Simons, D., Osman, A.Y., Ntoumi, F., Zumla, A., Kock, R.: Passengers' destinations from China: low risk of novel coronavirus (2019-nCoV) transmission into Africa and South America. Epidemiol. Infect. 148(e41), 1–7 (2020). https://doi.org/10.1017/S0950268820000424
12. Du, S., Wang, J., Zhang, H., Cui, W., Kang, Z., Yang, T., Lou, B., Chi, Y., Long, H., Ma, M., Yuan, Q., Zhang, S., Zhang, D., Xin, J., Zheng, N.: Predicting COVID-19 using hybrid AI model (3/13/2020). Available at SSRN https://ssrn.com/abstract=3555202
13. Jiang, S., et al.: Mathematical models for devising the optimal SARS-CoV-2 eradication in China, South Korea, Iran, and Italy (3/19/2020). Available at SSRN https://ssrn.com/abstract=3559541 or https://doi.org/10.2139/ssrn.3559541
14. Zhang, J., Dong, L., Hang, Y., Chen, X., Yao, G., Han, Z.: Predicting the spread of the COVID-19 across cities in China with population migration and policy intervention (3/19/2020). Available at SSRN https://ssrn.com/abstract=3559554 or https://doi.org/10.2139/ssrn.3559554
15. Liu, M., Ning, J., Du, U., Cao, J., Zhang, D., Wang, J., Chen, M.: Modeling the evolution trajectory of COVID-19 in Wuhan, China: experience and suggestions (3/26/2020). Available at SSRN https://ssrn.com/abstract=3564404 or https://doi.org/10.2139/ssrn.3564404
16. Li, Q., et al.: Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia. N. Engl. J. Med. (2020). https://doi.org/10.1056/NEJMoa2001316
17. Wallinga, J., Lipsitch, M.: How generation intervals shape the relationship between growth rates and reproductive numbers. Proc. R. Soc. B Biol. Sci. 274(1609), 599–604 (2007)
18. Bhatnagar, V., et al.: Descriptive analysis of COVID-19 patients in the context of India. J. Interdiscip. Math. (2020)
19. The World Fact Book. Available at https://www.cia.gov/library/publications/the-world-factbook/geos/et.html
20. COVID-19 Coronavirus Pandemic. Available at https://www.worldometers.info/coronavirus/#countries
21. Past weather information. Available at https://www.timanddate.com/weather/?type=historic&query=Ethiopia
22. 2007 Population and Housing Census of Ethiopia. Available at https://www.csa.gov.et/index.php?option=com_rubberdoc&view=doc&id=264&format=raw&Itemid=521

Soft Computing Tool for Prediction of Safe Bearing Capacity of Soil

Narhari D. Chaudhari, Neha N. Chaudhari, and Gaurav K. Bhamare

Abstract The safe bearing capacity (SBC) of soil plays a vital role in the structural designs of civil engineering projects and is an initial requirement for structural designers. The conventional plate load test method for determining the safe bearing capacity of soil is a costly, cumbersome, and time-consuming experimental setup. Linear genetic programming (GP), a soft computing technique, can be employed in the geotechnical engineering branch as an alternative tool for the prediction of the safe bearing capacity of soil. In the present work, seven soil parameters, namely angle of internal friction, cohesion of soil, % silt and clay, % sand, % gravel, specific gravity of soil, and safe bearing capacity of soil, were collected from various sites in seven districts of Maharashtra state in India. The first six soil parameters were set as input parameters and the safe bearing capacity of soil as the output parameter for LGP model formulation. The Discipulus software was employed in model formulation. In total, 125 site datasets were employed, of which the first 88 values were used to train the models and the remaining 37 values were used for validation purposes. Good results were observed for the GP4 model (r = 0.88), which works slightly better than the GP5 model; results are poor for GP3. The GP4 model can be used for the prediction of the safe bearing capacity of soil by merely using simple input parameters which can be obtained easily, quickly and at a lower cost from the soil laboratories of academic institutions or private soil laboratories.

Keywords Genetic programming · Soft computing · Safe bearing capacity

N. D. Chaudhari (B)
Civil Engineering Department, Gokhale Education Society's R. H. Sapat College of Engineering, Management Studies and Research, Nashik, Maharashtra 422005, India

N. N. Chaudhari
Matoshri College of Engineering, Nashik, Maharashtra, India

G. K. Bhamare
Computer Engineering Department, Gokhale Education Society's R. H. Sapat College of Engineering, Management Studies and Research, Nashik, Maharashtra 422005, India
e-mail: [email protected]


1 Introduction

Civil structures like buildings, dams, and bridges are built on soils. A foundation is required to transmit the load of the structure to a large area of soil. The foundation of the structure should be designed so that the soil does not fail in shear and there is no excessive settlement of the structure. The methodology of foundation design is based on the value of the safe bearing capacity of soil, an important parameter for the structural designer. Traditionally, the safe bearing capacity of soil used by the structural designer is obtained from the plate load bearing test. Plate load bearing tests are costly, cumbersome, and time-consuming, so there is a need for an alternative technique for the estimation of the safe bearing capacity of soil. A soft computing tool can be useful to overcome this drawback. The objective of the present work is to search for a better soft computing model of linear genetic programming for the estimation of the safe bearing capacity of soil. Data-driven models, which work on the data rather than the physics of the process, can play their role here. Neural networks, fuzzy systems, genetic programming, model trees (MTs), and support vector regressions can be grouped as data-driven modeling techniques, which focus on building models that would complement or replace the 'knowledge-driven' models describing the physical behavior [1]. In the present work, seven soil parameters, namely angle of internal friction, cohesion of soil, % silt and clay, % sand, % gravel, specific gravity, and safe bearing capacity of soil, were collected from various districts (Jalgaon, Dhule, Nashik, Thane, Mumbai, Pune, and Raigad) of Maharashtra state in India. The first six soil parameters were set as input parameters and the safe bearing capacity of soil was set as the output parameter for genetic programming (GP) model formulation. Out of the four developed models, three models perform well and one model performs slightly poorly. The following sections describe the soft computing tool, the data and study area, model formulation and assessment, discussion of results, and, at the end, conclusions.

1.1 Soft Computing Tool

Genetic Programming. Genetic programming creates programs automatically to perform a selected task based on the Darwinian principle of survival of the fittest. GP developers direct the computer to perform a task by giving examples; the GP software then writes a computer program that performs the task described by the examples. GP is a dynamic, robust, and rapidly growing discipline which has been applied to various problems successfully, equaling or exceeding the best human-created solutions to many difficult problems [2–5]. Good, detailed treatments of GP may be found in Banzhaf et al. [4] and Koza [5]. Discipulus™ [6] is an LGP software package that operates directly on machine code. The LGP algorithm in Discipulus™ is simple. It starts with a population of randomly generated computer programs. These programs are the 'primordial soup' in which computerized evolution operates. GP then conducts a 'tournament' by selecting four programs from the population, also at random, and measures how well each of the four programs performs the task designated by the GP developer. The two programs that perform the task better 'win' the tournament. The GP algorithm then copies the two winner programs and transforms these copies into two new programs via crossover and mutation transformation operators; in short, the winners have 'children'. These two new child programs are then inserted into the population, replacing the two loser programs from the tournament. GP repeats these simple steps over and over until it has written a program that performs the selected task (a toy sketch of this tournament loop is given at the end of this subsection).

In the recent past, many researchers have worked with soft computing techniques in the field of geotechnical engineering. A few contributions are given in the following paragraph. Juwaied [7] used artificial intelligence in the geotechnical engineering field, overcoming lacunae of traditional methods in geotechnical engineering. Dutta et al. [8] predicted the ultimate bearing capacity of different regular-shaped skirted footings resting on sand using artificial neural networks, which gave reasonably acceptable results in the prediction of bearing capacity, overcoming the drawback of a costly and time-consuming experimental setup. Jabbar et al. [9] investigated the utility of the k-nearest neighbor (k-nn) approach in predicting the ultimate bearing capacity of soils for shallow foundations, with good results. Terezie et al. [10] studied the relationship between the estimated bearing capacity of fine-grained soils and the classes of foundation soils and concluded that the biggest influence on the approximate bearing capacity of grained soils is the consistency of the soil. Shahin [11] reviewed artificial intelligence applications in shallow foundations, presented the salient features associated with AI model development, and discussed the strengths and limitations of AI techniques compared to other modeling approaches. Sadrossadat et al. [12] developed a new design equation for the prediction of the ultimate bearing capacity of shallow foundations on granular soils using the linear genetic programming (LGP) methodology, with better prediction performance than the well-known prediction equations for the bearing capacity of shallow foundations. Chandwani et al. [13] applied two major soft computing techniques, artificial neural networks and genetic algorithms, in the field of civil engineering, which has saved time and money in obtaining bearing capacity. Alavi et al. [14] employed linear and tree-based genetic programming for solving geotechnical engineering problems with good accuracy in model results; they formulated the effective angle of shearing of soils in terms of physical properties. Mousavi et al. [15] developed new empirical equations to predict the soil deformation moduli utilizing a hybrid method coupling genetic programming and simulated annealing, called GP/SA; their models are able to estimate the soil deformation moduli with an acceptable degree of accuracy. Adarsh et al. [16] examined the potential of two soft computing techniques, namely support vector machines (SVMs) and genetic programming (GP), to predict the ultimate bearing capacity of cohesionless soils beneath shallow foundations, and found GP efficient in the accurate prediction of the ultimate bearing capacity of cohesionless soils when compared with other soft computing models. Mousavi et al. [17] developed new nonlinear solutions to estimate the soil shear strength parameters utilizing linear genetic programming (LGP). They formulated the soil cohesion intercept (c) and angle


of shearing resistance (ϕ) in terms of the basic soil physical properties. They selected the best models after developing and checking several models with different combinations of influencing parameters. The developed models were able to effectively learn the complex relationship between the soil strength parameters and their contributing factors, and the LGP models provided a significantly better prediction performance than the regression models. Heshmati et al. [18] used linear genetic programming (LGP) with various soil parameters for the prediction of soil classification with good accuracy. Davidson et al. [19] described a new method for creating polynomial regression models and compared it with stepwise regression and symbolic regression. In the present study, a different methodology is adopted as compared to the above-mentioned research work. An effort is made to minimize the work of estimating the safe bearing capacity of soil using a smaller number of inputs, which can be easily obtained from any private or government laboratory or from academic institutions by performing simple laboratory work at a lower cost.
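To make the tournament procedure described above concrete, the toy sketch below evolves candidates by repeated four-way tournaments, crossover and mutation. It is an illustrative simplification under stated assumptions: candidates are plain coefficient vectors scored by squared error, not the machine-code programs that Discipulus actually evolves.

```python
# Toy sketch of the steady-state tournament described above: four candidates
# are drawn at random, the two better ones are crossed over and mutated, and
# their offspring replace the two losers. For illustration only.
import random

def fitness(candidate, data):
    # lower is better: squared error of a linear combination of the inputs
    return sum((sum(w * x for w, x in zip(candidate, xs)) - y) ** 2
               for xs, y in data)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(c, rate=0.2):
    return [w + random.gauss(0, 0.1) if random.random() < rate else w for w in c]

def tournament_step(population, data):
    idx = random.sample(range(len(population)), 4)      # pick four at random
    idx.sort(key=lambda i: fitness(population[i], data))
    winners, losers = idx[:2], idx[2:]                  # two winners, two losers
    child1, child2 = crossover(population[winners[0]], population[winners[1]])
    population[losers[0]] = mutate(child1)              # children replace losers
    population[losers[1]] = mutate(child2)

# toy training data: y = 2*x1 + 3*x2
data = [((x1, x2), 2 * x1 + 3 * x2) for x1 in range(3) for x2 in range(3)]
pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for _ in range(2000):
    tournament_step(pop, data)
print(min(pop, key=lambda c: fitness(c, data)))   # should approach [2, 3]
```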

2 Study Area and Data

Soil investigation data were collected from 125 sites in seven districts of Maharashtra state, India. The soil parameters in the data included % gravel, % sand, % silt and clay, specific gravity of soil, cohesion of soil, angle of friction, and safe bearing capacity of soil. The first six parameters were used as input parameters and the last parameter was used as the output. All the data were for major civil engineering projects of the government and private sector. The location map is shown in Fig. 2 [20].

3 Model Formulation and Assessment

The objective of the present study is to estimate the safe bearing capacity of soil for a civil engineering project site using soil parameters such as % gravel, % sand, % silt and clay, specific gravity of soil, cohesion of soil, and angle of friction, employing LGP as the computing tool. In all, data from 125 sites were used, as mentioned above. Initially, it was decided to develop an LGP model considering all six input parameters mentioned above to estimate the safe bearing capacity of the soil. The model so developed was named GP6. Then the five most influential inputs of GP6 were used for formulating GP5, dropping the least influential input. Then, observing the impact of the inputs of the GP5 model, the least influential input of GP5 was dropped for the next model (GP4) formulation. This process of dropping inputs continued until three soil parameters served as inputs to the LGP model for the estimation of the soil safe bearing capacity. The models so developed were named GP5, GP4, and GP3 depending on the number of influential inputs. The work of model formulation was terminated at GP3, as the further two models did not yield good results, so GP2 and GP1 are not considered


Fig. 1 Flowchart (GP)

in the present study. Thus, in all, four LGP models were developed for estimating the safe bearing capacity of soil at a site. Out of 125 values, 88 (70%) were used for calibrating the models and the remaining 37 values were withheld for testing purposes. The performance of the models was checked by calculating the correlation coefficient between the observed and the estimated safe bearing capacity values and plotting scatter plots between the same for the testing dataset. Additionally, the root mean square error (RMSE), mean absolute error (MAE), and mean squared relative error (MSRE) were calculated; also, the coefficient


Fig. 2 Study area

of efficiency (CE) and index of agreement d were calculated and used to judge the performance of the models, as suggested by Dawson and Wilby [21]. The lower the values of the RMSE, MAE, and MSRE, the greater is the accuracy. CE and d indicate prediction capabilities for values different from the mean; CE varies from −∞ to +1. A CE of 0.9 and above is very satisfactory, and one below 0.8 is unsatisfactory. The following equations were used to estimate the values of MSRE, RMSE, MAE, CE, and d:

$$\mathrm{MSRE} = \frac{1}{n}\sum_{i=1}^{n} \frac{(Ba_i - Bm_i)^2}{Ba_i^2} \qquad (1)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (Ba_i - Bm_i)^2} \qquad (2)$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |Ba_i - Bm_i| \qquad (3)$$

$$\mathrm{CE} = 1 - \frac{\sum_{i=1}^{n} (Ba_i - Bm_i)^2}{\sum_{i=1}^{n} \left(Ba_i - \overline{Ba}\right)^2} \qquad (4)$$

$$d = 1 - \frac{\sum_{i=1}^{n} (Ba_i - Bm_i)^2}{\sum_{i=1}^{n} \left(\left|Bm_i - \overline{Ba}\right| + \left|Ba_i - \overline{Ba}\right|\right)^2} \qquad (5)$$

where Ba is the observed safe bearing capacity of soil, Bm is the LGP-predicted safe bearing capacity of soil, $\overline{Ba}$ is the average observed bearing capacity of soil, and n is the total number of data. In this work of predicting the soil bearing capacity using the above-stated inputs of soil parameters, genetic programming uses a fitness function to quantify the error of each candidate program; while transferring programs to the next generation, the one with the lowest fitness (error) is preferred. The fitness f(z_i) is used at each selection to calculate the error rate. In the Discipulus software, initial population values are generated and the prediction array values are set to zero; then, to transform a variable, the absolute value of the variable is used: if the value is negative, the function returns −1, else it computes 2x − 1. The fitness can be written as

$$f(z) = \frac{1}{N}\sum_{i=1}^{N} \left(Y_i - P_i\right)^{2}$$

where N is the number of training data, Y_i is the actual value of the ith training case, and P_i is the predicted value of the ith training case.
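The five statistics in Eqs. (1)–(5) translate directly into a few lines of NumPy; the sketch below is a straightforward implementation, with the observed/predicted values invented purely for illustration.

```python
# Direct NumPy implementation of Eqs. (1)-(5) for model assessment;
# Ba = observed SBC, Bm = model-estimated SBC (toy values for illustration).
import numpy as np

def assess(Ba, Bm):
    Ba, Bm = np.asarray(Ba, float), np.asarray(Bm, float)
    msre = np.mean(((Ba - Bm) / Ba) ** 2)                # Eq. (1)
    rmse = np.sqrt(np.mean((Ba - Bm) ** 2))              # Eq. (2)
    mae = np.mean(np.abs(Ba - Bm))                       # Eq. (3)
    ce = 1 - np.sum((Ba - Bm) ** 2) / np.sum((Ba - Ba.mean()) ** 2)   # Eq. (4)
    d = 1 - np.sum((Ba - Bm) ** 2) / np.sum(
        (np.abs(Bm - Ba.mean()) + np.abs(Ba - Ba.mean())) ** 2)       # Eq. (5)
    return msre, rmse, mae, ce, d

print(assess([20, 25, 30, 35, 40], [22, 24, 31, 33, 41]))
```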

4 Results and Discussions

The model with six inputs showed acceptable results, with a coefficient of correlation (r = 0.83) between the model-estimated and observed safe bearing capacity of soil; however, its coefficient of efficiency (CE) value (0.63) is low. Its MSRE, RMSE, and MAE are better, as shown in Table 1, and the index of agreement is good (0.86) for GP6. Table 1 gives the consolidated results of all four GP models, showing the coefficient of correlation between the LGP-estimated and observed safe bearing capacity of soil for the testing dataset. The models with five inputs and four inputs worked very well, with a coefficient of correlation of r = 0.88 for both, as shown in the table below, and coefficient of efficiency values of 0.75 and 0.76, respectively. There is a marginal difference between the GP5 and GP4 results. Both models can be of great use to structural designers in the civil engineering field to save money and time. The GP3 model did not show good results: its results are close to the acceptable limits except for the coefficient of efficiency (0.45), owing to which it remains a poor model. The least performance was shown by GP3 (r = 0.74) among all the models, which should not be looked upon as inferior owing to the complex nonlinear nature of the underlying natural soil, whose safe bearing capacity depends on the structure of the soil, site conditions, moisture content, etc.

Table 1 Model assessment

Model  MSRE   RMSE  MAE   CE    d     r
GP6    0.023  4.55  2.87  0.63  0.86  0.83
GP5    0.022  3.08  2.41  0.75  0.91  0.88
GP4    0.020  2.98  2.06  0.76  0.92  0.88
GP3    0.026  4.55  2.87  0.45  0.73  0.74

Note: r is the coefficient of correlation between the observed and model-estimated safe bearing capacity of soil
Scatter plots for the two models (GP4 and GP5) are shown in Figs. 3 and 4, which confirm the accuracy of the results. The sequence of inputs was randomly arranged for the first GP model with six inputs. For formulating GP5, the least influential input (% sand) was removed. Similarly, (% silt & clay) showed the least influence in GP5, so it was removed while formulating GP4. While formulating GP3, the impact of the inputs of GP4 was examined, and it was observed that (% gravel) showed the least influence, so it was removed and the three remaining influential inputs were used for GP3. It may be because the two important inputs mentioned above were removed while formulating GP3 that it has shown inferior results compared to the other models.

Fig. 3 Scatter plot for GP4: observed versus GP4-predicted SBC of soil (r = 0.88)

Fig. 4 Scatter plot for GP5: observed versus GP5-predicted SBC of soil (r = 0.88)


5 Conclusions

In the present study, it is noticed that the GP4 model performed well on all assessment parameters and can be employed for the estimation of the safe bearing capacity of soil, although it shows a marginal value of CE (0.76). This shall also provide guidelines for many small civil engineering projects, where estimating the safe bearing capacity of soil by the experimental method is often overlooked by clients as well as structural designers; with this approach, time and money are saved at competitive rates, and structures can be designed with correct data rather than assuming the safe bearing capacity without carrying out geotechnical investigations. It is also noticed that GP does not work well for fewer inputs of three, two, and one.

References

1. Solomatine, D.P., Ostfeld, A.: Data-driven modelling: some past experiences and new approaches. J. Hydroinformatics 10(1), 3–22 (2008)
2. Deschaine, L.M., Patel, J.J., Guthrie, R.G., Grumski, J.T., Ades, M.J.: Using linear genetic programming to develop a C/C++ simulation model of a waste incinerator. In: The Society for Modeling and Simulation International: Advanced Simulation Technology Conference, Seattle, WA, April 2001, pp. 41–48 (2001)
3. Deschaine, L.M.: Tackling real-world environmental challenges with linear genetic programming. PCAI 15(5), 35–37 (2000)
4. Banzhaf, W., Nordin, P., Keller, R.E., Francone, F.D.: Genetic Programming—An Introduction on the Automatic Evolution of Computer Programs and Its Applications. Morgan Kaufmann, San Francisco, USA and dpunkt, Heidelberg, Germany (1998)
5. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, USA (1992)
6. https://www.aimlearning.com. Last accessed 1 July 2020
7. Juwaied, N.S.: Application of artificial intelligence in geotechnical engineering. ARPN J. Eng. Appl. Sci. 3(8), 2764–2785 (2018)
8. Dutta, R.K., Rani, R., Gnananandarao, T.: Prediction of ultimate bearing capacity of skirted footing resting on sand using artificial neural networks. J. Soft Comput. Civ. Eng. 2(4), 34–46 (2018)
9. Jabbar, S.F., Hameed, R.I., Alwan, A.H.: The potential of non-parametric model in foundation bearing capacity prediction. Neural Comput. Appl. 30, 3235–3241 (2018)
10. Terezie, V., Nemec, F., Gombar, M., Cejka, J., Lizbetin, J.: Relationship estimated bearing capacity of fine-grained soils with respect to the classes of foundation soils. In: World Multidisciplinary Earth Sciences Symposium (WMESS), IOP Conference Series: Earth and Environmental Science, vol. 44, pp. 1–6 (2016)
11. Shahin, M.A.: A review of artificial intelligence applications in shallow foundations. Int. J. Geotech. Eng. 9(1), 49–60 (2015)
12. Sadrossadat, E., Soltani, F., Mousavi, S.M., Marandi, S.A., Alavi, A.H.: A new design equation for prediction of ultimate bearing capacity of shallow foundations on granular soils. J. Civ. Eng. Manag. 19(Sup1), s78–s90 (2014)
13. Chandwani, V., Agrawal, V., Nagar, R.: Applications of soft computing in civil engineering: a review. Int. J. Comput. Appl. 81(10), 13–20 (2013)
14. Alavi, A.H., Gandomi, A.H., Bolury, J., Mollahasani, A.: Linear and tree-based genetic programming for solving geotechnical engineering problems. In: Yang, X.S., et al. (eds.) Metaheuristics in Water Resources, Geotechnical and Transportation Engineering, Chapter 12, pp. 289–310. Elsevier (2012)
15. Mousavi, S.M., Alavi, A.H., Gandomi, A.H., Mollahasani, A.: Nonlinear genetic-based simulation of soil shear strength parameters. J. Earth Syst. Sci. 120, 1001–1022 (2012). Springer
16. Adarsh, S., Dhanya, R., Krishna, G., Merlin, R., Tina, J.: Prediction of ultimate bearing capacity of cohesionless soils using soft computing techniques. ISRN Artif. Intell. ID 628496, 10 pp (2012). https://doi.org/10.5402/2012/628496
17. Mousavi, S.M., Alavi, A.H., Mollahasani, A., Gandomi, A.H.: A hybrid computational approach to formulate soil deformation moduli obtained from PLT. Eng. Geol. 123(4), 324–332 (2011). Elsevier
18. Heshmati, A.A.R., Salehzade, H., Alavi, A.H., Gandomi, A.H., Badkobeh, A., Ghasemi, A.: On the applicability of linear genetic programming for the formulation of soil classification. Am. Eurasian J. Agric. Environ. Sci. 4(5), 575–583 (2008)
19. Davidson, J.W., Savic, D.A., Walters, G.A.: Symbolic and numerical regression: experiments and applications. Inf. Sci. 150, 95–117 (2003)
20. https://surveyofindia.gov.in. Last accessed 1 July 2020
21. Dawson, C.W., Wilby, R.L.: Hydrological modelling using artificial neural networks. Prog. Phys. Geogr. Earth Environ. 25(1), 80–108 (2001)

Smart Saline Monitoring System for Automatic Control Flow Detection and Alertness Using IoT Application

D. Ramesh Reddy, Srishti Prakash, Andukuri Dinakar, Sravan Kumar, and Prakash Kodali

Abstract Since the world's population is increasing day by day, there is a need to ramp up healthcare facilities. The most important and primary mandate for patients in hospitals is that they should be observed and treated properly, with the provision of the necessary nutrition/medicine at the right time. Apart from the various treatments that they receive from doctors, saline therapy is the most basic treatment, in which a bottle of saline water is fed to them to treat dehydration, thus improving their condition. However, the patient needs to be continuously monitored by the nurse during the whole time, which is not possible to a complete extent due to the high number of beds and the other duties assigned to nurses. Unfortunately, the patient's blood can start flowing back into the saline tube due to unavoidable conditions and carelessness. This can be fatal and life-threatening in many cases. So, to protect patients from such effects and uncontrolled flows, a smart saline bottle level monitoring and alert system has to be developed to protect their lives during feeding hours. The proposed system provides a suitable method of measuring the saline flow rate and remotely monitoring it using an IoT platform. The proposed system determines the level of the saline, gives indications at the partial and critical levels with the help of a buzzer and level LEDs, and sends notifications to the hospital staff/control room using the Blynk mobile app (IoT platform) over Wi-Fi. This automatic system implements a mechanism that will stop reverse flow when the bottle goes empty. The system is compact, cheap, and can be implemented in rural as well as urban hospitals.

Keywords Saline monitoring · Node-MCU · Sensors · Buzzer · Blynk · IoT

D. R. Reddy · S. Prakash · A. Dinakar · S. Kumar · P. Kodali (B)
ECE Department, NIT Warangal, Warangal, India
e-mail: [email protected]

D. R. Reddy
VNR VJIET, Hyderabad, India


1 Introduction

Many people are required to take proper care of their health personally or through others like a guardian, caretaker, or nurses. Recently, there have been lots of technological advancements in the healthcare sector with the use of computers, sensors and microcontrollers, hence assuring rapid patient recovery. Nowadays, technology is flourishing at a very fast rate. Automation is a need of today's generation, and we are relying much more on electronic gadgets. The primary focus of this saline monitoring system is to make human lives easier. The basic objective of the saline level indicator is to improve human lives, hence saving time and increasing efficiency. The existing healthcare system poses a problem: patients have to first stand in long queues, and once they get admitted to hospitals, the nurses or doctors have to continuously monitor them, especially when saline therapy is going on. When the saline bottle is fully consumed, it needs to be replaced immediately; if this does not happen on time, the pressure difference between the empty bottle and the blood can cause the venous blood to rush back into the tube, causing fatal results for the patient. This has already occurred several times in hospitals. Patients need some technology available to them in hospitals that will reduce this danger. Thus, keeping all things in mind, an automatic device has been proposed in this paper to avoid any mishap that could be caused to the patient in case of a lack of proper monitoring by relatives or staff. This device aims to bring a revolution in the medical sector, which is nowadays very primitive and dangerous in this respect, so that in the long run we can get guaranteed patient safety with minimum human interference. IoT (Internet of Things) connects objects through a network that is able to collect and exchange data. IoT is used to collect data from sensors and send it to the cloud. The proposed IoT-based device consists of IR sensors and corresponding LEDs for indication at three levels of the bottle: top, partial and critical. Whenever the fluid reaches its critical level, the device produces an alarming sound to alert the staff and sends a notification to the nurse or doctor using a mobile application that uses a cloud-based platform for data storage. It displays the patient details on the display screen present by his side and in the control room, and also stops the flow of fluid using an electronic valve attached to the tube. No such device has been made as of now, and it has a lot of advantages, mentioned below, as compared to present systems.

• The system overcomes the drawbacks of manually operated controlled systems.
• It is more accurate than conventional saline therapy and prevents the patients from risk.
• It curbs the harm caused to patients during saline feed due to negligence, and it both sends a notification and gives an indication to the staff.
• It measures the flow rate and sends the time it will take for the bottle to go empty, along with patient details, through the IoT-based mobile application platform over Wi-Fi.
• It is affordable, portable and instrumental at night time, since it abates the efforts of the staff to monitor the patient continuously.


• It also stops the flow of liquid once the bottle goes empty, thus preventing any mishap, with the help of an electronic valve that closes the tube.

1.1 Related Works

This paper describes a system consisting of an ATmega328, a CC2500 wireless module, a Bluetooth module, a buzzer and IR to develop an automatic saline monitoring system [1]. This paper develops the system using an IR sensor with a GSM modem that helps the nurse monitor the saline flow rate; it uses an 8051 microcontroller for action and an IR sensor to measure the flow rate of the fluid, and the sensor's output is then sent to the nurse's or doctor's mobile phone through a GSM modem for further action [2]. This system can be used to monitor saline bottles in rural hospitals; it works in three stages corresponding to LEDs: the normal stage, the warning stage, and the critical stage [3]. The widespread growth in wireless technology and mobile facilities has created a huge impact on our life, and some efforts have been made in the medical field as well. This system aims at sensing the patient's ECG through three electrodes, amplifying the signal using an AD8232, and sending it to an Arduino that processes it along with the saline level; the level is then detected with IR sensors whose output is fed to the LCD screen [4]. The main aim behind this system is to notify the hospital staff whenever the saline bottle goes empty or below the critical value: the saline bottle is installed in a load cell module that measures its weight, the output is sent to the microcontroller that converts it into volume, which is then displayed on the screen, and the alert message is sent to the staff when the bottle reaches a critical level using the Bluetooth module [5]. This biomedical system aims at developing a saline level indicator system with LEDs and a buzzer: whenever the saline level goes beyond the critical value, the buzzer beeps and the LED glows [6]. This project is based on the digitization of saline flow. It uses two push buttons and a stepper motor that rotates clockwise or anticlockwise, increasing or decreasing the flow rate; when the first push button is pressed once, 1 drop/s flows, and when pressed twice, 2 drops/s flow. One push button is used for increasing the rate and the second for decreasing it; an SMS for the same is then sent to the doctor via a GSM module [7]. The paper describes a saline monitoring system which uses a temperature sensor, load cell sensor, water pump, solenoid valve, relay and voltage regulator, all connected to an LPC2148 microcontroller: the load cell measures the amount of saline present in the bottle, and the sensors connected to the bottle send the output to the microcontroller, which sends the data to a Wi-Fi module and to an AWS (Amazon Web Services) server to get a static IP address [8]. The proposed system consists of an IR sensor as a level sensor and an Arduino as the microcontroller: whenever the saline reaches its critical level, the buzzer starts ringing and a message is sent to staff nurses or doctors; it uses a spring and DC motor to stop the flow when the bottle goes empty [9]. The proposed system consists of monitoring the saline flow using an ultrasonic sensor and regulating it using a servo motor. When the saline reaches its

748

D. R. Reddy et al.

critical level, buzzer beeps and an alert message is sent to the doctor or nurse [10]. The paper explained the development of saline flow rate monitoring system using flow rate sensor working on hall effect, Arduino as microcontroller and RF Zigbee module for the transmission and reception of signal [11]. The system consists of an infusion monitoring device, central monitor and control system. The device uses an infrared sensor to detect the drip rate, infusion capacity and transfers the data to the central monitor placed in the nurse’s room [12]. In view of providing an automatic alerting system for running out injection fluid, the system implements an RFID tag on the saline bottle that is abled when the bottle is going empty, thus transferring the data through LAN [13]. The system has been developed to monitor drip infusion rates in patients. It detects the fall of a drip by using three number constant copper foil electrodes wrapped around PVC (polyvinyl chloride) tube from saline bag [14]. The apparatus for monitoring the flow rate of infusion fluid consists of housing, alarm, processor and drop sensor. The housing being attachable to the drip chamber and the drop sensor is placed in the housing to obtain flow rate [15]. The system uses Zigbee technology to monitor the infusion rate and Cortex M3 processor technology to control it [16]. The paper proposes the accurate measurement of droplets by using the principle of data dimension reduction and logistic classifier [17]. The paper uses a liquid flow sensor and the microcontroller to control the saline flow rate using a matrix keypad or an android mobile phone [18]. The paper proposes the use of trickle implantation framework to monitor flow rate, sugar level observing gadgets and display screen and uses pressure sensor (MPX10GP) for the same [19]. In this device, the IV set is first attached to the drip chamber and the flow sensor detects the droplets. For each drop, the beam of light is broken that is then transmitted and received by IR sensor bringing a change in its output voltage. The flow rate is represented on LCD through which one can identify the volume of liquid so present in the IV set. If the device does not sense the droplet in 45 s, it will give an alarm [20]. The system uses a PIC microcontroller and sensor to sense the temperature and drip status of the patient being monitored [21].

2 Proposed System

The proposed system consists of a movable, closed box-like setup attached to the saline stand. The box contains a Node-MCU as the microcontroller, to which five IR sensors, LEDs, a buzzer and an electronic valve are interfaced. Three IR sensors are placed at the top, partial and critical levels of the bottle, with corresponding red, green and yellow LEDs; the remaining two IR sensors are attached to the dripper part of the saline tube to measure the flow rate. First, when the saline bottle is fed to the patient and the saline starts flowing, the first IR sensor detects the change in level and the red LED glows. When the fluid reaches the partial level of the bottle, the green LED glows and a notification of "Bottle Partially Filled" is sent through the Blynk IoT platform over Wi-Fi. When the liquid reaches its critical level, the yellow LED turns on, the buzzer starts beeping, a notification of "Bottle Going Empty" is sent, and the electronic valve stops the flow by closing the saline tube. Throughout the process, the IR sensors placed at the dripper keep measuring the flow rate and the time the bottle will take to go empty, sending these to the Blynk platform, which then displays them on the LCDs present by the patient's side and in the control room. Block diagrams of the proposed saline setup and the display setup are shown in Fig. 1a, b. The proposed system uses two displays:

• The display mounted by the patient's side shows the saline status, patient number, node ID, bed number, patient name and device status (active, inactive or failed). It also shows the flow rate of the fluid for that particular patient and the time taken for the bottle to empty.
• The control room display shows the above-mentioned details of every patient present in the room. Whenever patient details are entered into the application during admission, the same data is displayed in the control room.

The block diagram for the displays is shown in Fig. 1b, and the different bottle levels, their effect on the IR sensors and the corresponding actions are listed in Table 1.

Fig. 1 Block diagram: a Proposed system. b Display units used for monitoring

Table 1 Different levels with effect on IR sensors and action

Different levels | Effect on IR sensor | Action taken
Completely filled | First LED is ON | No action taken
Partially filled | Second LED is ON | Notification of bottle being partially filled is sent to the nurse
Critically filled | Third LED is ON | Notification of bottle going empty is sent to the nurse, the buzzer beeps and the electronic valve automatically closes the cannula tube
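As a minimal illustration of these decision rules, the sketch below expresses the logic of Table 1 in Python. The actual firmware runs on the Node-MCU (programmed through the Arduino IDE, Sect. 3.6); the helper functions here are hypothetical stand-ins for the real pin operations and Blynk calls.

```python
# Illustrative sketch of the decision rules in Table 1. set_led(),
# sound_buzzer(), notify_staff() and close_valve() are hypothetical
# stand-ins for the Node-MCU pin writes and Blynk notifications.

def set_led(colour, on=True):
    print(f"{colour} LED {'on' if on else 'off'}")

def sound_buzzer():
    print("buzzer beeping")

def notify_staff(message):
    print("notification:", message)

def close_valve():
    print("electronic valve closed: cannula tube blocked")

def on_level_reading(top_ir, partial_ir, critical_ir):
    """Map the three level IR sensors to the actions of Table 1."""
    if critical_ir:                  # critically filled
        set_led("yellow")
        sound_buzzer()
        notify_staff("Bottle Going Empty")
        close_valve()
    elif partial_ir:                 # partially filled
        set_led("green")
        notify_staff("Bottle Partially Filled")
    elif top_ir:                     # completely filled: indication only
        set_led("red")

on_level_reading(top_ir=False, partial_ir=True, critical_ir=False)
```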


Fig. 2 a Node-MCU microcontroller. b Infrared sensor. c Electronic valve

3 Hardware and Software

The project involves the following hardware and software components: Node-MCU, IR sensors, buzzer, LEDs, electronic valve, Arduino IDE and the Blynk IoT platform.

3.1 Node-MCU

Node-MCU is a microcontroller board used for developing IoT-based products. It supports the UART, SPI and I2C interfaces and operates at an 80–160 MHz clock frequency. It has 128 KB of RAM and 4 MB of flash memory for storing data and programs. It supports the Lua scripting language and has an inbuilt Wi-Fi module and general-purpose input/output pins. It is built around a very cheap SoC (system on chip), the ESP8266, with high processing capability, and can be powered through a USB jack. The IR sensors, buzzer, LEDs and electronic valve are interfaced to the Node-MCU to monitor the saline level in real time and update the status on the LCDs present in the patient room and at the nurse station. Using its built-in Wi-Fi, the Node-MCU stores the saline status, alerts the nurse and doctor through a mobile application and prevents the reverse flow of blood in the tube using the electronic valve. The Node-MCU is shown in Fig. 2a.

3.2 IR Sensor

An IR sensor is an electronic sensor that senses specific characteristics of its surroundings by emitting IR radiation. It consists of an IR transmitter that emits IR waves, which are reflected back from an obstacle or object to the IR receiver. IR sensors are also capable of detecting the heat of surrounding objects. The IR sensor module consists of an IR transmitter–receiver pair, an op-amp, a variable resistor and an output LED for indication, and it detects objects that are in range. The IR sensors are interfaced to the microcontroller, and the saline level is monitored through them. Figure 2b shows the IR sensor.

Smart Saline Monitoring System for Automatic Control Flow …

751

3.3 Buzzer

A buzzer is a device containing a piezoelectric crystal that produces a beeping sound when voltage is applied. It is a kind of audio signalling device generally used to generate an alarm for indication purposes, and it produces a continuous beep rather than a single beep. It operates at a voltage between 3 and 6 V DC, and the input current should be below 25 mA. It has small dimensions, low power consumption and a high volume output to alert people to act.

3.4 Indicators Set

An LED is a p–n junction diode that emits light when current passes through it: the electrons combine with holes, releasing energy in the form of photons. The colour of an LED depends on the emitted wavelength, and different LED material combinations emit different colours; for example, the combination of InGaN and SiC emits blue light. The LED set is a low-current indicator device used for level indication in the proposed system, with the different saline levels indicated by different LEDs.

3.5 Electronic Valve

The electronic valve contains two parts: a solenoid and a mechanical valve. The solenoid converts electrical energy to mechanical energy, which opens or closes the valve mechanically. The proposed system uses the electronic valve to shut off the saline bottle whenever it goes empty or the sensor detects an abnormally high or low flow rate; if any leakage is detected, the valve automatically stops the flow. The valve is connected to the tube to stop the reverse flow of blood when the saline in the bottle is completely finished. It can withstand pressure up to its rated maximum (in MPa). An electronic valve is shown in Fig. 2c.

3.6 Arduino IDE

Arduino IDE is an open-source coding platform used to write, compile and upload programs to the microcontroller. The target microcontroller board can be chosen in the Tools menu, and sensor libraries can be added to the IDE. Programs are written in C and C++. Here, the Arduino IDE is used to program the Node-MCU to monitor the saline levels and control the electronic valve.


3.7 BLYNK-IoT Platform

The Internet of Things (IoT) is the network of physical objects capable of transferring data over a network without manual or system interference. Blynk is an Internet-of-Things platform developed to build smart IoT applications: it can be used to read, store and visualize sensor data and also to control hardware remotely. The following steps are followed to create the mobile application: download the Blynk app and create a Blynk account; after logging in, create a new project; choose the hardware being used; receive the Auth Token sent to the registered account; add widgets from the Widget Box; and run the project. When a saline bottle is about to go empty, the Node-MCU sends a notification to the doctor or nurse through the mobile application. The patient details and flow rate are also recorded in the mobile application and displayed on the patient room and control room displays.
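As a hedged sketch of the laptop-side equivalent of this workflow, the snippet below pushes the saline status to the Blynk cloud using the legacy blynklib Python package. The auth token and virtual-pin numbers are placeholders; in the actual system the Node-MCU firmware performs this role, and the app's widgets raise the notifications.

```python
# Hedged sketch: pushing saline status to Blynk from Python with the legacy
# blynklib package. Token and virtual pins are placeholders; in the paper's
# system the Node-MCU firmware performs this role.
import blynklib

BLYNK_AUTH = "YourAuthToken"   # the Auth Token mailed when the project is created
blynk = blynklib.Blynk(BLYNK_AUTH)

def push_status(level_percent, flow_rate_mlps):
    blynk.virtual_write(1, level_percent)    # V1: remaining saline level (assumed pin)
    blynk.virtual_write(2, flow_rate_mlps)   # V2: measured flow rate (assumed pin)

while True:
    blynk.run()  # keeps the connection alive and processes events
```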

4 Project Setup

The proposed system consists of infrared sensors, a buzzer, an electronic valve and a Node-MCU. The IR sensors detect the saline level in the bottle, and notifications are sent to the doctor or nurse through event triggers in the Blynk mobile app over Wi-Fi. The system also incorporates an electronic valve to close the flow automatically, so that even if the nurse or caretaker is not present, the flow is shut off safely. The setup is highly compact, portable, movable and low cost. The indicative elements, namely the buzzer and the mobile-app notification, play a vital role in reminding the nurse. The setup is especially useful at night, when the hospital staff no longer need to monitor the patient continuously, as they automatically receive notifications of the bottle status. For use in rural and remote areas, the device can be operated on battery backup in low-power modes. A conventional saline setup without a monitoring system is shown in Fig. 3b, and the saline monitoring setup with sensors and indication is shown in Fig. 3a, c; Fig. 3c shows the complete setup, which has a clear edge over conventional systems in terms of functionality. The displays of the patient side and the control room are shown in Fig. 4a, b: all the patient details and the device status appear on the LCD, and the control room display contains the combined details of every patient, room wise, making it feasible to monitor patients from a distance and protect them from risk. The flowchart in Fig. 5 shows the flow of actions taken in this model: whenever a sensor detects a change, i.e. its output becomes 1, the corresponding action is taken.
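To make the flow-rate bookkeeping concrete, the sketch below shows one plausible way the timestamps of drops detected by the dripper IR sensors (Sect. 2) could be converted into a flow rate and a time-to-empty estimate. The drop volume and remaining volume are assumed nominal values, not figures from the paper.

```python
# Illustrative estimate of flow rate and time-to-empty from drop timestamps
# produced by the dripper IR sensors. DROP_VOLUME_ML is an assumed value
# (~20 drops/mL macro-drip set), not a figure from the paper.
DROP_VOLUME_ML = 0.05

def estimate(drop_timestamps_s, remaining_ml):
    if len(drop_timestamps_s) < 2:
        return None, None
    # Mean interval between consecutive drops -> drops per second.
    intervals = [b - a for a, b in zip(drop_timestamps_s, drop_timestamps_s[1:])]
    drops_per_s = 1.0 / (sum(intervals) / len(intervals))
    flow_mlps = drops_per_s * DROP_VOLUME_ML
    return flow_mlps, remaining_ml / flow_mlps   # mL/s, seconds to empty

flow, tte = estimate([0.0, 1.1, 2.0, 3.05], remaining_ml=250)
print(flow, tte)
```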


Fig. 3 a Proposed system. b Basic saline setup. c Saline monitoring with proposed system

Fig. 4 Monitoring: a patient side display. b Control room display to know overall status

5 Results

The 3D model representation of the saline monitoring system is shown in Fig. 6a–c. The proposed system can monitor the saline level, status and flow rate automatically with the use of sensors, displaying the level on the LCD through the microcontroller. It


Fig. 5 Flowchart of proposed system

Fig. 6 a–c 3D model of overall proposed system

can send data wirelessly over Wi-Fi to the doctor's or nurse's mobile, as well as display the saline status, flow rate, start time and end time of the bottle and other patient details on the LCDs present by the patient's side and in the control room. The setup box consists of IR sensors, LEDs, a buzzer and an electronic valve. Whenever the saline level reaches its critical point, the LED glows, the buzzer beeps and the electronic valve closes the saline tube for safety; at the same time, a notification is sent to the staff over Wi-Fi with the help of the Blynk IoT platform, thus protecting the patient from danger. A pictorial representation of the project has also been rendered in 3D for better understanding and visualization of its advantages. The system is affordable, portable and convenient for the hospital staff to use. It is reusable for the next saline bottle as well, and it is highly advantageous for the staff at night, as it reduces the need for nurses to go to the patient's bed repeatedly for monitoring. As it is highly reliable and of low cost, it can be implemented in urban as well as rural hospitals without incurring much cost. Figure 7a shows the saline setup box, with the IR sensors connected to the microcontroller (Node-MCU) to carry out the required functions. Figure 7c shows the creation of a new project, and Fig. 7b shows the creation of a new event through the eventor widget of the Widget Box. Figure 8a shows the Blynk notification for "Bottle partially filled" with the device


Fig. 7 a Saline setup box. b Creating new widget. c Creating new project

Fig. 8 Blynk interface and notification: a partially filled. b Going empty. c Device error

number. Figure 8b shows the notification sent when the bottle has reached the critical level ("Bottle going empty"), with the device number, and Fig. 8c shows the notification when there is a "Device Error" on a particular device. To send notifications, a new event is created through the eventor widget in the Widget Box. Whenever the second and third IR sensors detect the partial and critical levels, respectively, the eventor activates the corresponding event and sends the notification "Bottle partially filled" or "Bottle going empty" to the nurse or doctor. The patient-side display contains all relevant details of the patient along with the name of the nurse assigned to the bed.

6 Conclusion

The IoT-based saline level monitoring system will bring a big change in the way saline therapy is administered. The model will save manual effort and requires the


least human interference, as it is completely automatic. It is highly advantageous at night, when there is no need for the nurses to look after the patients during saline therapy, since notifications are sent to them when needed and when the bottle reaches its critical level. The proposed system helps the nurse monitor the saline status per bed from the control room and from the display present by the patient's side. This will also reduce the pressure and stress on doctors and nurses caused by continuous monitoring and heavy duties. The patient-side display, control room display and device status allow patients to be monitored from a distance. The system can be used at home as well, since the components used do not involve any recurring cost, and it also helps abate operational costs. The device is reliable, affordable and portable, and aims at saving patients' lives by preventing mishaps. Patients are monitored continuously in real time without frequent visits by doctors or nurses, and the chances of reverse blood flow due to carelessness are also controlled, eventually helping to reduce human errors. With such a system in place, patients can also rest assured, which in turn will help them recover more quickly.

References

1. Umchid, S., Kongsomboom, P., Buttongdee, M.: Design and development of a monitoring system for saline administration. In: Proceedings of the World Congress on Engineering 2018, vol. I, WCE 2018, July 4–6, 2018, London, U.K.
2. Gavimath, C.C., Krishnamurthy Bhat, Chayalakshmi, C.L., Hooli, R.S., Ravishankera, B.E.: Design and development of versatile saline flow rate measuring system and GSM based remote monitoring device. Int. J. Pharm. Appl. 3(1) (2012)
3. Rangsee, P., Suebsombut, P., Boonyanant, P.: Low-cost saline droplet measurement system for common patient rooms in rural public hospitals. In: The 4th Joint International Conference on Information and Communication Technology, Electronic and Electrical Engineering (JICTEE), Chiang Rai, Thailand, pp. 1–5 (2014). https://doi.org/10.1109/JICTEE.2014.6804111
4. Kalaivani, P., Thamaraiselvi, T., Sindhuja, P., Vegha, G.: Real time ECG and saline level monitoring system using Arduino UNO processor. Asian J. Appl. Sci. Technol. (AJAST) 1(2), 160–164 (2017). Available at SSRN: https://ssrn.com/abstract=2941750
5. Tawade, I.S., Pendse, M.S., Chaudhari, H.P.: Design and development of saline flow rate monitoring system using flow sensor, microcontroller and RF Zigbee module. Int. J. Eng. Res. Gen. Sci. 3(3), 472–478 (2015). ISSN 2091-2730
6. Swain, M.K., Mallick, S.K., Sabat, R.R.: Smart saline level indicator cum controller. Int. J. Appl. Innov. Eng. Manage. (IJAIEM) 4(3), 299–301 (2015)
7. Kulkarni, N., Shrivastav, S., Kumar, S., Patil, R.: Advanced automatic saline level detection & patient monitoring system. Int. J. Sci. Res. Dev. 6(3) (2018). ISSN (online): 2321-0613
8. Gangavati, S., Gawde, G.: Smart saline monitoring system. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET) 6(XI) (2018). ISSN: 2321-9653
9. Jagannathachari, A., Nair, A.R.: IOSR J. Comput. Eng. (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, pp. 13–16. www.iosrjournals.org
10. Madouron, M.N., Anitha, E., Geetha, R., Karthikayan, S., Balasundaram, B.: Automatic monitoring and regulating of saline flow rate using IoT. Int. J. Emerg. Technol. Innov. Res. 6(6), 227–231 (2019). ISSN 2349-5162. www.jetir.org


11. Kunal, P., Sekhar, R.S., Patnaik, P., Anilesh, D., Suraj, N.: Development of a wireless intravenous drip rate monitoring device. Int. J. Sens. Netw. 29, 159 (2019). https://doi.org/10.1504/IJSNET.2019.10019723
12. Yadav, S., Jain, P.: Real time cost effective e-saline monitoring and control system. In: International Conference on Control, Computing, Communication and Materials (ICCCCM), Allahabad, pp. 1–4 (2016). https://doi.org/10.1109/ICCCCM.2016.7918254
13. Huang, C.F., Lin, J.H.: A warning system based on the RFID technology for running-out of injection fluid. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2212–2215 (2011). PMID: 22254779. https://doi.org/10.1109/iembs.2011.6090418
14. Ogawa, H., Maki, H., Tsukamoto, S., Yonezawa, Y., Amano, H., Caldwell, W.M.: A new drip infusion solution monitoring system with a free-flow detection function. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, pp. 1214–1217 (2010). https://doi.org/10.1109/IEMBS.2010.5626449
15. Kumar, C.R., Vijayalakshmi, B., Karthik, S., Hanitha, R., Hemapreetha, T.: Drip rate monitor for infusion fluids. Taga J., ISSN: 1748-0345 (online), pp. 2312–2316 (2018)
16. Yang, W., Sun, L.: A novel medical infusion monitoring system based on ZigBee wireless sensor network. In: 2009 International Symposium on Web Information Systems and Applications (WISA'09), Nanchang, P.R. China, May 22–24, pp. 291–293 (2009)
17. Zhang, Y., Zhang, S., Ji, Y., Wu, G.: Intravenous infusion monitoring system based on WSN. In: IET International Conference on Wireless Sensor Network 2010 (IET-WSN 2010), Beijing, pp. 38–42 (2010). https://doi.org/10.1049/cp.2010.1024
18. Rashid, H., Shekha, S., Taslim Reza, S.M., Ahmed, I.U., Newaz, Q., Rasheduzzaman, M.: A low cost automated fluid control device using smart phone for medical application. In: 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox's Bazar, pp. 809–814 (2017). https://doi.org/10.1109/ECACE.2017.7913014
19. Rani, K.R., Shabana, N., Tanmayee, P., Loganathan, S., Velmathi, G.: Smart drip infusion monitoring system for instant alert through nRF24L01. In: 2017 International Conference on Nextgen Electronic Technologies: Silicon to Software (ICNETS2), Chennai, pp. 452–455 (2017). https://doi.org/10.1109/ICNETS2.2017.8067976
20. Vasuki, R., Dennis, C., Changer, H.P.: A portable monitoring device for measuring drip rate using an intravenous (IV) set. Int. J. Biotechnol. Trends Technol. 1(3) (2011)
21. Ramya, V., Palaniappan, B., Kumari, A.: Embedded patient monitoring system. Int. J. Embed. Syst. Appl. (IJESA) 1(2) (2011)

Comparative Study on AutoML Approach for Diabetic Retinopathy Diagnosis V. K. Harikrishnan, Harshal Deore, Pavan Raju, and Akshat Agarwal

Abstract Diabetic retinopathy is one of the common eye diseases caused by diabetes. There are mainly four types of retinopathy conditions: mild, moderate, severe and proliferative. Once retinopathy reaches the proliferative stage, the person suffers vision loss. In this study, randomly wired and NASNet AutoML models are trained to predict diabetic retinopathy from retina images. Using an architecture search technique and random graph models, an optimized architecture is obtained. A comparative analysis was done between the NASNet model and the ER, BA and WS graph-theory models within the randomly wired architecture, to understand how each algorithm impacts the architecture. The model was trained on 3652 images. The trained model achieved sensitivity and specificity above 80% on the E-Ophtha database when trained for up to 80 epochs.

Keywords Diabetic retinopathy · AutoML · Random wired network · NASNet

V. K. Harikrishnan · H. Deore · P. Raju
Heu.ai, Hyderabad, Telangana 500055, India

A. Agarwal (B)
Amity University Gurugram, Gurugram, Haryana 122413, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_60

1 Introduction

Diabetic retinopathy (DR), a microvascular complication, is mainly identified in patients who have diabetes. The occurrence of DR affects a person's ability to see and, once it reaches a severe stage, can damage the eye. The longer a person has untreated diabetes, the higher the probability of being diagnosed with DR. It is expected that around 600 million people worldwide will be suffering from DR by the end of 2030 [1]. In the early stages of DR, patients are not able to notice the onset of the disease [2]. DR is normally diagnosed using retinal images of patients [3, 4]. Diabetic retinopathy may progress through four main stages: mild non-proliferative DR, in which small balloon-like swellings, called micro-aneurysms, occur; moderate non-proliferative DR, in which veins start swelling heavily, distort and lose their capacity


to transport blood; severe non-proliferative DR, in which numerous veins get blocked, causing severe damage to the retina; and proliferative DR, in which fresh blood vessels expand in the retina. These fresh blood vessels are delicate, which makes them more inclined to leak and bleed. Once the disease reaches severe DR and higher stages, visual impairment is almost certain, which makes identifying DR in its early stages very critical. As the initial symptoms of DR are less recognizable and their assessment is prone to human error, automated DR diagnosis is the need of the hour: using computerized mass screening of patients, anomalous images can be identified and referred to ophthalmologists for further inspection [5]. The ordinary structures of the retina are the optic disc, macula and veins, while the trademark elements of DR are micro-aneurysms, haemorrhages and exudates [6]. Automated machine learning (AutoML) helps in identifying learning techniques without the need for an expert. Our focus is the neural AutoML technique, which produces an optimized architecture using deep RL; for example, using neural architecture search, researchers have developed architectures that perform at the same level as those made by human experts. Even though AutoML yields new optimized networks, it is expensive, as it requires training many networks, which demands vast computational resources [7]. Moreover, the network needs to be retrained for use in another application. This can be addressed by using a progressive search space [8] or by sharing weights among generated networks [9]. In this paper, we compare the randomly wired neural network approach [10] and the neural architecture search technique [7] for obtaining the most suitable architecture for the diagnosis of DR. The intuition behind this is that if the general idea of architecture search worked well for fixed blocks and connections, running it with a much wider search space (i.e. random connections) may lead to some unexplored configurations.

2 Background and Related Work

The main aim of automated DR systems is to identify and predict whether a person is suffering from DR using features such as micro-aneurysms (MAs), exudates or blood vessel information. Researchers have applied ResNet, DenseNet and ensemble networks to predict DR from retina images. Feature extraction has been achieved using the discrete wavelet transform (DWT) and stationary wavelet transform (SWT) coefficients, along with top-rank feature selection and AdaBoost methods. In [11], a support vector machine (SVM) classifier was trained on 4D feature vectors for identifying PDR, achieving 91% sensitivity and 96% specificity. In [12], fundus images were classified using computationally expensive neural networks and an SVM, achieving 80% accuracy. Using a binary DL classifier, DR images have been classified with normal and mild DR cases as one class and moderate, severe and proliferative cases as the second class; the model was trained on the EyePACS dataset comprising more than 100,000 images, and a sensitivity of 97% and specificity of 93.5% were achieved, surpassing human capabilities.


3 Methodology

3.1 Random Wired Architecture

With the recent success of deep learning in predicting and solving complicated tasks, there is a huge demand for automated deep learning: the effort required to design and optimize an architecture for a specific use case is large and involves a lot of trial and error by a human expert. Advancing this trend, NAS was introduced. In NAS [7], patterns and combinations of various blocks are searched, but sampling is limited to those wiring patterns generated by a hand-designed network generator; this restricts the search to those connections and excludes networks generated from scratch. In the randomly wired network approach [10], to reduce human bias in building the random network generator, graph models from graph theory were adopted, namely the Erdos–Renyi (ER), Barabasi–Albert (BA) and Watts–Strogatz (WS) models. With respect to NAS, the major change is the introduction of a random network generator, whose main function is to decide how the computational graph should be wired to achieve a better model. These graph models generate generic graphs irrespective of how they might be used in a neural network, so the generators can freely create nodes and the edges connecting them. Once a graph is generated, each directed edge is assumed to send data from one node to the other. Each node in the graph has input and output edges and can perform aggregation, transformation and distribution operations: in aggregation, the inputs are combined with a weighted sum, where the weights are learnable; after aggregation, ReLU and convolution are applied, and the transformed value is distributed via the output edges. This forms one stage of the network. As in other deep learning architectures, multiple such stages exist for progressive downsampling of the input image, with consecutive stages connected through their input and output nodes. Table 2 summarizes the different stages of the network. A random graph is identified by its number of nodes T and number of channels D (Figs. 1 and 2; Table 2).

In the ER graph model, an edge between each pair of nodes is created with probability P, independently of all other nodes and edges. The probability of generating a single connected component is high in the ER model if P satisfies the condition in Eq. 1, which reveals a bias introduced by the generator:

P > ln(N)/N (1)

In the BA graph model, a random graph is generated by adding nodes sequentially. In the initial state, there are M nodes with no edges (1 ≤ M < N). Each new node is added with M edges: it must connect to existing nodes, repeatedly adding non-duplicate edges until it has M edges. This is iterated until the graph has N nodes. The main parameter of the BA generation model is M, denoted BA(M), and a graph generated through this model contains M(N − M) edges. Despite the randomness of the generator, all graphs given by BA(M) are a subset of all possible N-node graphs.


Table 1 DR recognition systems and accuracy achieved

Automatic DR classification | Methodology used | Test results
Detection of DR-related lesions [12] | BOW with feature extraction and max pooling | AUC 94.2
Differentiate between normal and DR fundus images [13] | DWT and SWT feature extraction along with AdaBoost methods | Accuracy 94.17
Detection of PDR [14] | SVM trained using feature vectors | Sensitivity 91, Specificity 96
Grading images into different severity levels [15] | Feature extraction and training using SVM and neural networks | Accuracy 80
Detecting NPDR of different levels and PDR [12] | Vessel extraction and blood vessel detection in the retina after pre-processing images | Accuracy 85
Detection of different levels of NPDR and PDR [16] | Random forest classifier identifying haemorrhages and blood vessels near the retina | Accuracy 87.5

Fig. 1 Overview of RandWire architecture

contain M(N − M) edges. Despite randomness in the graph generator, all the graphs given by BA(M) will be a subset of all possible N node graphs. In WS Graph Model, N nodes are initially kept at equal intervals in a ring and node gets connected to its K/2 neighbouring nodes, where K is even. For every node, rewiring is done to all edges that connect the note to its clockwise next node. Rewiring is done with probability; P. Rewiring is done by choosing a random node uniformly which does not create a duplicate edge. For WS Model, K and P are the only two parameters, denoted as WS(K, P). Graphs generated using this model contain N * K edges.


Fig. 2 Randomly wired neural networks generated by the classical Watts–Strogatz (WS) model [10]

Table 2 RandWire architecture [10]

Stage | Output | Small network | Regular network
Conv1 | 112 × 112 | 3 × 3 convolution, D/2 | 3 × 3 convolution, D/2
Conv2 | 56 × 56 | 3 × 3 convolution, nodes T | Random network with nodes T/2 and channels D
Conv3 | 28 × 28 | Random network with nodes T and channels D | Random network with nodes T and channels twice D
Conv4 | 14 × 14 | Random network with nodes T and channels twice D | Random network with nodes T and channels four times D
Conv5 | 7 × 7 | Random network with nodes T and channels four times D | Random network with nodes T and channels eight times D
Classifier | 1 × 1 | 1 × 1 convolution, 1280 descriptor, then global average pool and finally a SoftMax layer | (same)


3.2 NASNet Architecture

Neural architecture search (NAS) aims to find a suitable architecture for the model being trained [7]. The search is computationally intensive and takes a lot of time. NAS consists of a search space, a search strategy and an evaluation strategy: the search space defines the possible neural architectures to consider, the search strategy explores the search space (e.g. by random search) to obtain an optimal architecture, and the evaluation strategy measures the quality of each architecture. As shown in Fig. 3, NAS has a recurrent neural network (RNN) that acts as a controller, whose role is to try out different combinations and produce an optimum architecture. Each chosen architecture is trained, and an accuracy score is calculated on the validation set; based on the accuracy obtained, a new architecture is proposed to improve on it. The controller weights are updated with the policy gradient. The controller comes up with an architecture within the search space that has a specific probability of obtaining the desired result; with the help of the child network, the chosen architecture is trained on the dataset, and once convergence is reached, the accuracy is noted. The gradient, scaled by the difference between the accuracy after training and the initial probability, is used to update the RNN.

The smallest unit of a NAS architecture is a block, and several blocks form a cell, as depicted in Fig. 4. The entire network is divided into cells, which are further divided into blocks, and this forms the search space; these search spaces are optimized for the specific dataset used for training. Normal convolutions, separable convolutions, max pooling, average pooling and identity mapping are some of the possible operations inside a block. A block combines the previous and present inputs into a feature map that acts as its output, with element-wise addition performed inside the block. Assume a block with a feature map

Fig. 3 NAS overview


Fig. 4 Taxonomy

of specific height and width inside a cell: if the stride is 2, the output size is halved, and if the stride is 1, the size is preserved. The complete network is formed based on the cell structure, the number of cells that can be stacked and the number of filters in the first layer. During the initial search iterations, the numbers of cells and filters are fixed; as the search progresses, they are optimized to match the height and width of the developed network. When the search is completed, models of different sizes are formed to fit the training dataset, and at the end the cells are interconnected in an optimized way to form the final NASNet architecture. Multiple hidden layers are formed within the architecture through pairwise operations, and each hidden layer can perform convolution and pooling operations. To make the search faster, NAS selects only the best cells, thereby obtaining better generalized features. Within the search space, as mentioned earlier, normal cells contribute towards the controller and reduction cells contribute towards the child network; the difference between the two cell types is that a normal cell keeps the same size as the input feature map, whereas a reduction cell halves the dimensions. NASNet contains a number of reduction cells placed between the normal cells.

3.3 Data and Pre-processing

For training the DR model, we used the Kaggle dataset released for the APTOS 2019 competition [17]. The dataset consists of 5 classes: no DR (0), mild DR (1), moderate DR (2), severe DR (3) and proliferative DR (4). After performing data analysis on the dataset, as shown in Fig. 5, the number of images available per class was identified. The number of non-DR images is greater than the number of DR images, and within the DR images there are fewer examples of mild DR and severe DR. Under-predicting severe DR is less harmful, since at worst the model will predict moderate DR instead of severe; but due to the smaller number of mild DR images, the model will tend to classify images containing MAs as healthy. The main reason is that MAs first appear in mild DR images, while the remaining categories offer more features for the model to use in classification. We visualized a random image from each label to get a general sense of the distinctive features that separate the classes, as shown in Fig. 6. For mild DR, we can observe at least one MA and, in some cases, slight haemorrhages will also be


Fig. 5 Label distribution of training set

Fig. 6 DR image visualization

present. In the case of moderate DR, we can observe more MAs and haemorrhages (HEM) and a slight presence of cotton-wool spots (CWS); for severe and proliferative DR, we see extensive HEM and MAs along with CWS and macular edema (ME). The major pre-processing steps taken were to reduce lighting effects by first applying a mask to the image and then resizing it to 224 × 224. On the resized image, we applied Gaussian blur to enhance it, and we cropped out the uninformative area, keeping only the necessary parts. Images were resized using bilinear interpolation and normalized using Eq. 2:

I_norm = (I − min(I)) / (max(I) − min(I)) (2)

To obtain the mask, the retina image was converted to greyscale and a tolerance value greater than 7 was set; the main purpose of this is to remove the black portions from the image and keep only the informative content. Once the image has been cropped and resized to the size required by the model, we apply Gaussian blur to enhance it, choosing a standard deviation of 10 in both the X- and Y-directions. Using the Gaussian kernel, each point of the input array is convolved and summed to produce the output array. In one dimension, the Gaussian function is given by Eq. 3:

G(x) = (1/√(2πσ²)) e^(−x²/(2σ²)) (3)


Fig. 7 Enhanced DR images

The blurred image and the original image were blended using Eq. 4:

g(x) = (1 − α) f0(x) + α f1(x) (4)

We gave the original image a weight of α = 4 and the blurred image a weight of β = −4, with a gamma value of γ = 128. The final image was obtained by applying Eq. 5:

dst = α · img1 + β · img2 + γ (5)

Figure 7 shows the final pre-processed images used in training the model. After pre-processing, the features and nerves of each fundus image are clearer, and the black background is removed to avoid unnecessary data.
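A compact sketch of this pipeline, assuming OpenCV and NumPy, is given below; the file name is a placeholder, and the crop helper is an illustrative implementation of the greyscale-tolerance mask described above.

```python
# Sketch of the pre-processing pipeline (Eqs. 2-5): tolerance mask and crop,
# bilinear resize to 224 x 224, Gaussian blur with sigma = 10, and blending
# with weights 4 / -4 and gamma 128. File name is a placeholder.
import cv2
import numpy as np

def crop_black_border(img, tol=7):
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    rows, cols = np.where(grey > tol)          # keep only informative pixels
    return img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

def preprocess(path, size=224, sigma=10, alpha=4, beta=-4, gamma=128):
    img = cv2.imread(path)
    img = crop_black_border(img)               # mask with tolerance > 7
    img = cv2.resize(img, (size, size))        # bilinear interpolation by default
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)
    return cv2.addWeighted(img, alpha, blur, beta, gamma)   # Eq. 5

enhanced = preprocess("fundus.png")            # hypothetical input image
```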

3.4 Training

For training the network, a learning rate of 0.01 was chosen, with the graph probability set at 0.75. Each node connects to its 4 nearest neighbours in the ring topology, the number of edges attached from a new node to an existing node is taken as 5, and the number of graph nodes is 32. With the above parameters kept constant, the network was trained on the different graph models by tweaking the number of channels for each node. 3662 images were used for training, and 10 images, 2 per class, were used for validation. For the WS graph model, a channel count of 78 was chosen for each node; after training for 80 epochs, the model attained a training accuracy of 70% and a validation accuracy of 82%. For the BA graph model, a channel count of 78 was chosen; after training for 80 epochs, the model attained a training accuracy of 72% and a validation accuracy of 85%. For the ER graph model, a channel count of 154 was chosen; after training for 80 epochs, the model attained a training accuracy of 74% and a validation accuracy of 85% (Table 3).


Table 3 Comparison of different graph models

Graph model | Channel count | Epochs | Training accuracy | Validation accuracy
BA | 78 | 80 | 72.3 | 85.7
WS | 78 | 80 | 69.8 | 82.1
ER | 154 | 80 | 74.6 | 85.72

4 Results

To compare the model with other works, we tested the random wired model against the exudate disease images provided by E-Ophtha [18], which gave an accuracy of 87% with a sensitivity of 86.4% and a specificity of 87.2%. The E-Ophtha dataset contains 35 healthy images without any exudate and 47 images with exudates of different severities. Testing the NASNet model against the E-Ophtha exudate images gave an accuracy of 76.8% with a sensitivity of 76.6% and a specificity of 77.1%. In the given dataset, the numbers of images for mild, severe and proliferative DR are small compared to moderate and non-DR; this makes the model better at classifying whether a retina image is healthy or not, but less accurate in distinguishing the different DR categories. Another limitation of DR classification is the absence of standards for grading DR into categories: for the same image, physicians from different countries or hospitals apply varying criteria, which makes DR prediction difficult. To minimize this, the opinions of multiple doctors are taken into consideration while preparing a DR dataset, before finalizing the category to which an image belongs (Figs. 8, 9, 10 and 11).

Fig. 8 Training and validation graph for BA model


Fig. 9 Training and validation graph for WS model

Fig. 10 Training and validation graph for ER model

Fig. 11 Training and validation graph for NASNet model


5 Conclusion

We explored the possibility of achieving a unique architecture while keeping human bias to a minimum and performed a comparative analysis of the different graph models available in graph theory. Previous attempts have mostly been limited to predicting whether a given DR image is healthy or not. With the help of the dataset provided by the APTOS team, the trained model achieved more than 80% accuracy against a standard database. Future work includes training the model with combined datasets from multiple sources and preparing an optimal architecture that could be used by different teams. In addition, the number of graph nodes and the number of edges attached to a node need to be tweaked, ensuring that the model obtained is specifically trained and tuned for the dataset or application at hand; for different application datasets or image objects, the optimal graph parameters may vary.

References

1. Guariguata, L., Nolan, T., Beagley, J., Linnenkamp, U., Jacqmain, O.: International Diabetes Federation, Diabetes Atlas. International Diabetes Federation, Brussels, Belgium (2014)
2. Gegundez-Arias, M.E.: Inter-observer reliability and agreement study on early diagnosis of diabetic retinopathy and diabetic macular edema risk. In: International Conference on Bioinformatics and Biomedical Engineering. Springer International Publishing (2016)
3. Saleh, M.D., Eswaran, C.: An automated decision support system for non-proliferative diabetic retinopathy disease based on MAs and HAs detection. Comput. Methods Programs Biomed. (2012)
4. Tamilarasi, M., Duraiswamy, K.: Genetic based fuzzy seeded region growing segmentation for diabetic retinopathy images. In: Computer Communication and Informatics (ICCCI) (2013)
5. Bhatkal, A.P., Kharat, G.: FFT based detection of diabetic retinopathy in fundus retinal images. In: Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. ACM (2016)
6. Faust, O.: Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review. J. Med. Syst. (2012)
7. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2017)
8. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition (2017)
9. Liu, C., Zoph, B., Shlens, J., Hua, W., Li, L.-J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search (2017)
10. Xie, S., Kirillov, A., Girshick, R.B., He, K.: Exploring randomly wired neural networks for image recognition (2019)
11. Pires, R.: Advancing bag-of-visual-words representations for lesion classification in retinal images (2014)
12. Mookiah, M.R.K.: Computer aided diagnosis of diabetic retinopathy using multi-resolution analysis and feature ranking framework (2013)
13. Feurer, M., Springenberg, J.T., Hutter, F.: Initializing Bayesian hyperparameter optimization via meta-learning. In: AAAI (2015)
14. Negrinho, R., Gordon, G.: DeepArchitect: automatically designing and training deep architectures (2017)
15. Wichrowska, O., Maheswaranathan, N., Hoffman, M.W., Colmenarejo, S.G., Denil, M., de Freitas, N., Sohl-Dickstein, J.: Learned optimizers that scale and generalize (2017)


16. Welikala, R.A.: Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy (2015)
17. Asia Pacific Tele-Ophthalmology Society (APTOS): Blindness detection competition (2019). https://www.kaggle.com/c/aptos2019-blindness-detection
18. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM

7-DOF Bionic Arm: Mapping Between data gloves and EMG arm band and Control by EMG arm band D. B. Suriya, K. Venkat, Aparajit Balaji, and Anjan Kumar Dash

Abstract The major challenge in dealing with a bionic arm is its control by the patient (amputee) while wearing it. The present bionic arm has 7 degrees of freedom, with a separate servo motor for each finger, one servo for the wrist and one servo for the elbow. Since a single servo cannot handle the elbow, which has to lift the whole hand, a gear assembly is added to increase the torque capacity of the servo. The bionic arm is 3D modelled and fabricated using 3D printers. Initially, a data glove with a Bluetooth module is used to verify the motion capability of all the fingers. Secondly, an EMG arm band with EMG sensors is used to get data from the elbow muscles, which is mapped to the movement of the fingers and wrist. A Python program maps the EMG sensor data to the signals from the data gloves recorded while doing activities. Based on this, the EMG sensor data is converted to rotations of the motors of the bionic arm and, finally, the bionic arm is controlled with the EMG arm band alone. The arm was tested on various tasks like lifting a bag and holding a bottle and was found to be very successful.

Keywords Bionic arm · data glove · EMG arm band · Mapping · 7 DOF

D. B. Suriya · K. Venkat · A. Balaji · A. K. Dash (B)
School of Mechanical Engineering, SASTRA Deemed to Be University, Thanjavur, Tamil Nadu 613401, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_61

1 Introduction

Research prosthetics are generally more complex in terms of mechanical design and control and monitoring systems, but are inferior to commercial devices in terms of practicality, cost and robustness [1]. The vast majority of commercial prosthetic fingers are actuated through a joint linkage system powered by DC electric motors; the problem with this type of design is that there is no control over the individual finger joints. Another critical design point in commercial prostheses is durability: the average user wears a prosthetic hand in excess of 8 h per day, so prosthetic arms for commercial use must be robust, lightweight and packaged into a closed system that can be attached to an amputee. The human hand comprises at


least 27 bones (depending on the individual) [4], more than 30 individual muscles and over 100 named ligaments, nerves and arteries. The field of prosthetics aims to mimic the function of the human body and return functionality to persons with missing limbs, yet no current prosthetic can match the dexterity, flexibility and fluidity of the human hand. The human finger has four degrees of freedom in total [3]: three of them are the rotations of the joints, which combine to control flexion and extension of the finger, while the knuckle also allows abduction/adduction (wiggling the finger from side to side). In the thumb, the lower CMC joint additionally allows adduction, giving 5 DOFs in the thumb [2]. The Bebionic is a world-leading commercial myoelectric arm. Like others of its kind, the Bebionic3 uses a predefined grip system: a user can select from 14 different grip patterns using muscle activity around the upper forearm [13]. To switch between opposed and non-opposed positions, the user must apply an external force to "click" the thumb into position [7, 12]. The Bebionic3 is controlled through electromyography (EMG) electrodes placed on the surface of the user's skin; the placement of these electrodes depends on the level of amputation but is usually around the upper forearm. The iLimb digits developed by Touch Bionics incorporate electric motors directly into the prosthetic fingers [14], which allows the palm area to fit into a socket connection attaching the prosthetic fingers to the hand. Over the past couple of years, developing 3D-printed bionic limbs has become quite popular [6]. InMoov is an independently run project developing a life-like humanoid robot with 3D printing technology. The InMoov fingers (including the thumb) have only a single degree of freedom, which limits the dexterity of the hand [8]; however, this simple solution has a great advantage over other anthropomorphic hands in that the design is low cost and easily manufactured through 3D printing [10]. In this work, the number of DOFs of the bionic arm is limited to 7 so that it is rugged and reliable as a first prototype: each finger has one motor, plus one motor for wrist flexion and one motor for the elbow. To provide the higher torque required at the elbow, gears are mounted, and all parts are 3D printed.

2 Design and Fabrication

2.1 Design of Wrist and Elbow

This project is a continuation of the design and 3D printing of the prosthetic arm [6]. The single motor present at the elbow has to actuate the whole weight of the complete bionic arm and the load. The minimum torque required for this is around 30 kg-cm (the calculation is given below). The Dynamixel smart servo used here provides a maximum torque of 15 kg-cm when connected to a 12 V, 1.5 A power supply. Since this motor does not meet the torque requirement, a gear pair is added to raise the available torque.


Fig. 1 Design of gear for the elbow Dynamixel motor to give higher torque

The final weight of the bionic arm is around one kilogram, and the total length of the arm is 30 cm, so the load is effectively 1 kg acting 30 cm away from the elbow pivot. The required torque (see the short calculation sketch after the feature list below) is therefore

T_reqd = F × d = 1 × 9.81 × 0.3 = 2.943 N·m ≈ 30 kg-cm.

Since the Dynamixel servo can provide only 15 kg-cm of torque, the torque has to be multiplied. For this, a small gear is meshed with a larger gear attached to the elbow pivot; making use of all the space available in the elbow part, the larger gear is made as large as possible to obtain high torque, as shown in Fig. 1.

Inkscape is a free and open-source vector graphics editor which can be used to create or edit vector graphics such as illustrations, line art, diagrams and charts. Inkscape's primary vector graphics format is scalable vector graphics (SVG), so using its gear plugin, the 2D SVG file of the gears was made within minutes [5, 11]. The conversion of the 2D model to a 3D model was done with FreeCAD, an open-source package similar to SolidWorks for creating 3D models [9]. The 2D SVG file is opened in FreeCAD and, using the extrude option, converted into a 3D object. Meshing is done, and finally the 3D models of the gears were ready to be fabricated. Figure 2 shows the assembly of the elbow joint with the 3D-printed gear.

Fig. 2 Elbow assembly with 3D-printed gear

Features of New Design

• For the elbow joint, the same Dynamixel motor (AX-12A) is used as for the individual fingers, with its torque capacity increased using 3D-printed planetary gears. Normally in bionic arms, a heavier motor is used for the elbow joint.
• Unlike commercially available bionic arms, which tend to have many motors, this bionic arm has an optimal number of DOFs, namely 5 for the five fingers, one for the wrist and one for the elbow. This was done keeping in mind that a low-income person should also be able to afford a bionic arm.
• Normally, when mapping the EMG arm band to the joint motors, fabricators do not use data gloves. In this research, to map the data of the EMG arm band to the joint motors, readings of the data gloves and flex sensors are used.
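The short calculation below, referenced in the torque discussion above, restates the requirement numerically and derives the minimum gear ratio it implies.

```python
# Elbow torque requirement and minimum gear ratio, following the text above.
g = 9.81                          # m/s^2
mass_kg, lever_m = 1.0, 0.3       # ~1 kg arm acting 30 cm from the elbow pivot

t_req_nm = mass_kg * g * lever_m            # 2.943 N*m
t_req_kgcm = t_req_nm / g * 100             # ~30 kg-cm
t_motor_kgcm = 15.0                         # Dynamixel AX-12A at 12 V, 1.5 A (per text)

min_ratio = t_req_kgcm / t_motor_kgcm       # = 2.0: the elbow gear needs at
print(t_req_kgcm, min_ratio)                # least twice the pinion's teeth
```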

2.2 Final Assembly

The 3D-printed model and the gears had some axial offset with respect to the frame, so they could not be mated directly; extra layers of material were used to fill the gaps in the 3D-printed parts. Finding the proper axis line and fixing the gear and elbow shaft proved to be a difficult task. Figure 3 shows the entire assembly, including the laptop, the SMPS that powers all the Dynamixel motors, the elbow gear and the bionic arm.

3 Controlling the Bionic Arm with data gloves and Flex Sensors

This section describes how the seven motors of the bionic arm are controlled with the help of data gloves (purchased from 5DT, USA) and two flex sensors mounted at the wrist and elbow (Fig. 4). The two flex sensors are directly connected to the Arduino.


Fig. 3 Final assembly-front view

Fig. 4 a data gloves and flex sensors to capture the movement of fingers, wrist and elbow. b Flowchart for controlling the five fingers of the bionic arm with the signal from 5DT data gloves. c Flowchart for controlling all the seven motors of bionic arm with signal from 5DT data gloves and two flex sensors


The data gloves have a Bluetooth module, and the signals for the five fingers are received on a laptop with the help of a Python program. Serial communication is set up between Python and the Arduino, through which the five finger signals reach the Arduino. The Arduino program checks whether all seven values have been received and then creates new control data for the Dynamixel servo motors that actuate the bionic arm. The Arduino passes this control data to the Dynamixel servos, and the bionic arm mimics the actions of the human hand in real time.
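A minimal sketch of this laptop-side bridge, assuming the pyserial package, is shown below; the COM port is a placeholder, and read_glove() stands in for the 5DT SDK call that returns the five finger values.

```python
# Sketch of the laptop-side bridge: glove readings are forwarded to the
# Arduino over USB serial. The port name and read_glove() are placeholders
# (the 5DT glove is read through its proprietary SDK).
import serial

arduino = serial.Serial("COM3", baudrate=9600, timeout=1)  # hypothetical port

def read_glove():
    # Stand-in for the 5DT SDK call returning five finger-flexure values (0-1).
    return [0.2, 0.5, 0.9, 0.1, 0.4]

while True:
    frame = ",".join(f"{v:.2f}" for v in read_glove()) + "\n"
    arduino.write(frame.encode())   # Arduino parses this and drives the servos
```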

4 Controlling the Bionic Arm with EMG arm band

4.1 Mapping Data of EMG arm band to that of data gloves and Two Flex Sensors

Figure 5 shows the EMG arm band, which has inbuilt Bluetooth, and the Arduino UNO with a BLE module. First, the Arduino is powered and the EMG arm band is switched on. The BLE module automatically finds and connects to the EMG arm band, which then sends a gesture index and quaternion values to the BLE module. The BLE module forwards the received gesture index and quaternions through its COM port (TX) to the Arduino UNO, which receives them through its COM port (RX). These quaternions represent the EMG signals and can be resolved as per the need of the user.

Mapping the flex sensor and EMG sensor signals: all seven reference sensors (the data glove's five finger sensors and the two flex sensors) are worn along with the EMG arm band. A Python program reads all the sensors, maps the new EMG signals to the reference flex sensor signals and compares both signal streams. Each gesture, say FIST (closing of all the fingers), needs to be trained within the Python environment, and considerable training is required to confirm the signal pattern for each gesture; many such gestures were trained. After successful training, the new EMG signals are recorded and saved as binary files. These EMG signals become the final control signals for the bionic arm.

Fig. 5 Collecting data from EMG arm band using an Arduino
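One plausible shape for this training step is sketched below: reference glove/flex readings and EMG frames are recorded side by side for a labelled gesture and saved as binary template files. The acquisition callables are hypothetical stand-ins for the serial reads described above.

```python
# Sketch of recording a labelled gesture: EMG frames and the 7 reference
# channels (5 glove fingers + 2 flex sensors) are captured together and
# saved as binary template files. read_emg/read_flex are hypothetical.
import numpy as np

def record_gesture(label, n_samples, read_emg, read_flex):
    emg = np.array([read_emg() for _ in range(n_samples)])    # quaternions etc.
    flex = np.array([read_flex() for _ in range(n_samples)])  # 7 reference channels
    np.save(f"gesture_{label}_emg.npy", emg)                  # binary template
    np.save(f"gesture_{label}_flex.npy", flex)
    return emg.mean(axis=0)      # a simple per-gesture EMG template

# e.g. template = record_gesture("FIST", 200, read_emg, read_flex)
```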


4.2 Control of Bionic Arm Using EMG arm band Upon successful mapping of EMG arm band signal with the data from the data gloves and flex sensors for a specific gesture, then there is no need of the data gloves and flex sensors. Now the bionic arm can be controlled only using the EMG arm band. The flowchart describes the algorithm to control the bionic arm using EMG arm band. Figure 6 shows the complete description of how to control the bionic arm using the arm band. Figure 6a shows the flowchart of the algorithm. Here one thing is to be noted down is that two Arduinos are required for the entire task—one Arduino to get the data from the arm band and the other to control the bionic arm. First the EMG arm band is worn by the human. First Arduino (say Arduino_1) is connected to BLE module which is connected to the EMG arm band. The Second Arduino (say Arduino_2) is connected to the bionic arm. The Arduinos are connected to the laptop. The Arduino_1 fetches the quaternions signals from the EMG arm band and

Fig. 6 a Flowchart of the algorithm to control the bionic arm using the EMG arm band. b Block diagram showing the connections between the different electronic components. c The physical setup made for this purpose


Fig. 7 Enlarged view of the complete setup: a Arduino_1 for capturing data from the arm band; b Arduino_2 for getting the data from Arduino_1 and actuating the servo motors of the bionic arm

Arduino_1 converts the fetched signals to binary form, compares them with the trained gesture binary files, and thus identifies the gesture performed by the human. It then passes the identified gesture to Arduino_2, whose program sends the corresponding signals to the actuators (Dynamixel AX-12A smart servos); finally, the bionic arm performs the same gesture as the human. Figure 7 shows the enlarged view of the physical setup: Fig. 7a shows Arduino_1 and the arm band capturing the signal, which is mapped to the desired finger and elbow movements as described in Sect. 4.1; Fig. 7b shows Arduino_2, which receives the mapped signal and sends it to the respective Dynamixel motors at the different parts of the bionic arm, which thereby executes the gesture. Some of the gestures demonstrating the successful execution of the entire project are shown in Fig. 8. A sketch of the laptop-side gesture dispatch is given below.
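A hedged sketch of the final dispatch stage follows. The gesture table, goal-position values (0-1023 for the AX-12A), port name, and comma-separated serial protocol are invented for illustration; in the described setup the identified gesture ultimately reaches the Dynamixel servos via Arduino_2.

```python
# Hedged sketch of the gesture dispatch; the gesture table, goal positions,
# port name, and serial protocol are all assumptions, not the authors' code.
import serial  # pyserial

GESTURES = {
    "FIST": [800, 800, 800, 800, 800, 512, 512],  # five fingers closed
    "OPEN": [200, 200, 200, 200, 200, 512, 512],  # five fingers open
}

arduino_2 = serial.Serial("COM4", 115200, timeout=1)  # port is an assumption

def send_gesture(name):
    """Send seven goal positions as one comma-separated line to Arduino_2."""
    line = ",".join(map(str, GESTURES[name])) + "\n"
    arduino_2.write(line.encode("ascii"))

send_gesture("FIST")  # Arduino_2 forwards these to the Dynamixel servos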

4.3 Risk and Performance Analysis

• The EMG arm band is very sensitive to temperature. If it is tuned in a laboratory environment that is not air-conditioned, it does not respond immediately when taken into an air-conditioned environment; it takes time to adjust. Similarly, when it is taken out of the air-conditioned environment, it again takes time to become accustomed to the non-AC environment. This would be a serious problem in a practical implementation.
• The entire arm was 3D printed at 70% infill density. This was done to facilitate machining (screwing a hole, etc.), which is not possible at a higher density. However, it reduces the strength of the arm: in this research, the arm started to crack when it was used to lift a heavier weight.


Fig. 8 Some of the gestures exhibited by the bionic arm when performed by a human wearing the EMG arm band

• The wrist joint is given only one DOF; hence, it cannot perform as a biological wrist would. This choice was made to minimize cost while still letting the user do the most important tasks.
• While fabricating the elbow joint, the holes must be made in perfect alignment. In the fabricated bionic arm there was a slight eccentricity, which would be critical in a practical implementation.
• In this research, all the motors are powered by an SMPS; powering a practical, portable implementation this way would be difficult.
• The EMG arm band gives a very good signal when mounted on the elbow muscle. However, if it is worn on the arm, the finger movements are not captured by the EMG.

5 Conclusion

In this project, an already 3D-printed bionic arm is controlled using an EMG arm band. To map the values of the EMG arm band to the exact movements of the different parts of the body, the subject first wears data gloves and two flex sensors: the data gloves measure the movement of the five fingers, and the two flex sensors capture the movements of the elbow and wrist. While the subject wears the arm band, data gloves, and flex sensors, the data collected from the arm band are mapped to those of the data gloves and flex sensors for a single gesture. This mapping is repeated several times for each gesture, the gesture is trained, and several gestures are trained in this way.


Once trained, the subject no longer has to wear the data gloves and flex sensors; the arm band alone is sufficient. The data collected from the arm band alone are then mapped and sent to the Arduino, which controls the seven motors of the bionic arm. The entire project was executed successfully. The authors acknowledge the funding received from SASTRA Deemed University under the Research and Modernization grant, 2017.


Hopping Spider Monkey Optimization

Meghna Singh, Nirmala Sharma, and Harish Sharma

Abstract The area of swarm intelligence has been expanding extensively, and new algorithms keep being formulated, inspired by the collective behavior of social animal societies. One such algorithm is the spider monkey optimization (SMO) algorithm, developed by Bansal et al. (Memetic Comput. 6(1):31–47, 2014), which is based on fission–fusion socially structured animals. It is considered a well-balanced algorithm and has outperformed many other competitive algorithms. In this paper, to advance its diversification and intensification capability, a new variant of the SMO algorithm, namely Hopping Spider Monkey Optimization (HSMO), is proposed, inspired by the hopping mechanism of a grasshopper. To testify to the efficacy and accuracy of the newly proposed variant, it is evaluated over 15 standard benchmark functions. The numerical results so obtained are analyzed and contrasted with numerous state-of-the-art algorithms available in the literature, thereby validating the newly proposed approach.

Keywords Swarm intelligence · Nature-inspired algorithm · Optimization · Expedition and exploitation · Spider monkey optimization algorithm

1 Introduction

Swarm intelligence is a field that deals with not just artificial but also natural systems comprising numerous individuals that work in coordination using concepts of self-organization and decentralized control. It focuses on the collective behaviors arising from the interaction of individuals with each other and with their environment. Swarm intelligence based algorithms are known for efficiently solving complex optimization problems, be they combinatorial or numerical ones [1, 10, 19, 21].

M. Singh (B) · N. Sharma · H. Sharma
Rajasthan Technical University, Kota, India
e-mail: [email protected]
H. Sharma
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9_62


Spider monkey optimization (SMO) is one such swarm-based algorithm, developed from the brilliant foraging behavior of monkeys belonging to the category of fission–fusion based animals. In this behavior, individuals form small temporary groups, whose membership shifts from a large group to a small one and conversely, depending upon the scarcity or sufficiency of food sources. The algorithm models the food-foraging practice of these spider monkeys. Various researchers have worked on SMO, and several modifications have been proposed to enhance the expedition (exploration) and exploitation concepts that swarm intelligence based algorithms revolve around [20–23]. It has been reported that the SMO algorithm often experiences the problems of stagnation and premature convergence [3]. To cope with these problems, an enhanced variant of SMO, i.e., Hopping SMO (HSMO), inspired by the hopping mechanism of a grasshopper, is presented in this paper. In it, a new strategy is introduced that fully explores the search space locally, which in turn helps enhance the exploitation and expedition proficiencies of the algorithm. The rest of the paper is organized as follows: the basic SMO algorithm is described in Sect. 2; the newly proposed HSMO is dealt with in Sect. 3; the standard benchmark problems, results, and statistical analysis are discussed in Sect. 4; Sect. 5 concludes the work.

2 Spider Monkey Optimization Algorithm

The algorithm consists of six major steps [3], which are elucidated in the subsections below.

2.1 Initialization Step

This is the first step of the algorithm, in which SMO generates an evenly scattered random population of Z spider monkeys, where every individual monkey SM_i (i = 1, 2, ..., Z) is a vector of dimension Dim; SM_i serves as the ith monkey in the swarm and also as a candidate solution to the considered problem. Each SM_i is initialized using Eq. 1:

SM_ij = SM_min_j + RN(0, 1) × (SM_max_j − SM_min_j)   (1)

where SM_min_j and SM_max_j are the bounds of SM_i in direction j, and RN(0, 1) is a random number distributed evenly in the range [0, 1].
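A minimal sketch of Eq. 1 in code, assuming uniform bounds for all dimensions (per-dimension bounds work the same way):

```python
# Eq. 1: uniform random initialization of the swarm (bounds are illustrative).
import numpy as np

Z, Dim = 50, 30
sm_min, sm_max = -5.12, 5.12
rng = np.random.default_rng()
# Row i is spider monkey SM_i, spread evenly within [sm_min, sm_max].
swarm = sm_min + rng.random((Z, Dim)) * (sm_max - sm_min)
```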


2.2 Local Leader (LLdr) Step

In this step, every monkey gets a chance to update its location based on the knowledge it gets from the local leader (LLdr) and fellow members of its group. The fitness of the new location is determined; if the new fitness value is better than the old one, the monkey replaces its location with the new one. The location update equation for the ith spider monkey belonging to the vth local group is:

SMnew_ij = SM_ij + RN(0, 1) × (LLdr_vj − SM_ij) + RN(−1, 1) × (SM_rj − SM_ij)   (2)

where SMnew_ij is the new enhanced solution, LLdr_vj is the LLdr of the vth group in dimension j, SM_rj is a randomly selected monkey in dimension j, and RN(0, 1) and RN(−1, 1) are random numbers distributed evenly in the ranges [0, 1] and [−1, 1], respectively.
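Eq. 2 for a single monkey can be written out directly; the vectors below are random stand-ins for the monkey, its local leader, and a random fellow member:

```python
# One application of the local-leader update (Eq. 2); all vectors are
# illustrative stand-ins, not values from the paper.
import numpy as np

rng = np.random.default_rng()
Dim = 30
sm_i = rng.uniform(-5.12, 5.12, Dim)   # monkey i
ll_v = rng.uniform(-5.12, 5.12, Dim)   # local leader of group v
sm_r = rng.uniform(-5.12, 5.12, Dim)   # randomly selected fellow member

sm_new = (sm_i
          + rng.random(Dim) * (ll_v - sm_i)           # pull towards the leader
          + rng.uniform(-1, 1, Dim) * (sm_r - sm_i))  # random social component
```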

2.3 Global Leader (GLdr) Step

This step gives all the spider monkeys a chance to update their location using what they learn from the global leader (GLdr) and from the members of their local group. The location update equation for this step is:

SMnew_ij = SM_ij + RN(0, 1) × (GLdr_j − SM_ij) + RN(−1, 1) × (SM_rj − SM_ij)   (3)

where GLdr_j represents the GLdr's location in dimension j, and j ∈ {1, 2, ..., Dim} is a randomly chosen index. The locations of the monkeys are updated based on a probability factor probab_i, which is a function of fitness; in this way, a superior individual gets more opportunities to become even better. probab_i is calculated as follows:

probab_i = 0.9 × (fitness_i / maximum_fitness) + 0.1   (4)

where fitness_i is the fitness value of the ith monkey and maximum_fitness is the maximum fitness held in the whole group.
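A small sketch of Eq. 4, with made-up fitness values, shows how the probability stays in (0.1, 1.0] and peaks at the fittest monkey:

```python
# Eq. 4 with made-up fitness values: probabilities lie in (0.1, 1.0]
# and the fittest monkey gets exactly 1.0.
import numpy as np

fitness = np.array([0.82, 0.55, 0.21, 0.97])   # illustrative values
probab = 0.9 * fitness / fitness.max() + 0.1
print(probab)  # approximately [0.861 0.610 0.295 1.0]
```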


2.4 Learning Step of Global Leader

In this step, the location of the individual monkey having the leading fitness amid the whole swarm is taken as the updated location of the GLdr, using the greedy selection concept. Furthermore, the location of the global leader is checked for improvement; if it has not improved, the global limit count (GLC) is raised by 1.

2.5 Learning Step of Local Leader

Here, the location of the monkey having the leading fitness within a specific group is taken as the updated location of its LLdr, again using the greedy selection concept. The updated location of the LLdr is compared with the previous one, and if it shows no improvement, the local limit count (LLC) is raised by 1.

2.6 Decision Step for Local Leader

In this step, if the LLdr has not improved up to a predefined threshold called the local leader's limit, then each member of that small group tries to update its location either by random initialization or by using the mixed information gathered from the local and global leaders via Eq. 5:

SMnew_ij = SM_ij + RN(0, 1) × (GLdr_j − SM_ij) + RN(−1, 1) × (SM_ij − LL_vj)   (5)

As per this equation, the updated spider monkey is attracted towards the GLdr and driven away from the LLdr.

2.7 Decision Step for Global Leader

In this step, the global leader (GLdr) is kept under constant monitoring, and if it has not improved up to a predefined threshold called the GLdr's limit, the GLdr splits the community into smaller groups: first into 2 groups, then 3, and so on, until the maximum group (MG) count is attained. A local leader learning process is run after every division to designate an LLdr for each newly built group.


If the maximum number of groups has been formed and the GLdr's location still shows no improvement, the GLdr fuses all the small groups into a single one.

3 Hopping Spider Monkey Optimization (HSMO) Algorithm

Grasshoppers are a group of insects belonging to the suborder Caelifera. They dwell on the ground and have powerful hind legs which make them capable of leaping away from threats [4]. Locust is the more general term used when grasshoppers change color and behavior and form swarms; these swarms can have horrendous effects, as locusts are serious pests and plant eaters. By extending their large hind legs backward and pushing them against the surface they are on, grasshoppers are propelled into the air by the reaction force [12]. They jump either to move from one place to another or to escape from a predator. When jumping to escape, grasshoppers selectively apply strong pressure to increase their take-off velocity, as this determines the range. Muscle cannot simultaneously contract with both high velocity and high force [13]; to overcome this, grasshoppers use a catapult mechanism to intensify the mechanical power developed by their muscles. Their jump is carried out in three stages [6, 7]. First, the grasshopper flexes the lower part of its leg, the tibia, against the upper part, the femur, by stimulating the muscles associated with both parts. The second stage is co-contraction, wherein the extensor tibia muscle and flexor tibia muscle work in coordination: the flexor tibia muscle contracts and the tibia stays fully flexed [12]. Although the flexor muscle is weaker than the extensor muscle, a special property of the joint gives it a larger mechanical advantage. This co-contraction lasts about half a second, during which the extensor muscle shortens and elastic energy is stored in the distortion of the leg [4]; this contraction of the extensor muscle is very slow, leading to the development of a high force. The third and final stage of the jump occurs when the flexor muscle relaxes, releasing the flexed tibia. The grasshopper thus follows a catapult mechanism [17], wherein the stiff structure acts like an elastic band or the bow of a bow-and-arrow: energy is put into the store at low power by slow but strong muscle contraction, and retrieved from the store at high power by rapid relaxation of the elastic structures [5, 25]. The strategy also relates to the well-known concept of ballistic movement given by Sir Isaac Newton in the seventeenth century [11], which says that the motion of a hopping organism can be closely associated with the motion of a thrown ball or a bullet shot out of a gun, and to the famous projectile motion concept given by the physicist and astronomer Galileo Galilei [8]. The range of the projectile, or of the grasshopper, can be deduced from Eq. 6:

D = (v² sin 2θ) / g   (6)


where D is the horizontal distance that any projectile in motion travels with take-off velocity v and take-off angle θ, and g is the acceleration due to gravity, i.e., 9.81 m/s². The key point derived from these kinematics is that if a projectile (the grasshopper in our case) launches at an angle of 45°, its range depends entirely on its take-off velocity, irrespective of the weight and size of the animal, and it is displaced the farthest; the range is maximum when sin 2θ = 1, i.e., when 2θ is 90°. This article hence proposes another nature-inspired phenomenon for local search, inspired by the jumping distance achieved by the grasshopper, and hybridizes it with the SMO algorithm. The location enhancement depends on a new criterion derived from the equation above and modified in a slightly different way, as shown in Eq. 7:

d'_best_j = √((d_best_j)² + (d_best_j − d_r_j)²) × sin 2θ   (7)

where r is an index selected randomly from the population, d'_best_j is the enhanced location of the best solution of the swarm, and θ is the angle of rotation, whose value varies between 0° and 360° and is calculated as:

θ = 10 × iter   (8)
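A quick numeric check of Eq. 6 (with an illustrative take-off speed) confirms that, for a fixed velocity, the range peaks at a 45° launch angle:

```python
# Eq. 6: projectile range versus launch angle at a fixed take-off speed.
import numpy as np

g, v = 9.81, 3.0                              # speed in m/s is illustrative
angles = np.deg2rad(np.arange(5, 90, 5))
ranges = v ** 2 * np.sin(2 * angles) / g
print(np.degrees(angles[np.argmax(ranges)]))  # -> 45.0, the optimal angle
```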

where iter is the current iteration of the local search. The value of T, i.e., the total number of iterations of the local search, is decided by the analysis discussed in the experimental settings of Sect. 4.1. The pseudo code of the newly presented Hopping local search strategy is depicted in Algorithm 1.

Algorithm 1 Hopping local search strategy (HLSS)
  Input the objective function Min f(x);
  Select the best solution dbest from the populace, which will enhance itself;
  Initialise the iteration counter iter = 0 and the total local search iterations T;
  while iter < T do
    Produce a new solution d'best using Algorithm 2;
    Evaluate the objective function value f(d'best);
    if f(d'best) < f(dbest) then
      dbest = d'best;
    end if
    iter = iter + 1;
  end while

Like the basic SMO, the HSMO algorithm is divided into the equivalent steps, i.e., the local leader step, the global leader step, the local and global leader learning steps, the local and global leader decision steps, and lastly the Hopping local search strategy. It is briefly summarized in Algorithm 3.


Algorithm 2 Generating a new solution
  Feed the best solution dbest from the populace;
  Pick a solution dr randomly from the populace;
  Initialise θ = 10 × iter; /* iter is the current iteration */
  for j = 1 to Dim do
    if RN(0, 1) < Pr then /* Pr is the perturbation rate */
      d'best_j = dbest_j
    else
      d'best_j = √((dbest_j)² + (dbest_j − dr_j)²) × sin 2θ;
    end if
  end for
  Return d'best

Algorithm 3 Hopping Spider Monkey Optimization algorithm (HSMO)
  while the termination criterion is not reached do
    Step 1. Local leader step.
    Step 2. Global leader step.
    Step 3. Learning step of local leader.
    Step 4. Learning step of global leader.
    Step 5. Decision step for local leader.
    Step 6. Decision step for global leader.
    Step 7. Hopping local search strategy using Algorithm 1.
  end while
  Output the best solution.

As Algorithm 3 depicts, the proposed strategy is executed after all steps of the SMO algorithm have run once. From here on, the best solution found so far is given more chances to enhance its quality by searching locally with an appropriate step size, making this an apt strategy for improving the exploitation and convergence properties of the SMO algorithm.
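Putting Algorithms 1 and 2 together, a hedged Python sketch of the hopping local search is shown below. The sphere objective, bounds, and population are illustrative stand-ins, and the per-dimension keep/hop rule follows Algorithm 2 (a dimension is kept when RN(0, 1) < Pr, otherwise hopped):

```python
# Hedged sketch of the Hopping local search strategy (Algorithms 1-2).
# The sphere objective and the bounds/population sizes are illustrative.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def hopping_local_search(swarm, f, T=36, pr=0.6, rng=None):
    """Refine the swarm's best solution by grasshopper-style hops (Eqs. 7-8)."""
    rng = rng or np.random.default_rng()
    best = min(swarm, key=f).copy()
    for it in range(1, T + 1):
        theta = np.deg2rad(10 * it)              # Eq. 8: theta = 10 * iter
        r = swarm[rng.integers(len(swarm))]      # randomly picked solution
        trial = best.copy()
        hop = rng.random(best.size) >= pr        # Algorithm 2: keep dim if RN < Pr
        trial[hop] = (np.sqrt(best[hop] ** 2 + (best[hop] - r[hop]) ** 2)
                      * np.sin(2 * theta))       # Eq. 7 on the hopped dimensions
        if f(trial) < f(best):                   # greedy selection
            best = trial
    return best

swarm = np.random.default_rng(0).uniform(-5.12, 5.12, (50, 30))
print(sphere(hopping_local_search(swarm, sphere)))  # fitness of refined best
```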

4 Test Problems Under Consideration

The proposed mechanism is analyzed on 15 distinct standard benchmark functions, P1 to P15 [2, 24], as illustrated in Table 1.

4.1 Experimental Results

A comparison has been made among HSMO, basic SMO [3], artificial bee colony (ABC) [14], differential evolution (DE) [18], particle swarm optimization (PSO) [15], and global best artificial bee colony (GABC) [9]. To test these algorithms on the respective problems, the following experimental settings have been adopted:


Table 1 Test problems

| Test problem | Objective function | Search range | D | AE |
|---|---|---|---|---|
| Sphere | P1(y) = Σ_{i=1}^{D} y_i² | [−5.12, 5.12] | 30 | 1.0E−05 |
| De Jong f4 | P2(y) = Σ_{i=1}^{D} i·y_i⁴ | [−5.12, 5.12] | 30 | 1.0E−05 |
| Griewank | P3(y) = 1 + (1/4000) Σ_{i=1}^{D} y_i² − Π_{i=1}^{D} cos(y_i/√i) | [−600, 600] | 30 | 1.0E−05 |
| Rastrigin | P4(y) = 10D + Σ_{i=1}^{D} [y_i² − 10 cos(2π y_i)] | [−5.12, 5.12] | 30 | 1.0E−05 |
| Ackley | P5(y) = −20 exp(−0.2 √((1/D) Σ y_i²)) − exp((1/D) Σ cos(2π y_i)) + 20 + e | [−30, 30] | 30 | 1.0E−05 |
| Alpine | P6(y) = Σ_{i=1}^{D} \|y_i sin y_i + 0.1 y_i\| | [−10, 10] | 30 | 1.0E−05 |
| Cosine mixture | P7(y) = Σ y_i² − 0.1 Σ cos(5π y_i) + 0.1D | [−1, 1] | 30 | 1.0E−05 |
| Salomon | P8(y) = 1 − cos(2π √(Σ y_i²)) + 0.1 √(Σ y_i²) | [−100, 100] | 30 | 1.0E−01 |
| Pathological | P9(y) = Σ_{i=1}^{D−1} [0.5 + (sin²(√(100y_i² + y_{i+1}²)) − 0.5)/(1 + 0.001(y_i² − 2y_i y_{i+1} + y_{i+1}²)²)] | [−100, 100] | 30 | 1.0E−01 |
| Neumaier 3 (NF3) | P10(y) = Σ_{i=1}^{D} (y_i − 1)² − Σ_{i=2}^{D} y_i y_{i−1} | [−900, 900] | 10 | 1.0E−01 |
| Levy Montalvo 1 | P11(y) = (π/D)[10 sin²(π x_1) + Σ_{i=1}^{D−1} (x_i − 1)²(1 + 10 sin²(π x_{i+1})) + (x_D − 1)²], x_i = 1 + (1/4)(y_i + 1) | [−10, 10] | 30 | 1.0E−05 |
| Levy Montalvo 2 | P12(y) = 0.1[sin²(3π y_1) + Σ_{i=1}^{D−1} (y_i − 1)²(1 + sin²(3π y_{i+1})) + (y_D − 1)²(1 + sin²(2π y_D))] | [−5, 5] | 30 | 1.0E−05 |
| Kowalik | P13(y) = Σ_{i=1}^{11} [a_i − y_1(b_i² + b_i y_2)/(b_i² + b_i y_3 + y_4)]² | [−5, 5] | 4 | 1.0E−05 |
| Six-hump camel back | P14(y) = (4 − 2.1y_1² + y_1⁴/3)y_1² + y_1 y_2 + (−4 + 4y_2²)y_2² | [−5, 5] | 2 | 1.0E−05 |
| Sinusoidal | P15(y) = −[A Π_{i=1}^{D} sin(y_i − z) + Π_{i=1}^{D} sin(B(y_i − z))], A = 2.5, B = 5, z = 30 | [0, 180] | 10 | 1.0E−02 |
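As a sanity check on Table 1, a few of the benchmarks written out in code (function names are ours; each evaluates to 0 at its global optimum y = 0):

```python
# Three of the Table 1 benchmarks written out (names are ours); each
# evaluates to 0.0 at its global optimum y = 0.
import numpy as np

def sphere(y):        # P1
    return float(np.sum(y ** 2))

def rastrigin(y):     # P4
    return float(10 * y.size + np.sum(y ** 2 - 10 * np.cos(2 * np.pi * y)))

def griewank(y):      # P3
    i = np.arange(1, y.size + 1)
    return float(1 + np.sum(y ** 2) / 4000 - np.prod(np.cos(y / np.sqrt(i))))

z = np.zeros(30)
print(sphere(z), rastrigin(z), griewank(z))   # 0.0 0.0 0.0
```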

• Estimated runs/simulation = 100.
• Size of population (Z) = 50.
• Maximum groups = Z/10.
• The total number of local search iterations T has been set to 36 as the termination criterion, based on a sensitivity analysis of the sum of success rate (SR), as shown in Fig. 1.
• The perturbation rate (Pr) has been chosen as 0.6 from the interval [0, 1], as it gives more promising results than the other values in the interval, as depicted in Fig. 2.

Fig. 1 Change in the sum of success rate (SR) with the total number of local search iterations T

Fig. 2 Effect of the perturbation rate (Pr) on the sum of success rate (SR)

• The other parametric settings for the rest of the competitors, SMO, ABC, DE, PSO, and GABC, are the same as in their original works.

The comparison among HSMO, SMO, ABC, DE, PSO, and GABC on the parameters mentioned above is presented in Table 2, from which it can be clearly seen that HSMO is superior to its competitors in terms of reliability, efficacy, and accuracy. The convergence speeds of the algorithms are compared via the average number of function evaluations (AFEs): the higher the convergence speed, the lower the AFE. To curb the impact of the algorithms' stochasticity, the function evaluations for each problem are averaged over 100 runs. To contrast the convergence speeds further, another measure called the acceleration rate (ACR) is considered, which is based on the AFEs of the compared algorithms:

ACR = AFE_ALGO / AFE_HSMO   (9)

where ALGO ∈ {SMO, ABC, DE, PSO, GABC}; an acceleration rate > 1 shows that HSMO is comparatively faster (see the snippet below).
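Eq. 9 in code, using the P1 AFEs from Table 2, reproduces the corresponding Table 3 entry:

```python
# Eq. 9 on the P1 row of Table 2: HSMO versus SMO.
afe_hsmo, afe_smo = 502.20, 12642.30
acr = afe_smo / afe_hsmo        # ACR = AFE_ALGO / AFE_HSMO
print(round(acr, 4))            # 25.1738, matching Table 3
```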


Table 2 Contrasting results on the selected problems

| Problem | Algorithm | SD | ME | AFE | SR |
|---|---|---|---|---|---|
| P1 | HSMO | 2.58E−06 | 2.22E−06 | 502.20 | 100 |
| P1 | SMO | 8.37E−07 | 8.87E−06 | 12,642.30 | 100 |
| P1 | ABC | 2.02E−06 | 8.17E−06 | 20,409 | 100 |
| P1 | DE | 8.24E−07 | 9.06E−06 | 22,444 | 100 |
| P1 | PSO | 5.31E−07 | 9.41E−06 | 37,880 | 100 |
| P1 | GABC | 1.81E−06 | 8.11E−06 | 14,347.50 | 100 |
| P2 | HSMO | 1.06E−06 | 1.66E−07 | 354.33 | 100 |
| P2 | SMO | 1.20E−06 | 8.49E−06 | 10,725.66 | 100 |
| P2 | ABC | 3.11E−06 | 4.90E−06 | 9578.50 | 100 |
| P2 | DE | 8.51E−07 | 9.01E−06 | 20,859.50 | 100 |
| P2 | PSO | 9.55E−07 | 9.02E−06 | 32,529.50 | 100 |
| P2 | GABC | 2.72E−06 | 5.51E−06 | 8388 | 100 |
| P3 | HSMO | 1.46E−06 | 3.46E−07 | 848.16 | 100 |
| P3 | SMO | 4.21E−03 | 1.73E−03 | 67,200.59 | 84 |
| P3 | ABC | 1.42E−03 | 2.53E−04 | 41,977.53 | 97 |
| P3 | DE | 5.00E−03 | 1.71E−03 | 55,540 | 86 |
| P3 | PSO | 7.46E−03 | 4.59E−03 | 138,821 | 64 |
| P3 | GABC | 2.79E−06 | 6.37E−06 | 31,851.45 | 100 |
| P4 | HSMO | 1.92E−06 | 5.34E−07 | 848.16 | 100 |
| P4 | SMO | 1.56E−06 | 8.24E−06 | 96,073.45 | 100 |
| P4 | ABC | 3.15E−06 | 5.28E−06 | 49,063 | 100 |
| P4 | DE | 4.63E+00 | 1.49E+01 | 200,000 | 0 |
| P4 | PSO | 1.20E+01 | 3.80E+01 | 250,050 | 0 |
| P4 | GABC | 3.07E−06 | 6.13E−06 | 34,388.50 | 100 |
| P5 | HSMO | 0.00E+00 | 4.44E−16 | 850.95 | 100 |
| P5 | SMO | 9.32E−07 | 9.26E−06 | 32,438.70 | 100 |
| P5 | ABC | 1.46E−06 | 8.43E−06 | 49,274.50 | 100 |
| P5 | DE | 4.42E−07 | 9.46E−06 | 43,100.50 | 100 |
| P5 | PSO | 3.46E−07 | 9.61E−06 | 77,801.50 | 100 |
| P5 | GABC | 1.26E−06 | 8.76E−06 | 30,501 | 100 |
| P6 | HSMO | 4.97E−07 | 5.00E−08 | 873.27 | 100 |
| P6 | SMO | 1.38E−06 | 9.56E−06 | 75,407.41 | 98 |
| P6 | ABC | 5.69E−06 | 8.87E−06 | 78,781.50 | 95 |
| P6 | DE | 4.48E−07 | 9.50E−06 | 61,895.50 | 100 |
| P6 | PSO | 4.00E−07 | 9.58E−06 | 94,751.50 | 100 |
| P6 | GABC | 2.18E−06 | 7.97E−06 | 57,913.04 | 100 |
| P7 | HSMO | 2.73E−06 | 4.46E−06 | 351.54 | 100 |
| P7 | SMO | 4.80E−02 | 1.77E−02 | 62,144.30 | 88 |
| P7 | ABC | 2.16E−06 | 7.66E−06 | 23,105.50 | 100 |
| P7 | DE | 4.72E−02 | 1.33E−02 | 37,386 | 92 |
| P7 | PSO | 7.64E−02 | 3.70E−02 | 84,845.50 | 79 |
| P7 | GABC | 2.03E−06 | 8.02E−06 | 15,366.50 | 100 |
| P8 | HSMO | 3.31E−02 | 3.31E−02 | 499.41 | 100 |
| P8 | SMO | 3.36E−02 | 1.87E−01 | 191,407.68 | 13 |
| P8 | ABC | 1.37E−01 | 1.11E+00 | 100,004.24 | 0 |
| P8 | DE | 3.46E−02 | 1.88E−01 | 195,073 | 9 |
| P8 | PSO | 4.93E−02 | 2.76E−01 | 250,050 | 0 |
| P8 | GABC | 1.13E−01 | 7.19E−01 | 250,061.46 | 0 |
| P9 | HSMO | 5.75E−07 | 5.78E−08 | 873.27 | 100 |
| P9 | SMO | 6.29E−01 | 3.29E+00 | 203,631.03 | 0 |
| P9 | ABC | 3.54E−01 | 3.60E+00 | 100,000.31 | 0 |
| P9 | DE | 1.66E+00 | 3.76E+00 | 185,708.50 | 8 |
| P9 | PSO | 1.19E−01 | 2.74E+00 | 250,050 | 0 |
| P9 | GABC | 4.52E−01 | 2.26E+00 | 250,012.17 | 0 |
| P10 | HSMO | 2.05E−06 | 9.12E−06 | 258,679.66 | 100 |
| P10 | SMO | 6.56E−06 | 1.13E−05 | 279,065.61 | 85 |
| P10 | ABC | 5.31E+00 | 4.34E+00 | 300,003.13 | 0 |
| P10 | DE | 1.50E−06 | 8.05E−06 | 276,422.50 | 100 |
| P10 | PSO | 4.86E−07 | 9.46E−06 | 68,807.50 | 100 |
| P10 | GABC | 8.68E−01 | 7.67E−01 | 250,018.77 | 0 |
| P11 | HSMO | 8.16E−07 | 9.00E−06 | 32,197.94 | 100 |
| P11 | SMO | 1.45E−02 | 2.08E−03 | 17,328.72 | 98 |
| P11 | ABC | 2.42E−06 | 7.36E−06 | 19,406.50 | 100 |
| P11 | DE | 9.48E−07 | 9.02E−06 | 32,950 | 100 |
| P11 | PSO | 6.73E−07 | 9.24E−06 | 34,167.50 | 100 |
| P11 | GABC | 2.11E−06 | 7.66E−06 | 33,161 | 100 |
| P12 | HSMO | 1.46E−06 | 8.67E−06 | 49,722.39 | 100 |
| P12 | SMO | 1.09E−03 | 1.18E−04 | 18,856.32 | 99 |
| P12 | ABC | 2.15E−06 | 7.58E−06 | 49,720.50 | 100 |
| P12 | DE | 1.87E−03 | 3.38E−04 | 50,989 | 97 |
| P12 | PSO | 3.29E−03 | 1.11E−03 | 57,541.50 | 90 |
| P12 | GABC | 2.30E−06 | 7.05E−06 | 14,461 | 100 |
| P13 | HSMO | 1.88E−05 | 8.28E−05 | 54,773.52 | 100 |
| P13 | SMO | 1.34E−04 | 1.11E−04 | 55,983.70 | 96 |
| P13 | ABC | 1.03E−04 | 2.31E−04 | 93,117.52 | 15 |
| P13 | DE | 3.32E−04 | 2.39E−04 | 46,592.50 | 79 |
| P13 | PSO | 8.37E−05 | 9.69E−05 | 57,529 | 99 |
| P13 | GABC | 1.91E−05 | 8.19E−05 | 97,266.98 | 99 |
| P14 | HSMO | 1.43E−05 | 1.81E−05 | 307,864.18 | 46 |
| P14 | SMO | 1.44E−05 | 1.85E−05 | 121,234.12 | 42 |
| P14 | ABC | 1.12E−05 | 1.27E−05 | 971.01 | 100 |
| P14 | DE | 1.42E−05 | 1.67E−05 | 100,758.50 | 50 |
| P14 | PSO | 1.15E−05 | 1.48E−05 | 101,920.50 | 60 |
| P14 | GABC | 1.47E−05 | 1.68E−05 | 319,914.72 | 49 |
| P15 | HSMO | 2.96E−03 | 7.71E−03 | 275,566.50 | 94 |
| P15 | SMO | 4.87E−03 | 1.04E−02 | 275,553.13 | 66 |
| P15 | ABC | 2.06E−03 | 8.08E−03 | 303,984.65 | 95 |
| P15 | DE | 2.53E−01 | 4.58E−01 | 298,882.50 | 2 |
| P15 | PSO | 2.83E−01 | 3.74E−01 | 212,422 | 25 |
| P15 | GABC | 2.35E−03 | 7.90E−03 | 249,857.62 | 99 |

The acceleration rate is evaluated between HSMO and each of SMO, ABC, DE, PSO, and GABC using Eq. 9. The compared results are charted in Table 3, depicting HSMO to be the fastest among all the competitors under consideration for most of the test functions. In addition, box plots of the AFEs of all the considered algorithms, namely HSMO, SMO, ABC, DE, PSO, and GABC, are drawn to depict the empirical distribution of the data pictorially; the diagram is shown in Fig. 3. It is noticeable from the box plot that the medians and interquartile ranges of HSMO are comparatively smaller, so it can be concluded that HSMO is more cost-efficient than SMO, ABC, DE, PSO, and GABC. Apart from this, to investigate whether the obtained results differ significantly, a non-parametric statistical test known as the Mann–Whitney U test [16] has been conducted to compare the non-Gaussian data. A 5% significance level, i.e., β = 0.05, is taken for the comparisons HSMO–SMO, HSMO–ABC, HSMO–DE, HSMO–PSO, and HSMO–GABC. The outcomes of the MWU analysis for the AFEs of the hundred estimated runs are tabulated in Table 4. We first check whether there is any significant difference between the data sets; the cases where HSMO required fewer or more AFEs than a competitor are marked with '+' and '−' respectively in Table 4.


Table 3 Acceleration rate (ACR) of HSMO as contrasted with SMO, ABC, DE, PSO, and GABC

| Problem | SMO | ABC | DE | PSO | GABC |
|---|---|---|---|---|---|
| P1 | 25.1738 | 40.6392 | 44.6914 | 75.4281 | 28.5693 |
| P2 | 30.2703 | 27.0327 | 58.8703 | 91.8057 | 23.6728 |
| P3 | 79.2310 | 49.4925 | 65.4829 | 163.6731 | 37.5536 |
| P4 | 113.2728 | 57.8464 | 235.8046 | 294.8147 | 40.5448 |
| P5 | 38.1206 | 57.9053 | 50.6499 | 91.4290 | 35.8435 |
| P6 | 86.3506 | 90.2144 | 70.8778 | 108.5020 | 66.3175 |
| P7 | 176.7773 | 65.7265 | 106.3492 | 241.3538 | 43.7120 |
| P8 | 383.2676 | 200.2448 | 390.6069 | 500.6908 | 500.7138 |
| P9 | 233.1822 | 114.5125 | 212.6587 | 286.3376 | 286.2942 |
| P10 | 1.0788 | 1.1597 | 1.0686 | 0.2660 | 0.9665 |
| P11 | 0.5382 | 0.6027 | 1.0234 | 1.0612 | 1.0299 |
| P12 | 0.3792 | 1.0000 | 1.0255 | 1.1573 | 0.2908 |
| P13 | 1.0221 | 1.7000 | 0.8506 | 1.0503 | 1.7758 |
| P14 | 0.3938 | 0.0032 | 0.3273 | 0.3311 | 1.0391 |
| P15 | 1.0000 | 1.1031 | 1.0846 | 0.7709 | 0.9067 |

Fig. 3 Boxplot graph for the AFEs (y-axis: average number of function evaluations, ×10⁵; x-axis: HSMO, SMO, ABC, DE, PSO, GBEST_ABC)

Table 4 clearly indicates that, out of 15 problems, HSMO performed better on 12 problems compared with basic SMO, 13 compared with ABC, 13 compared with DE, 12 compared with PSO, and 12 compared with GABC, making HSMO a really good strategy to work with.

5 Conclusion

This paper proposed an innovative variant of SMO, named Hopping Spider Monkey Optimization (HSMO), which offers a new technique based on the jumping mechanism of the grasshopper. To testify to the actual potential of the newly proposed strategy, it was graded experimentally on 15 standard benchmark problems.


Table 4 AFE- and MWU-test-based comparison ('+' indicates HSMO is better, '−' symbolizes that HSMO is worse, and '=' symbolizes that there is no distinctive difference)

| Problem | HSMO vs SMO | HSMO vs ABC | HSMO vs DE | HSMO vs PSO | HSMO vs GABC |
|---|---|---|---|---|---|
| P1 | + | + | + | + | + |
| P2 | + | + | + | + | + |
| P3 | + | + | + | + | + |
| P4 | + | + | + | + | + |
| P5 | + | + | + | + | + |
| P6 | + | + | + | + | + |
| P7 | + | + | + | + | + |
| P8 | + | + | + | + | + |
| P9 | + | + | + | + | + |
| P10 | + | + | + | − | − |
| P11 | − | − | + | + | + |
| P12 | − | + | + | + | − |
| P13 | + | + | − | + | + |
| P14 | − | − | − | − | + |
| P15 | + | + | + | − | − |
| Total positive signs | 12 | 13 | 13 | 12 | 12 |

When hybridized with the SMO algorithm, the proposed strategy greatly helped in improving the exploitation capability and also maintained the convergence speed to a great extent. The evaluation of the results leads to the conclusion that HSMO may prove to be a good alternative for generating timely solutions to convoluted optimization problems, making the newly proposed variant an advantageous strategy.

References

1. Al-Azza, A.A., Al-Jodah, A.A., Harackiewicz, F.J.: Spider monkey optimization: a novel technique for antenna optimization. IEEE Antennas Wirel. Propag. Lett. 15, 1016–1019 (2015)
2. Ali, M.M., Khompatraporn, C., Zabinsky, Z.B.: A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Global Optim. 31(4), 635–672 (2005)
3. Bansal, J.C., Sharma, H., Jadon, S.S., Clerc, M.: Spider monkey optimization algorithm for numerical optimization. Memet. Comput. 6(1), 31–47 (2014)
4. Bennet-Clark, H.C.: The energetics of the jump of the locust Schistocerca gregaria. J. Exp. Biol. 63(1), 53–83 (1975)
5. Biewener, A., Patek, S.: Animal Locomotion. Oxford University Press, Oxford (2018)
6. Burrows, M.: Motor patterns during kicking movements in the locust. J. Comp. Physiol. A 176(3), 289–305 (1995)


7. Burrows, M., Morris, G.: The kinematics and neural control of high-speed kicking movements in the locust. J. Exp. Biol. 204(20), 3471–3481 (2001)
8. Erlichson, H.: How Galileo solved the problem of maximum projectile range without the calculus. Eur. J. Phys. 19(3), 251 (1998)
9. Gao, W., Liu, S., Huang, L.: A global best artificial bee colony algorithm for global optimization. J. Comput. Appl. Math. 236(11), 2741–2753 (2012)
10. Gupta, K., Deep, K., Bansal, J.C.: Improving the local search ability of spider monkey optimization algorithm using quadratic approximation for unconstrained optimization. Comput. Intell. 33(2), 210–240 (2017)
11. Hall, A.R.: Isaac Newton: Adventurer in Thought, vol. 4. Cambridge University Press, Cambridge (1996)
12. Heitler, W.J., Burrows, M.: The locust jump. I. The motor programme. J. Exp. Biol. 66(1), 203–219 (1977)
13. Hill, A.V.: The heat of shortening and the dynamic constants of muscle. Proc. R. Soc. Lond. Ser. B: Biol. Sci. 126(843), 136–195 (1938)
14. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University, Engineering Faculty, Computer… (2005)
15. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95—International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
16. McKnight, P.E., Najab, J.: Mann-Whitney U test. The Corsini Encyclopedia of Psychology, p. 1 (2010)
17. Offenbacher, E.L.: Physics and the vertical jump. Am. J. Phys. 38(7), 829–836 (1970)
18. Qin, A.K., Huang, V.L., Suganthan, P.N.: Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 13(2), 398–417 (2008)
19. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Optimal design of PIDA controller for induction motor using spider monkey optimization algorithm. Int. J. Metaheuristics 5(3–4), 278–290 (2016)
20. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Fibonacci series-based local search in spider monkey optimisation for transmission expansion planning. Int. J. Swarm Intell. 3(2–3), 215–237 (2017)
21. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Power law-based local search in spider monkey optimisation for lower order system modelling. Int. J. Syst. Sci. 48(1), 150–160 (2017)
22. Sharma, A., Sharma, H., Bhargava, A., Sharma, N., Bansal, J.C.: Optimal power flow analysis using lévy flight spider monkey optimisation algorithm. Int. J. Artif. Intell. Soft Comput. 5(4), 320–352 (2016)
23. Sharma, A., Sharma, H., Bhargava, A., Sharma, N., Bansal, J.C.: Optimal placement and sizing of capacitor using limaçon inspired spider monkey optimization algorithm. Memetic Comput. 9(4), 311–331 (2017)
24. Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.P., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005005 (2005)
25. Vogel, S.: Comparative Biomechanics: Life's Physical World. Princeton University Press, Princeton (2013)

Author Index

A Abbas, S. A., 1 Abdul Nazeer, K. A., 575 Afshar Alam, M., 383 Agarwal, Akshat, 759 Agarwal, Ritu, 585 Agarwal, Varun, 495 Ahmed, Faisal, 61 Akshada Muneshwar, 713 Angamuthu, A., 633 AnuShyni, S. K., 633 Arya, Rakesh Kumar, 415 Ashok, Alaknanda, 443 Aung, Tin Tun, 125 Aziz, Mohd Hairy, 15

B Balaji, Aparajit, 773 Balouchzahi, F., 75 Bandyopadhyay, Mainak, 281, 357 Bazilevych, Kseniia, 49 Bekeneva, Yana, 35 Bermúdez, Danay Vanessa Pardo, 115 Bhamare, Gaurav K., 735 Bhargavi, Mattaparti Satya, 701 Bhati, Gaurav Singh, 269 Bibal Benifa, J. V., 701

C Castañeda, Miguel Angel Polo, 115 Chandrasekaran, K., 293 Chaudhari, Narhari D., 735 Chaudhari, Neha N., 735 Chernokulsky, Vladimir, 125

Chilamkuri, Pranaya, 431 Chittawadigi, Rajeevlochana G., 673, 687 Chouhan, Lokesh, 237 Chumachenko, Dmytro, 49

D Dandagi, Vidya S., 453 Dannana, Dimple, 673 Dao, Thi-Kien, 89, 139 Dash, Anjan Kumar, 151, 773 Deeksha, 237 Deore, Harshal, 759 Dev Choudhury, Nalin B., 223 Dinakar, Andukuri, 745 Dogra, Ayush, 647 Dutta, Maitreyee, 179 Dutt Mishra, Siddheshwari, 179

E Evnevich, E. L., 1

G Ganapathy, Sannasi, 189 Ganesh, Apparaju S. D., 687 Ganesh, M. A., 547 Garg, Akhil Ranjan, 269 Geetha, V. H., 315 Gopakumar, G., 169 Gope, Sadhan, 223 Goyal, Bhawna, 647 Gupta, Roopam, 415

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 H. Sharma et al. (eds.), Congress on Intelligent Systems, Advances in Intelligent Systems and Computing 1335, https://doi.org/10.1007/978-981-33-6984-9


H Hanot, Rahul, 253 Harikrishnan, V. K., 759 Hemakumar Reddy, Galiveeti, 223 Hirakawa, Rin, 25 Hu, Rong, 139

I Idrus, Syed Zulkarnain Syed, 15 Indu, M. T., 329 Isravel, Deva Priya, 511 Ivan Daniels, D., 687

J Jaiswal, Rishav, 701 Jameel, Roshan, 383 Jamlos, Mohd Aminudin, 15 Janamala, Varaprasad, 467 Jaya Lakshmi, K., 713 JeenBritto, M., 633 Jiang, Shi-Jie, 89 Ji, Xiao-Rong, 89 Jothi Sivam, V. R., 547

K Kamal Kumar, U., 467 Kaur, Bobbinpreet, 647 Kaur, Harleen, 383 Kawano, Hideaki, 25 Kedia, Akanksha, 223 Khairunizam, Wan, 15 Khamayseh, Faisal, 99 Kishore, P., 151 Kodali, Prakash, 745 Kolluri, Priyanka, 431 Kotecha, Ketan, 305 Kritika, 585 Kumar, Ashish, 575 Kumar Goswami, Arup, 223 Kumar, K. Dilip, 151 Kumar, Rahul, 407 Kumar, Rajesh, 725 Kumar, Raman, 397 Kumar, Sanjay, 253 Kumar, Shushant, 293 Kumar, Sravan, 745 Kuruganti, Yashaswi S., 687

L Lal, Sonu, 415

Lata, Navdeep, 397 Lekshmy, P. C., 575 Lijiya, A., 369

M Maddi, Raghuveer, 673 Manikandan, P., 161 Manjula, S., 161 Manne, Uday, 673 Marndi, Ashapurna, 209 Mathew, G. Philip, 161 Meniailov, Ievgen, 49 Mihara, Yujiro, 25 More, Raju, 223 Mugunthan, Dheerkesh, 151 Muneeswaran, V., 565 Mustafa, Wan Azani, 15

N NagaDeepa, Choppakatla, 431 Nagaraj, P., 565 Nair, Manjusha, 169 Nakashi, Kenichi, 25 Nakatoh, Yoshihisa, 25 Nanda, Satyasai Jagannath, 659 Narayana, Jetti B., 659 Ngo, Truong-Giang, 89, 139 Nguyen, Thi-Xuan-Huong, 139 Nguyen, Trong-The, 89, 139 Novikova, Evgenia, 35

O Ovhal, Prasad, 535

P Padalko, Halyna, 49 Padmaja, V., 431 Pandey, Amit, 725 Pandey, Arun, 281, 357 Pandya, Vivek, 281, 357 Pasupuleti, Devasena, 673 Patra, G. K., 209 Pawar, Ambika, 305 Polepogu, Rajesh, 599 Prabanchan, V., 161 Pradhan, Sushant, 407 Pragallapati, Akhil, 151 Prakash, Jay, 407 Prakash, K., 713 Prakash, Srishti, 745


Pranav, J., 565 Pratap Singh, Pranay, 523 Prome, Junnatul Ferdouse, 61 Puri, Shalini, 341, 613 Purohit, Sunil Dutt, 585 Puzhakkal, Niyas, 575

Someshwara, A. L., 565 Soni, Karan, 369 Sowmya, B. J., 315 Sruthi, C. J., 369 Suriya, D. B., 773 Swetha Sree, T. S., 315

R Raja, Linesh, 725 Raja Sree, T., 485 Rajsingh, Elijah Blessing, 511 Raju, Pavan, 759 Ramakrishna Gunaga, Shruti, 223 Ramesh, Aniirudh, 151 Ranjan Tewari, Rajiv, 495 Rebaka, Tejaswi, 407 Reddy, D. Ramesh, 745 Reddy, Saichethan Miriyala, 305 Rohani, Mohamad Nur Khairul Hafizi, 15 Rosemary Binoy, M., 315

T Tadele, Tesfaye, 725 Thaher, Thaer, 99 Thaw, Aung Myo, 125 Tran, Huu-Trung, 89 Tripathi, Abhay Anand, 305 Tsarenko, Olga, 35 Tyagi, Ekta, 237 Tyagi, Kanishk, 305

S Sabik Ali, R., 565 Sandhya Devi, R. S., 633 Sangeeth Kumar, T., 565 Sankar, P. Ravi, 713 Saraswat, Mukesh, 443 Sasidharan Rajeswari, Sreeja, 169 Seema, S., 315 Sharma, Harish, 783 Sharma, Hitesh Omprakash, 523 Sharma, Nirmala, 783 Sharma, Tripti, 647 Shashirekha, H. L., 75 Shetty, Chetan, 315 Shukla, Urvashi Prakash, 659 Shukla, Vipin, 281, 357 Shultana, Shahana, 61 Shunmuga Velayutham, C., 329 Sidnal, Nandini, 453 Silas, Salaja, 511 Singh, Meghna, 783 Singh, Roop, 443 Sinwar, Deepak, 725 Sivakumar, P., 633

V Vaegae, Naveen Kumar, 599 Valadi, Jayaraman K., 535 Venkata Ratnam, A., 713 Venkat, K., 773 Vigneshwaran, K., 547 Vijayakumar, D. Sudaroli, 189 Villota, Constanza Ricaurte, 115 Vinoth Kumar, B., 633 Vodyaho, A. I., 1

U Ujwala, G., 315

W Wang, Hong-Jiang, 139

Y Yakovlev, Sergiy, 49 Yasmin, Afrida, 61

Z Zhukova, N. A., 1 Zhukova, Nataly, 125